The full dataset viewer is not available; only a preview of the rows is shown below.
Error code: DatasetGenerationError
Exception: TypeError
Message: Couldn't cast array of type
struct<conference: string, category: string, sheet: string, accepted_tags: string, authors_included: bool, year: int64, cutoff_period: string, track: string, conference_group: string, source_split: string, source_dataset: string>
to
{'conference': Value('string'), 'category': Value('string'), 'sheet': Value('string'), 'accepted_tags': Value('string'), 'authors_included': Value('bool'), 'year': Value('int64'), 'cutoff_period': Value('string'), 'source_csv': Value('string'), 'url': Value('string'), 'pdf_url': Value('string'), 'track': Value('string'), 'conference_group': Value('string'), 'tags': Value('string')}
Traceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 714, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2224, in cast_table_to_schema
cast_array_to_feature(
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 1795, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2092, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
TypeError: Couldn't cast array of type
struct<conference: string, category: string, sheet: string, accepted_tags: string, authors_included: bool, year: int64, cutoff_period: string, track: string, conference_group: string, source_split: string, source_dataset: string>
to
{'conference': Value('string'), 'category': Value('string'), 'sheet': Value('string'), 'accepted_tags': Value('string'), 'authors_included': Value('bool'), 'year': Value('int64'), 'cutoff_period': Value('string'), 'source_csv': Value('string'), 'url': Value('string'), 'pdf_url': Value('string'), 'track': Value('string'), 'conference_group': Value('string'), 'tags': Value('string')}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1339, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 972, in convert_to_parquet
builder.download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 894, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 970, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
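The cast fails because the preview rows carry a `metadata` struct whose fields do not match the features declared for the dataset: the incoming struct has `source_split` and `source_dataset`, while the declared schema instead expects `source_csv`, `url`, `pdf_url`, and `tags`, and PyArrow refuses to cast between the two struct types. A minimal sketch of one local workaround, assuming the offending split has been exported to a placeholder file `rows.jsonl`, is to normalize every row's metadata onto the declared field set; the proper fix is to regenerate the mismatched split (or update the declared features) so all splits share one schema.

```python
# Minimal sketch (not the dataset's own loading code): normalize each row's
# `metadata` struct onto the declared schema so every split shares one set of
# fields. `rows.jsonl` is a placeholder for a local export of the bad split.
from datasets import load_dataset

EXPECTED_FIELDS = [
    "conference", "category", "sheet", "accepted_tags", "authors_included",
    "year", "cutoff_period", "source_csv", "url", "pdf_url", "track",
    "conference_group", "tags",
]

def align_metadata(example):
    # Keep declared fields (filling absent ones with None) and drop extras
    # such as source_split/source_dataset that the schema does not declare.
    md = example["metadata"]
    example["metadata"] = {key: md.get(key) for key in EXPECTED_FIELDS}
    return example

ds = load_dataset("json", data_files="rows.jsonl", split="train")
ds = ds.map(align_metadata)
```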
Columns: question (string) | choices (list) | answer (string) | context (string) | metadata (dict)
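For reference, the declared row schema can be written out explicitly with `datasets.Features`. The sketch below reconstructs it from the error message and the column header above; treating `choices` as a sequence of strings is an assumption, since the preview only labels it "list".

```python
# Hedged reconstruction of the declared features: metadata fields are taken
# verbatim from the cast error above; the `choices` element type is assumed.
from datasets import Features, Sequence, Value

features = Features({
    "question": Value("string"),
    "choices": Sequence(Value("string")),
    "answer": Value("string"),
    "context": Value("string"),
    "metadata": {
        "conference": Value("string"),
        "category": Value("string"),
        "sheet": Value("string"),
        "accepted_tags": Value("string"),
        "authors_included": Value("bool"),
        "year": Value("int64"),
        "cutoff_period": Value("string"),
        "source_csv": Value("string"),
        "url": Value("string"),
        "pdf_url": Value("string"),
        "track": Value("string"),
        "conference_group": Value("string"),
        "tags": Value("string"),
    },
})
```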

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: A Theory of Response Sampling in LLMs: Part Descriptive and Part Prescriptive
Abstract: Large Language Models (LLMs) are increasingly utilized in autonomous decision-making, where they sample options from vast action spaces. However, the heuristics that guide this sampling process remain under-explored. We study this sampling behavior and show that the underlying heuristic resembles that of human decision-making: it comprises a descriptive component (reflecting the statistical norm) and a prescriptive component (an implicit ideal encoded in the LLM) of a concept. We show that this deviation of a sample from the statistical norm towards the prescriptive component consistently appears in concepts across diverse real-world domains like public health and economic trends. To further illustrate the theory, we demonstrate that concept prototypes in LLMs are affected by prescriptive norms, similar to the concept of normality in humans. Through case studies and comparison with human studies, we illustrate that in real-world applications, the shift of samples toward an ideal value in LLMs’ outputs can result in significantly biased decision-making, raising ethical concerns.
metadata:
{
"conference": "ACL 2025",
"category": "Best paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1454",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs
Abstract: Algorithmic fairness has conventionally adopted the mathematically convenient perspective of racial color-blindness (i.e., difference unaware treatment). However, we contend that in a range of important settings, group difference awareness matters. For example, differentiating between groups may be necessary in legal contexts (e.g., the U.S. compulsory draft applies to men but not women) and harm assessments (e.g., referring to girls as “terrorists” may be less harmful than referring to Muslim people as such). Thus, in contrast to most fairness work, we study fairness through the perspective of treating people differently — when it is contextually appropriate to. We first introduce an important distinction between descriptive (fact-based), normative (value-based), and correlation (association-based) benchmarks. This distinction is significant because each category requires separate interpretation and mitigation tailored to its specific characteristics. Then, we present a benchmark suite composed of eight different scenarios for a total of 16k questions that enables us to assess difference awareness. Finally, we show results across ten models that demonstrate difference awareness is a distinct dimension to fairness where existing bias mitigation strategies may backfire.
metadata:
{
"conference": "ACL 2025",
"category": "Best paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.341",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: Language Models Resist Alignment: Evidence From Data Compression
Abstract: Large language models (LLMs) may exhibit unintended or undesirable behaviors. Recent works have concentrated on aligning LLMs to mitigate harmful outputs. Despite these efforts, some anomalies indicate that even a well-conducted alignment process can be easily circumvented, whether intentionally or accidentally. Does alignment fine-tuning have robust effects on models, or are its impacts merely superficial? In this work, we make the first exploration of this phenomenon from both theoretical and empirical perspectives. Empirically, we demonstrate the elasticity of post-alignment models, i.e., the tendency to revert to the behavior distribution formed during the pre-training phase upon further fine-tuning. Leveraging compression theory, we formally deduce that fine-tuning disproportionately undermines alignment relative to pre-training, potentially by orders of magnitude. We validate the presence of elasticity through experiments on models of varying types and scales. Specifically, we find that model performance declines rapidly before reverting to the pre-training distribution, after which the rate of decline drops significantly. Furthermore, we reveal that elasticity positively correlates with increased model size and the expansion of pre-training data. Our findings underscore the need to address the inherent elasticity of LLMs to mitigate their resistance to alignment.
metadata:
{
"conference": "ACL 2025",
"category": "Best paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1141",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
Abstract: Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses significant computational challenges. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively trained Sparse Attention mechanism that integrates algorithmic innovations with hardware-aligned optimizations to achieve efficient long-context modeling. NSA employs a dynamic hierarchical sparse strategy, combining coarse-grained token compression with fine-grained token selection to preserve both global context awareness and local precision. Our approach advances sparse attention design with two key innovations: (1) We achieve substantial speedups through arithmetic intensity-balanced algorithm design, with implementation optimizations for modern hardware. (2) We enable end-to-end training, reducing pretraining computation without sacrificing model performance. As shown in Figure 1, experiments show the model pretrained with NSA maintains or exceeds Full Attention models across general benchmarks, long-context tasks, and instruction-based reasoning. Meanwhile, NSA achieves substantial speedups over Full Attention on 64k-length sequences across decoding, forward propagation, and backward propagation, validating its efficiency throughout the model lifecycle.
Authors: Jingyang Yuan, Huazuo Gao, Damai Dai, Junyu Luo, Liang Zhao, Zhengyan Zhang, Zhenda Xie, Yuxing Wei, Lean Wang, Zhiping Xiao, Yuqing Wang, Chong Ruan, Ming Zhang, Wenfeng Liang, Wangding Zeng
metadata:
{
"conference": "ACL 2025",
"category": "Best paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1126",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering Benchmark Dataset
Abstract: Recent advancements in large language model (LLM) performance on medical multiple-choice question (MCQ) benchmarks have stimulated interest from healthcare providers and patients globally. Particularly in low- and middle-income countries (LMICs) facing acute physician shortages and a lack of specialists, LLMs offer a potentially scalable pathway to enhance healthcare access and reduce costs. However, their effectiveness in the Global South, especially across the African continent, remains to be established. In this work, we introduce AfriMed-QA, the first large-scale Pan-African English multi-specialty medical Question-Answering (QA) dataset, with 15,000 questions (open and closed-ended) sourced from over 60 medical schools across 16 countries, covering 32 medical specialties. We further evaluate 30 LLMs across multiple axes including correctness and demographic bias. Our findings show significant performance variation across specialties and geographies; MCQ performance clearly lags USMLE (MedQA). We find that biomedical LLMs underperform general models and that smaller edge-friendly LLMs struggle to achieve a passing score. Interestingly, human evaluations show a consistent consumer preference for LLM answers and explanations when compared with clinician answers.
metadata:
{
"conference": "ACL 2025",
"category": "Best Social Impact Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.96",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: The AI Gap: How Socioeconomic Status Affects Language Technology Interactions
Abstract: Socioeconomic status (SES) fundamentally influences how people interact with each other and, more recently, with digital technologies like large language models (LLMs). While previous research has highlighted the interaction between SES and language technology, it was limited by reliance on proxy metrics and synthetic data. We survey 1,000 individuals from diverse socioeconomic backgrounds about their use of language technologies and generative AI, and collect 6,482 prompts from their previous interactions with LLMs. We find systematic differences across SES groups in language technology usage (i.e., frequency, performed tasks), interaction styles, and topics. Higher SES entails a higher level of abstraction, more concisely conveyed requests, and topics like ‘inclusivity’ and ‘travel’. Lower SES correlates with higher anthropomorphization of LLMs (using “hello” and “thank you”) and more concrete language. Our findings suggest that while generative language technologies are becoming more accessible to everyone, socioeconomic linguistic differences still stratify their use, creating a digital divide. These differences underscore the importance of considering SES in developing language technologies to accommodate varying linguistic needs rooted in socioeconomic factors and to limit the AI Gap across SES groups.
metadata:
{
"conference": "ACL 2025",
"category": "Best Social Impact Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.914",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: Are Rules Meant to be Broken? Understanding Multilingual Moral Reasoning as a Computational Pipeline with UniMoral
Abstract: Moral reasoning is a complex cognitive process shaped by individual experiences and cultural contexts and presents unique challenges for computational analysis. While natural language processing (NLP) offers promising tools for studying this phenomenon, current research lacks cohesion, employing discordant datasets and tasks that examine isolated aspects of moral reasoning. We bridge this gap with UniMoral, a unified dataset integrating psychologically grounded and social-media-derived moral dilemmas annotated with labels for action choices, ethical principles, contributing factors, and consequences, alongside annotators’ moral and cultural profiles. Recognizing the cultural relativity of moral reasoning, UniMoral spans six languages (Arabic, Chinese, English, Hindi, Russian, and Spanish), capturing diverse socio-cultural contexts. We demonstrate UniMoral’s utility through benchmark evaluations of three large language models (LLMs) across four tasks: action prediction, moral typology classification, factor attribution analysis, and consequence generation. Key findings reveal that while implicitly embedded moral contexts enhance the moral reasoning capability of LLMs, there remains a critical need for increasingly specialized approaches to further advance moral reasoning in these models.
metadata:
{
"conference": "ACL 2025",
"category": "Best Resource Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.294",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: BRIGHTER: BRIdging the Gap in Human-Annotated Textual Emotion Recognition Datasets for 28 Languages
Abstract: People worldwide use language in subtle and complex ways to express emotions. Although emotion recognition (an umbrella term for several NLP tasks) impacts various applications within NLP and beyond, most work in this area has focused on high-resource languages. This has led to significant disparities in research efforts and proposed solutions, particularly for under-resourced languages, which often lack high-quality annotated datasets. In this paper, we present BRIGHTER, a collection of multi-labeled, emotion-annotated datasets in 28 different languages and across several domains. BRIGHTER primarily covers low-resource languages from Africa, Asia, Eastern Europe, and Latin America, with instances labeled by fluent speakers. We highlight the challenges related to the data collection and annotation processes, and then report experimental results for monolingual and cross-lingual multi-label emotion identification, as well as emotion intensity recognition. We analyse the variability in performance across languages and text domains, both with and without the use of LLMs, and show that the BRIGHTER datasets represent a meaningful step towards addressing the gap in text-based emotion recognition.
Authors: Shamsuddeen Hassan Muhammad, Nedjma Ousidhoum, Idris Abdulmumin, Jan Philip Wahle, Terry Ruas, Meriem Beloucif, Christine de Kock, Nirmal Surange, Daniela Teodorescu, Ibrahim Said Ahmad, David Ifeoluwa Adelani, Alham Fikri Aji, Felermino D. M. A. Ali, Ilseyar Alimova, Vladimir Araujo, Nikolay Babakov, Naomi Baes, Ana-Maria Bucur, Andiswa Bukula, Guanqun Cao, Rodrigo Tufiño, Rendi Chevi, Chiamaka Ijeoma Chukwuneke, Alexandra Ciobotaru, Daryna Dementieva, Murja Sani Gadanya, Robert Geislinger, Bela Gipp, Oumaima Hourrane, Oana Ignat, Falalu Ibrahim Lawan, Rooweither Mabuya, Rahmad Mahendra, Vukosi Marivate, Alexander Panchenko, Andrew Piper, Charles Henrique Porto Ferreira, Vitaly Protasov, Samuel Rutunda, Manish Shrivastava, Aura Cristina Udrea, Lilian Diana Awuor Wanzare, Sophie Wu, Florian Valentin Wunderlich, Hanif Muhammad Zhafran, Tianhui Zhang, Yi Zhou, Saif M. Mohammad
metadata:
{
"conference": "ACL 2025",
"category": "Best Resource Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.436",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: Palm: A Culturally Inclusive and Linguistically Diverse Dataset for Arabic LLMs
Abstract: As large language models (LLMs) become increasingly integrated into daily life, ensuring their cultural sensitivity and inclusivity is paramount. We introduce PALM, a year-long community-driven project covering all 22 Arab countries. The dataset contains instruction–response pairs in both Modern Standard Arabic (MSA) and dialectal Arabic (DA), spanning 20 diverse topics. Built by a team of 44 researchers across the Arab world—each an author of this paper—PALM offers a broad, inclusive perspective. We use PALM to evaluate the cultural and dialectal capabilities of several frontier LLMs, revealing notable limitations: while closed-source LLMs generally perform strongly, they still exhibit flaws, and smaller open-source models face greater challenges. Furthermore, certain countries (e.g., Egypt, the UAE) appear better represented than others (e.g., Iraq, Mauritania, Yemen). Our annotation guidelines, code, and data are publicly available for reproducibility. More information about PALM is available on our project page: https://github.com/UBC-NLP/palm.
Authors: Fakhraddin Alwajih, Abdellah El Mekki, Samar Mohamed Magdy, AbdelRahim A. Elmadany, Omer Nacar, El Moatez Billah Nagoudi, Reem Abdel-Salam, Hanin Atwany, Youssef Nafea, Abdulfattah Mohammed Yahya, Rahaf Alhamouri, Hamzah A. Alsayadi, Hiba Zayed, Sara Shatnawi, Serry Sibaee, Yasir Ech-chammakhy, Walid Al-Dhabyani, Marwa Mohamed Ali, Imen Jarraya, Ahmed Oumar El-Shangiti, Aisha Alraeesi, Mohammed Anwar AL-Ghrawi, Abdulrahman S. Al-Batati, Elgizouli Mohamed, Noha Taha Elgindi, Muhammed Saeed, Houdaifa Atou, Issam Ait Yahia, Abdelhak Bouayad, Mohammed Machrouh, Amal Makouar, Dania Alkawi, Mukhtar Mohamed, Safaa Taher Abdelfadil, Amine Ziad Ounnoughene, Anfel Rouabhia, Rwaa Assi, Ahmed Sorkatti, Mohamedou Cheikh Tourad, Anis Koubaa, Ismail Berrada, Mustafa Jarrar, Shady Shehata, Muhammad Abdul-Mageed
metadata:
{
"conference": "ACL 2025",
"category": "Best Resource Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1579",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: MaCP: Minimal yet Mighty Adaptation via Hierarchical Cosine Projection
Abstract: We present a new adaptation method, MaCP (Minimal yet Mighty adaptive Cosine Projection), that achieves exceptional performance while requiring minimal parameters and memory for fine-tuning large foundation models. Its general idea is to exploit the superior energy compaction and decorrelation properties of cosine projection to improve both model efficiency and accuracy. Specifically, it projects the weight change from the low-rank adaptation into the discrete cosine space. Then, the weight change is partitioned over different levels of the discrete cosine spectrum, and each partition’s most critical frequency components are selected. Extensive experiments demonstrate the effectiveness of MaCP across a wide range of single-modality tasks, including natural language understanding, natural language generation, and text summarization, as well as multi-modality tasks such as image classification and video understanding. MaCP consistently delivers superior accuracy, significantly reduced computational complexity, and lower memory requirements compared to existing alternatives.
Authors: Yixian Shen, Qi Bi, Jia-hong Huang, Hongyi Zhu, Andy D. Pimentel, Anuj Pathania
metadata:
{
"conference": "ACL 2025",
"category": "Best Theme Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1006",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models
Abstract: The composition of pre-training datasets for large language models (LLMs) remains largely undisclosed, hindering transparency and efforts to optimize data quality, a critical driver of model performance. Current data selection methods, such as natural language quality assessments, diversity-based filters, and classifier-based approaches, are limited by single-dimensional evaluation or redundancy-focused strategies. To address these gaps, we propose four dimensions to evaluate data quality: professionalism, readability, reasoning, and cleanliness. We further introduce Meta-rater, a multi-dimensional data selection method that integrates these dimensions with existing quality metrics through learned optimal weightings. Meta-rater employs proxy models to train a regression model that predicts validation loss, enabling the identification of optimal combinations of quality scores. Experiments demonstrate that Meta-rater improves downstream task performance for 1.3B-parameter models, with advantages that scale to models as large as 7.2B parameters. Our work establishes that holistic, multi-dimensional quality integration significantly outperforms conventional single-dimension approaches, offering a scalable paradigm for enhancing pre-training efficiency and model capability. To advance future research, we release our scripts, data, and models.
metadata:
{
"conference": "ACL 2025",
"category": "Best Theme Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.533",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Best
context:
Title: SubLIME: Subset Selection via Rank Correlation Prediction for Data-Efficient LLM Evaluation
Abstract: The rapid expansion of Large Language Models (LLMs) and natural language processing datasets has made exhaustive benchmark evaluations computationally prohibitive. Inspired by high-stakes competitions like the International Mathematical Olympiad, where a few well-chosen problems suffice to differentiate top performers, we present SubLIME, which reduces evaluation costs by 80% to 99% while preserving ranking fidelity. It trains a Rank Correlation Prediction (RCP) model that combines limited performance data from only 5-20 anchor LLMs with dataset-intrinsic metrics (Difficulty, Quality, and Distributional Dispersion) to predict how closely a candidate subset reflects full-benchmark rankings. Guided by these predictions, SubLIME selects a “winning” subset (1-20% of the full set) for evaluating new LLMs, preserving global rankings significantly better than other data-efficient methods across ten diverse benchmarks.
Authors: Gayathri Saranathan, Cong Xu, Mahammad Parwez Alam, Tarun Kumar, Martin Foltin, Soon Yee Wong, Suparna Bhattacharya
metadata:
{
"conference": "ACL 2025",
"category": "Best Theme Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1477",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: A New Formulation of Zipf’s Meaning-Frequency Law through Contextual Diversity
Abstract: This paper proposes formulating Zipf’s meaning-frequency law, the power law between word frequency and the number of meanings, as a relationship between word frequency and contextual diversity. The proposed formulation quantifies meaning counts as contextual diversity, which is based on the directions of contextualized word vectors obtained from a Language Model (LM). This formulation gives a new interpretation to the law and also enables us to examine it for a wider variety of words and corpora than previous studies have explored. In addition, this paper shows that the law becomes unobservable when the LM used is small, and that autoregressive LMs require many more parameters than masked LMs to observe the law.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.744",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: All That Glitters is Not Novel: Plagiarism in AI Generated Research
Abstract: Automating scientific research is considered the final frontier of science. Recently, several papers claim autonomous research agents can generate novel research ideas. Amidst the prevailing optimism, we document a critical concern: a considerable fraction of such research documents are smartly plagiarized. Unlike past efforts where experts evaluate the novelty and feasibility of research ideas, we request 13 experts to operate under a different situational logic: to identify similarities between LLM-generated research documents and existing work. Concerningly, the experts identify 24% of the 50 evaluated research documents to be either paraphrased (with one-to-one methodological mapping) or significantly borrowed from existing work. These reported instances are cross-verified by the authors of the source papers. Experts find an additional 32% of ideas to partially overlap with prior work, and a small fraction to be completely original. Problematically, these LLM-generated research documents do not acknowledge original sources and bypass inbuilt plagiarism detectors. Lastly, through controlled experiments, we show that automated plagiarism detectors are inadequate at catching plagiarized ideas from such systems. We recommend a careful assessment of LLM-generated research, and discuss the implications of our findings for academic publishing.
Authors: Tarun Gupta, Danish Pruthi
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1249",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: Between Circuits and Chomsky: Pre-pretraining on Formal Languages Imparts Linguistic Biases
Abstract: Pretraining language models on formal language can improve their acquisition of natural language. Which features of the formal language impart an inductive bias that leads to effective transfer? Drawing on insights from linguistics and complexity theory, we hypothesize that effective transfer occurs when two conditions are met: the formal language should capture the dependency structures present in natural language, and it should remain within the computational limitations of the model architecture. We experiment with pre-pretraining (training on formal language before natural languages) on transformers and find that formal languages capturing hierarchical dependencies indeed enable language models to achieve lower loss on natural language and better linguistic generalization compared to other formal languages. We also find modest support for the hypothesis that the formal language should fall within the computational limitations of the architecture. Strikingly, pre-pretraining reduces loss more efficiently than training on a matched amount of natural language. For a 1B-parameter language model trained on roughly 1.6B tokens of natural language, pre-pretraining achieves the same loss and better linguistic generalization with a 33% smaller token budget. Finally, we also give mechanistic evidence of transfer from formal to natural language: attention heads acquired during pre-pretraining remain crucial for the model’s performance on syntactic evaluations.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.478",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: Beyond N-Grams: Rethinking Evaluation Metrics and Strategies for Multilingual Abstractive Summarization
Abstract: Automatic n-gram-based metrics such as ROUGE are widely used for evaluating generative tasks such as summarization. While these metrics are considered indicative (even if imperfect) of human evaluation for English, their suitability for other languages remains unclear. To address this, in this paper we systematically assess evaluation metrics for generation, both n-gram-based and neural-based, examining their effectiveness across languages and tasks. Specifically, we design a large-scale evaluation suite across eight languages from four typological families (agglutinative, isolating, low-fusional, and high-fusional), from both low- and high-resource languages, to analyze their correlations with human judgments. Our findings highlight the sensitivity of the evaluation metric to the language type at hand. For example, for fusional languages, n-gram-based metrics demonstrate a lower correlation with human assessments compared to isolating and agglutinative languages. We also demonstrate that tokenization considerations can significantly mitigate this for fusional languages with rich morphology, up to reversing such negative correlations. Additionally, we show that neural-based metrics specifically trained for evaluation, such as COMET, consistently outperform other neural metrics and correlate better than n-gram metrics with human judgments in low-resource languages. Overall, our analysis highlights the limitations of n-gram metrics for fusional languages and advocates for investment in neural-based metrics trained for evaluation tasks.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.932",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: Bridging the Language Gaps in Large Language Models with Inference-Time Cross-Lingual Intervention
Abstract: Large Language Models (LLMs) have shown remarkable capabilities in natural language processing but exhibit significant performance gaps among different languages. Most existing approaches to address these disparities rely on pretraining or fine-tuning, which are resource-intensive. To overcome these limitations without incurring significant costs, we propose Inference-Time Cross-Lingual Intervention (INCLINE), a novel framework that enhances LLM performance on low-performing (source) languages by aligning their internal representations with those of high-performing (target) languages during inference. INCLINE initially learns alignment matrices using parallel sentences from source and target languages through a Least-Squares optimization, and then applies these matrices during inference to transform the low-performing language representations toward the high-performing language space. Extensive experiments on nine benchmarks with five LLMs demonstrate that INCLINE significantly improves performance across diverse tasks and languages, compared to recent strong baselines. Our analysis demonstrates that INCLINE is highly cost-effective and applicable to a wide range of applications. In addition, we release the code to foster research along this line.
Authors: Weixuan Wang, Minghao Wu, Barry Haddow, Alexandra Birch
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.270",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: Byte Latent Transformer: Patches Scale Better Than Tokens
Abstract: We introduce the Byte Latent Transformer (BLT), a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. Patches are segmented based on the entropy of the next byte, allocating more compute and model capacity where increased data complexity demands it. We present the first FLOP controlled scaling study of byte-level models – up to 8B parameters and 4T training bytes – demonstrating the feasibility of scaling models trained on raw bytes without a fixed vocabulary. Both training and inference efficiency improve due to dynamically selecting long patches when data is predictable, along with qualitative improvements on reasoning and long tail generalization. For fixed inference costs, BLT shows significantly better scaling than tokenization-based models, by simultaneously growing both patch and model size.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.453",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law
Abstract: Scaling laws build the relationship between training computation and validation loss, enabling researchers to effectively predict the loss trend of models across different levels of computation. However, a gap remains between validation loss and a model’s downstream capabilities, making it non-trivial to apply scaling laws to direct performance prediction for downstream tasks. The loss typically represents a cumulative penalty for predicted tokens, which are implicitly treated as equally important. Nevertheless, our studies have shown evidence that, when considering different training data distributions, we cannot directly model the relationship between downstream capability and computation or token loss. To bridge the gap between validation loss and downstream task capabilities, in this work we introduce the Capability Salience Vector, which decomposes the overall loss and assigns different importance weights to tokens to assess a specific meta-capability, aligning validation loss with downstream task performance in terms of the model’s capabilities. Experiments on various popular benchmarks demonstrate that our proposed Capability Salience Vector significantly improves the predictability of language model performance on downstream tasks.
Authors: Qiming Ge, Shuhao Xing, Songyang Gao, Yunhua Zhou, Yicheng Zou, Songyang Zhang, Zhi Chen, Hang Yan, Qi Zhang, Qipeng Guo, Kai Chen
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1157",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: From Real to Synthetic: Synthesizing Millions of Diversified and Complicated User Instructions with Attributed Grounding
Abstract: The pursuit of diverse, complex, and large-scale instruction data is crucial for automatically aligning large language models (LLMs). While there are methods capable of generating synthetic instructions at scale, they either suffer from limited grounding sources, leading to a narrow distribution, or rely on trivial extensions that fail to produce meaningful trajectories in terms of complexity. In contrast, instructions that benefit efficient alignment are typically crafted with cognitive insights and grounded in real-world use cases. In this paper, we synthesize such instructions using attributed grounding, which involves 1) a top-down attribution process that grounds a selective set of real instructions to situated users, and 2) a bottom-up synthesis process that leverages web documents to first generate a situation, then a meaningful instruction. This framework allows us to harvest diverse and complex instructions at scale, utilizing the vast range of web documents. Specifically, we construct a dataset of 1 million instructions, called SynthQuestions, and demonstrate that models trained on it achieve leading performance on several common benchmarks, with improvements that continually scale with more web corpora.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.517",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: HALOGEN: Fantastic LLM Hallucinations and Where to Find Them
Abstract: Despite their impressive ability to generate high-quality and fluent text, generative large language models (LLMs) also produce hallucinations: statements that are misaligned with established world knowledge or provided input context. However, measuring hallucination can be challenging, as having humans verify model generations on-the-fly is both expensive and time-consuming. In this work, we release HALoGEN, a comprehensive hallucination benchmark consisting of: (1) 10,923 prompts for generative models spanning nine domains including programming, scientific attribution, and summarization, and (2) automatic high-precision verifiers for each use case that decompose LLM generations into atomic units, and verify each unit against a high-quality knowledge source. We use this framework to evaluate ~150,000 generations from 14 language models, finding that even the best-performing models are riddled with hallucinations (sometimes up to 86% of generated atomic facts depending on the domain). We further define a novel error classification for LLM hallucinations based on whether they likely stem from incorrect recollection of training data (Type A errors), or incorrect knowledge in training data (Type B errors), or are fabrication (Type C errors). We hope our framework provides a foundation to enable the principled study of why generative models hallucinate, and advances the development of trustworthy large language models.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.71",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: HateDay: Insights from a Global Hate Speech Dataset Representative of a Day on Twitter
Abstract: To address the global challenge of online hate speech, prior research has developed detection models to flag such content on social media. However, due to systematic biases in evaluation datasets, the real-world effectiveness of these models remains unclear, particularly across geographies. We introduce HateDay, the first global hate speech dataset representative of social media settings, constructed from a random sample of all tweets posted on September 21, 2022 and covering eight languages and four English-speaking countries. Using HateDay, we uncover substantial variation in the prevalence and composition of hate speech across languages and regions. We show that evaluations on academic datasets greatly overestimate real-world detection performance, which we find is very low, especially for non-European languages. Our analysis identifies key drivers of this gap, including models’ difficulty in distinguishing hate from offensive speech and a mismatch between the target groups emphasized in academic datasets and those most frequently targeted in real-world settings. We argue that poor model performance makes public models ill-suited for automatic hate speech moderation and find that high moderation rates are only achievable with substantial human oversight. Our results underscore the need to evaluate detection systems on data that reflects the complexity and diversity of real-world social media.
Authors: Manuel Tonneau, Diyi Liu, Niyati Malhotra, Scott A. Hale, Samuel Fraiberger, Victor Orozco-Olvera, Paul Röttger
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.115",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: I0T: Embedding Standardization Method Towards Zero Modality Gap
Abstract: Contrastive Language-Image Pretraining (CLIP) enables zero-shot inference in downstream tasks such as image-text retrieval and classification. However, recent works extending CLIP suffer from the issue of the *modality gap*, which arises when the image and text embeddings are projected to disparate manifolds, deviating from the intended objective of image-text contrastive learning. We discover that this phenomenon is linked to the modality-specific characteristics that each image or text encoder independently possesses. Herein, we propose two methods to address the modality gap: (1) a post-hoc embedding standardization method that reduces the modality gap approximately to zero, and (2) a trainable method that alleviates the modality gap problem by adding two normalization layers to each encoder. Our I0T framework can significantly reduce the modality gap while preserving the original embedding representations of trained models with their locked parameters. In practice, it can also serve as an alternative explainable automatic evaluation metric to the widely used CLIPScore (CLIP-S). The code is available at https://github.com/xfactlab/I0T.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1319",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: IndicSynth: A Large-Scale Multilingual Synthetic Speech Dataset for Low-Resource Indian Languages
Abstract: Recent advances in synthetic speech generation technology have facilitated the generation of high-quality synthetic (fake) speech that emulates human voices. These technologies pose a threat of misuse for identity theft and the spread of misinformation, which necessitates the development of robust and generalizable audio deepfake detection (ADD) and anti-spoofing models. However, such models are often linguistically biased: models trained on datasets in one language exhibit low accuracy when evaluated on out-of-domain languages. Such biases reduce the usability of these models and highlight the urgent need for multilingual synthetic speech datasets for bias mitigation research. However, most available datasets are in English or Chinese. The dearth of multilingual synthetic datasets hinders multilingual ADD and anti-spoofing research, and the problem intensifies in countries with rich linguistic diversity, such as India. Therefore, we introduce IndicSynth, which contains 4,000 hours of synthetic speech from 989 target speakers, including 456 females and 533 males, for 12 low-resourced Indian languages. The dataset includes rich metadata covering gender details and target speaker identifiers. Experimental results demonstrate that IndicSynth is a valuable contribution to multilingual ADD and anti-spoofing research. The dataset can be accessed from https://github.com/vdivyas/IndicSynth.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1070",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: LaTIM: Measuring Latent Token-to-Token Interactions in Mamba Models
Abstract: State space models (SSMs), such as Mamba, have emerged as an efficient alternative to transformers for long-context sequence modeling. However, despite their growing adoption, SSMs lack the interpretability tools that have been crucial for understanding and improving attention-based architectures. While recent efforts provide insights into Mamba’s internal mechanisms, they struggle to capture precise token-level interactions at the layer level, leaving gaps in understanding how Mamba selectively processes sequences across layers. In this work, we introduce LaTIM, a novel token-level decomposition method for both Mamba-1 and Mamba-2 that enables fine-grained interpretability. We extensively evaluate our method across diverse tasks, including machine translation, copying, and retrieval-based generation, demonstrating its effectiveness in revealing Mamba’s token-to-token interaction patterns.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1194",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: Llama See, Llama Do: A Mechanistic Perspective on Contextual Entrainment and Distraction in LLMs
Abstract: We observe a novel phenomenon, *contextual entrainment*, across a wide range of language models (LMs) and prompt settings, providing a new mechanistic perspective on how LMs become distracted by “irrelevant” contextual information in the input prompt. Specifically, LMs assign significantly higher logits (or probabilities) to any tokens that have previously appeared in the context prompt, even for random tokens. This suggests that contextual entrainment is a mechanistic phenomenon, occurring independently of the relevance or semantic relation of the tokens to the question or the rest of the sentence. We find statistically significant evidence that the magnitude of contextual entrainment is influenced by semantic factors. Counterfactual prompts have a greater effect compared to factual ones, suggesting that while contextual entrainment is a mechanistic phenomenon, it is modulated by semantic factors. We hypothesise that there is a circuit of attention heads — the *entrainment heads* — that corresponds to the contextual entrainment phenomenon. Using a novel entrainment head discovery method based on differentiable masking, we identify these heads across various settings. When we “turn off” these heads, i.e., set their outputs to zero, the effect of contextual entrainment is significantly attenuated, causing the model to generate output that capitulates to what it would produce if no distracting context were provided. Our discovery of contextual entrainment, along with our investigation into LM distraction via the entrainment heads, marks a key step towards the mechanistic analysis and mitigation of the distraction problem.
Authors: Jingcheng Niu, Xingdi Yuan, Tong Wang, Hamidreza Saghir, Amir H. Abdi
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.791",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: LLMs know their vulnerabilities: Uncover Safety Gaps through Natural Distribution Shifts
Abstract: Safety concerns in large language models (LLMs) have gained significant attention due to their exposure to potentially harmful data during pre-training. In this paper, we identify a new safety vulnerability in LLMs: their susceptibility to natural distribution shifts between attack prompts and original toxic prompts, where seemingly benign prompts, semantically related to harmful content, can bypass safety mechanisms. To explore this issue, we introduce a novel attack method, ActorBreaker, which identifies actors related to toxic prompts within the pre-training distribution to craft multi-turn prompts that gradually lead LLMs to reveal unsafe content. ActorBreaker is grounded in Latour’s actor-network theory, encompassing both human and non-human actors to capture a broader range of vulnerabilities. Our experimental results demonstrate that ActorBreaker outperforms existing attack methods in terms of diversity, effectiveness, and efficiency across aligned LLMs. To address this vulnerability, we propose expanding safety training to cover a broader semantic space of toxic content. We thus construct a multi-turn safety dataset using ActorBreaker. Fine-tuning models on our dataset shows significant improvements in robustness, though with some trade-offs in utility. Code is available at https://github.com/AI45Lab/ActorAttack.
metadata:
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1207",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}

question: Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
choices: ["Findings", "Main", "Outstanding", "Best"]
answer: Outstanding
context:
Title: Mapping 1,000+ Language Models via the Log-Likelihood Vector
Abstract: To compare autoregressive language models at scale, we propose using log-likelihood vectors computed on a predefined text set as model features. This approach has a solid theoretical basis: when treated as model coordinates, their squared Euclidean distance approximates the Kullback-Leibler divergence of text-generation probabilities. Our method is highly scalable, with computational cost growing linearly in both the number of models and text samples, and is easy to implement as the required features are derived from cross-entropy loss. Applying this method to over 1,000 language models, we constructed a “model map,” providing a new perspective on large-scale model analysis.
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1584",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: MiniLongBench: The Low-cost Long Context Understanding Benchmark for Large Language Models
Abstract: Long Context Understanding (LCU) is a critical area for exploration in current large language models (LLMs). However, due to the inherently lengthy nature of long-text data, existing LCU benchmarks for LLMs often result in prohibitively high evaluation costs, such as testing time and inference expenses. Through extensive experimentation, we discover that existing LCU benchmarks exhibit significant redundancy, which makes evaluation inefficient. In this paper, we propose a concise data compression method tailored for long-text data with sparse information characteristics. By pruning the well-known LCU benchmark LongBench, we create MiniLongBench. This benchmark includes only 237 test samples across six major task categories and 21 distinct tasks. Through empirical analysis of over 60 LLMs, MiniLongBench reduces the average evaluation cost to only 4.5% of the original while maintaining an average rank correlation coefficient of 0.97 with LongBench results. Therefore, our MiniLongBench, as a low-cost benchmark, holds great potential to substantially drive future research into the LCU capabilities of LLMs.
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.560",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: PARME: Parallel Corpora for Low-Resourced Middle Eastern Languages
Abstract: The Middle East is characterized by remarkable linguistic diversity, with over 400 million inhabitants speaking more than 60 languages across multiple language families. This study presents pioneering work in developing PARME, the first parallel corpora for eight severely under-resourced language varieties in the region, addressing fundamental challenges in low-resource scenarios, including non-standardized writing and dialectal complexity. Through an extensive community-driven initiative, volunteers contributed over 36,000 translated sentences, marking a significant milestone in resource development. We evaluate machine translation capabilities through zero-shot approaches and fine-tuning experiments with pretrained machine translation models, and provide a comprehensive analysis of limitations. Our findings reveal significant gaps in existing technologies for processing the selected languages, highlighting critical areas for improvement in language technology for Middle Eastern languages.
Authors: Sina Ahmadi, Rico Sennrich, Erfan Karami, Ako Marani, Parviz Fekrazad, Gholamreza Akbarzadeh Baghban, Hanah Hadi, Semko Heidari, Mahîr Dogan, Pedram Asadi, Dashne Bashir, Mohammad Amin Ghodrati, Kourosh Amini, Zeynab Ashourinezhad, Mana Baladi, Farshid Ezzati, Alireza Ghasemifar, Daryoush Hosseinpour, Behrooz Abbaszadeh, Amin Hassanpour, Bahaddin Jalal Hamaamin, Saya Kamal Hama, Ardeshir Mousavi, Sarko Nazir Hussein, Isar Nejadgholi, Mehmet Ölmez, Horam Osmanpour, Rashid Roshan Ramezani, Aryan Sediq Aziz, Ali Salehi Sheikhalikelayeh, Mohammadreza Yadegari, Kewyar Yadegari, Sedighe Zamani Roodsari
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1451",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Past Meets Present: Creating Historical Analogy with Large Language Models
Abstract: Historical analogies, which compare known past events with contemporary but unfamiliar events, are an important tool that helps people make decisions and understand the world. However, research in applied history suggests that people have difficulty finding appropriate analogies, and previous studies in the AI community have also overlooked historical analogies. To fill this gap, in this paper we focus on the historical analogy acquisition task, which aims to acquire analogous historical events for a given event. We explore retrieval and generation methods for acquiring historical analogies based on different large language models (LLMs). Furthermore, we propose a self-reflection method to mitigate hallucinations and stereotypes when LLMs generate historical analogies. Through human evaluations and our specially designed automatic multi-dimensional assessment, we find that LLMs generally have good potential for historical analogies, and the performance of the models can be further improved with our self-reflection method. Resources of this paper can be found at https://anonymous.4open.science/r/Historical-Analogy-of-LLMs-FC17
Authors: Nianqi Li, Siyu Yuan, Jiangjie Chen, Jiaqing Liang, Feng Wei, Zujie Liang, Deqing Yang, Yanghua Xiao
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.200",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Pre³: Enabling Deterministic Pushdown Automata for Faster Structured LLM Generation
Abstract: Extensive LLM applications demand efficient structured generations, particularly for LR(1) grammars, to produce outputs in specified formats (e.g., JSON). Existing methods primarily parse LR(1) grammars into a pushdown automaton (PDA), leading to runtime execution overhead for context-dependent token processing, which is especially inefficient under large inference batches. To address these issues, we propose Pre³, which exploits deterministic pushdown automata (DPDA) to optimize constrained LLM decoding efficiency. First, by **pre**computing **pre**fix-conditioned edges during **pre**processing, Pre³ enables ahead-of-time edge analysis and thus makes parallel transition processing possible. Further, leveraging the prefix-conditioned edges, Pre³ introduces a novel approach that transforms LR(1) transition graphs into DPDA, eliminating the need for runtime path exploration and achieving edge transitions with minimal overhead. Pre³ can be seamlessly integrated into standard LLM inference frameworks, improving time per output token (TPOT) by up to 40% and throughput by up to 36% in our experiments. Our code is available at https://github.com/ModelTC/lightllm.
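An illustrative toy (not the paper's system; the states, tokens, and grammar below are invented) of what determinism buys: per-state allowed-token sets can be precomputed ahead of time, so each constrained decoding step becomes a table lookup rather than runtime path exploration.

```python
# Toy deterministic transition table for a tiny JSON-like grammar. With
# determinism, the allowed-token set per state is precomputable, so each
# decoding step is a dictionary lookup instead of a search.
transitions = {  # (state, token) -> next state
    ("S", "{"): "OBJ", ("OBJ", '"k"'): "VAL", ("VAL", "}"): "DONE", ("OBJ", "}"): "DONE",
}
allowed = {}
for (state, token), _next in transitions.items():
    allowed.setdefault(state, set()).add(token)   # precomputed token masks

state = "S"
for tok in ["{", '"k"', "}"]:
    assert tok in allowed[state], f"{tok!r} masked out in state {state}"
    state = transitions[(state, tok)]
print(state)  # DONE
```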
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.551",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Rethinking the Role of Prompting Strategies in LLM Test-Time Scaling: A Perspective of Probability Theory
Abstract: Recently, scaling test-time compute on Large Language Models (LLMs) has garnered wide attention. However, there has been limited investigation of how various reasoning prompting strategies perform as compute scales. In this paper, we focus on a standard and realistic scaling setting: majority voting. We systematically conduct experiments on 6 LLMs × 8 prompting strategies × 6 benchmarks. Experiment results consistently show that as the sampling time and computational overhead increase, complicated prompting strategies with superior initial performance gradually fall behind simple Chain-of-Thought. We analyze this phenomenon and provide theoretical proofs. Additionally, we propose a probabilistic method to efficiently predict scaling performance and identify the best prompting strategy under large sampling times, eliminating the need for resource-intensive inference processes in practical applications. Furthermore, we introduce two ways derived from our theoretical analysis to significantly improve the scaling performance. We hope that our research can encourage the community to re-examine the role of complicated prompting, unleash the potential of simple prompting strategies, and provide new insights for enhancing test-time scaling performance. Code is available at https://github.com/MraDonkey/rethinking_prompting.
Authors: Yexiang Liu, Zekun Li, Zhi Fang, Nan Xu, Ran He, Tieniu Tan
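Under the simplest assumption of i.i.d. samples with a fixed per-sample accuracy p, the scaling curve of majority voting already has a closed form; the sketch below is a hedged stand-in for the paper's probabilistic method, which is more general than this binomial toy.

```python
# Predicted majority-vote accuracy for n i.i.d. samples with per-sample
# accuracy p (binary correctness, exact ties given half credit).
from math import comb

def majority_vote_acc(p: float, n: int) -> float:
    acc = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:                             # even n: split exact ties evenly
        acc += 0.5 * comb(n, n // 2) * (p * (1 - p)) ** (n // 2)
    return acc

for n in (1, 8, 64):
    print(n, round(majority_vote_acc(0.6, n), 3))  # accuracy grows with n when p > 0.5
```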
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1356",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Revisiting Compositional Generalization Capability of Large Language Models Considering Instruction Following Ability
Abstract: In generative commonsense reasoning tasks such as CommonGen, generative large language models (LLMs) compose sentences that include all given concepts. However, when focusing on instruction-following capabilities, if a prompt specifies a concept order, LLMs must generate sentences that adhere to the specified order. To address this, we propose Ordered CommonGen, a benchmark designed to evaluate the compositional generalization and instruction-following abilities of LLMs. This benchmark measures ordered coverage to assess whether concepts are generated in the specified order, enabling a simultaneous evaluation of both abilities. We conducted a comprehensive analysis using 36 LLMs and found that, while LLMs generally understand the intent of instructions, biases toward specific concept order patterns often lead to low-diversity outputs or identical results even when the concept order is altered. Moreover, even the most instruction-compliant LLM achieved only about 75% ordered coverage, highlighting the need for improvements in both instruction-following and compositional generalization capabilities.
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1508",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Toward Automatic Discovery of a Canine Phonetic Alphabet
Abstract: Dogs communicate intelligently, but little is known about the phonetic properties of their vocal communication. For the first time, this paper presents an iterative algorithm inspired by human phonetic discovery, which is based on minimal pairs (pairs that determine phonemes by distinguishing different words in human language) and is able to produce a complete alphabet of distinct canine phoneme-like units. In addition, the algorithm produces a number of repeated canine acoustic units, composed exclusively of the canine phoneme-like units in the alphabet, which may correspond to specific environments and activities of a dog. The framework outlined in this paper is expected to work not only on canines but also on other animal species.
Authors: Theron S. Wang, Xingyuan Li, Hridayesh Lekhak, Tuan Minh Dang, Mengyue Wu, Kenny Q. Zhu
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.451",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Towards the Law of Capacity Gap in Distilling Language Models
Abstract: Language model (LM) distillation aims at distilling the knowledge in a large teacher LM into a small student one. As a critical issue facing LM distillation, a superior student often arises from a teacher of a relatively small scale instead of a larger one, especially in the presence of a substantial capacity gap between the teacher and student. This issue, often referred to as the *curse of capacity gap*, suggests that there is likely an optimal teacher yielding the best-performing student along the scaling course of the teacher. Consequently, distillation trials on teachers across a wide range of scales are called for to determine the optimal teacher, which becomes computationally intensive in the context of large LMs (LLMs). This paper addresses this critical bottleneck by providing the *law of capacity gap* induced from a preliminary study on distilling a broad range of small-scale (<3B) LMs, where the optimal teacher consistently scales linearly with the student scale across different model and data scales. By extending the law to LLM distillation on a larger scale (7B), we succeed in obtaining versatile LLMs that outperform a wide array of competitors.
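A back-of-envelope reading of the stated law, with a made-up ratio k (the paper's fitted constant is not reproduced here): pick the available teacher closest to a linear multiple of the student scale.

```python
# Hypothetical application of a linear teacher-scaling law; k and the teacher
# pool are illustrative placeholders, not values from the paper.
def optimal_teacher_size(student_params: float, k: float = 3.0) -> float:
    return k * student_params                  # linear in the student scale

teachers = [1.1e9, 2.8e9, 6.7e9, 13e9]         # hypothetical available teachers
target = optimal_teacher_size(1.5e9)
print(min(teachers, key=lambda t: abs(t - target)))  # nearest candidate teacher
```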
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1097",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling
Abstract: The rapid growth in the parameters of LLMs has made inference latency a fundamental bottleneck. Speculative decoding represents a lossless approach to accelerate inference through a guess-and-verify paradigm. Some methods rely on additional architectures to guess draft tokens, which need extra training before use. Alternatively, retrieval-based train-free techniques build libraries from pre-existing corpora or by n-gram generation. However, they face challenges like large storage requirements, time-consuming retrieval, and limited adaptability. Observing that candidate tokens generated during the decoding process are likely to reoccur in future sequences, we propose Token Recycling. This approach stores candidate tokens in an adjacency matrix and employs a breadth-first-search (BFS)-like algorithm to construct a draft tree, which is then validated through tree attention. New candidate tokens from the decoding process are then used to update the matrix. Token Recycling requires <2MB of additional storage and achieves approximately 2x speedup across all sizes of LLMs. It significantly outperforms existing train-free methods by 30% and even a training method by 25%.
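A toy sketch of the recycling loop described above: cache candidate successors per token in an adjacency structure and grow a breadth-first draft tree for verification. All token ids and sizes are illustrative, and the tree-attention verification step is omitted.

```python
# Adjacency structure of recycled candidate tokens plus a BFS draft builder.
from collections import deque

adj = {7: [3, 9], 3: [5], 9: [7, 1]}           # token id -> cached candidate successors

def build_draft(root: int, max_nodes: int):
    """Return (parent, child) edges of a BFS draft tree rooted at `root`."""
    edges, queue, n = [], deque([root]), 1
    while queue and n < max_nodes:
        tok = queue.popleft()
        for nxt in adj.get(tok, []):
            if n >= max_nodes:
                break
            edges.append((tok, nxt))
            queue.append(nxt)
            n += 1
    return edges

print(build_draft(7, max_nodes=6))             # draft edges to verify, then update adj
```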
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.338",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Typology-Guided Adaptation in Multilingual Models
Abstract: Multilingual models often treat language diversity as a problem of data imbalance, overlooking structural variation. We introduce the *Morphological Index* (MoI), a typologically grounded metric that quantifies how strongly a language relies on surface morphology for noun classification. Building on MoI, we propose *MoI-MoE*, a Mixture of Experts model that routes inputs based on morphological structure. Evaluated on 10 Bantu languages—a large, morphologically rich and underrepresented family—MoI-MoE outperforms strong baselines, improving Swahili accuracy by 14 points on noun class recognition while maintaining performance on morphology-rich languages like Zulu. These findings highlight typological structure as a practical and interpretable signal for multilingual model adaptation.
|
{
"conference": "ACL 2025",
"category": "Outstanding Paper",
"sheet": "ACL",
"accepted_tags": "ACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1059",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Best
|
Title: The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models
Abstract: As language models (LMs) become capable of handling a wide range of tasks, their evaluation is becoming as challenging as their development. Most generation benchmarks currently assess LMs using abstract evaluation criteria, like helpfulness and harmlessness, which often lack the flexibility and granularity of human assessment. Additionally, these benchmarks tend to focus disproportionately on specific capabilities such as instruction following, leading to coverage bias. To overcome these limitations, we introduce the BiGGen Bench, a principled generation benchmark designed to thoroughly evaluate nine distinct capabilities of LMs across 77 diverse tasks. A key feature of the BiGGen Bench is its use of instance-specific evaluation criteria, closely mirroring the nuanced discernment of human evaluation. We apply this benchmark to assess 100 frontier LMs using five evaluator LMs. Our code, data, and evaluation results are all publicly available at https://github.com/prometheus-eval/prometheus-eval.
|
{
"conference": "NAACL 2025",
"category": "Best Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.303",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Best
|
Title: REL-A.I.: An Interaction-Centered Approach To Measuring Human-LM Reliance
Abstract: The ability to communicate uncertainty and knowledge limitations is crucial for the safety of large language models (LLMs). Current evaluations of these abilities typically examine the correspondence between model accuracy and its internal probabilities or linguistic outputs. However, evaluation of the uncertainty of LLM communication should also focus on the behaviors of their human interlocutors: how much do users rely on what the LLM says? We introduce an interaction-centered evaluation approach called Rel-A.I. (pronounced “rely”) that quantifies whether and how humans rely on LLMs’ responses, complementing existing calibration evaluations. Through nine user studies with 450 participants, we investigate three crucial aspects that influence user reliance. We show that emphatic expressions of politeness (e.g., “I’m happy to help!”) that precede LLM answers will cause participants to perceive these models as more competent, and in turn, rely 30% more on their generations. Additionally, the context of the interaction, such as the knowledge domain and nature of previous interactions with the LLM, substantially influences user reliance (e.g., users will rely 10% more on LLMs when responding to questions involving calculations). Our results show that calibration and language quality alone are insufficient in informing which LLMs are safely calibrated, and illustrate the need to consider features of the interactional context.
Authors: Kaitlyn Zhou, Jena D. Hwang, Xiang Ren, Nouha Dziri, Dan Jurafsky, Maarten Sap
|
{
"conference": "NAACL 2025",
"category": "Best Paper Runner-up",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.556",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Best
|
Title: FLEURS-ASL: Including American Sign Language in Massively Multilingual Multitask Evaluation
Abstract: Sign language translation has historically been peripheral to mainstream machine translation research. In order to help converge the fields, we introduce FLEURS-ASL, an extension of the multiway parallel benchmarks FLORES (for text) and FLEURS (for speech) to support their first sign language (as video), American Sign Language, translated by 5 Certified Deaf Interpreters. FLEURS-ASL can be used to evaluate a variety of tasks (primarily sentence- and discourse-level translation) between ASL and 200 other languages as text, or 102 languages as speech. We provide baselines for tasks from ASL to English text using a unified modeling approach that incorporates timestamp tokens and previous text tokens in a 34-second context window, trained on random video clips from YouTube-ASL. This model meets or exceeds the performance of phrase-level baselines while supporting a multitude of new tasks. We also use FLEURS-ASL to show that multimodal frontier models have virtually no understanding of ASL, underscoring the importance of including sign languages in standard evaluation suites.
|
{
"conference": "NAACL 2025",
"category": "Best Social Impact Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.314",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Best
|
Title: WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines
Abstract: Vision Language Models (VLMs) often struggle with culture-specific knowledge, particularly in languages other than English and in underrepresented cultural contexts. To evaluate their understanding of such knowledge, we introduce WorldCuisines, a massive-scale benchmark for multilingual and multicultural, visually grounded language understanding. This benchmark includes a visual question answering (VQA) dataset with text-image pairs across 30 languages and dialects, spanning 9 language families and featuring over 1 million data points, making it the largest multicultural VQA benchmark to date. It includes tasks for identifying dish names and their origins. We provide evaluation datasets in two sizes (12k and 60k instances) alongside a training dataset (1 million instances). Our findings show that while VLMs perform better with correct location context, they struggle with adversarial contexts and predicting specific regional cuisines and languages. To support future research, we release a knowledge base with annotated food entries and images along with the VQA data.
|
{
"conference": "NAACL 2025",
"category": "Best Theme Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.167",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Best
|
Title: Developing multilingual speech synthesis system for Ojibwe, Mi’kmaq, and Maliseet
Abstract: We present lightweight flow matching multilingual text-to-speech (TTS) systems for Ojibwe, Mi’kmaq, and Maliseet, three Indigenous languages in North America. Our results show that training a multilingual TTS model on three typologically similar languages can improve performance over monolingual models, especially when data are scarce. Attention-free architectures are highly competitive with self-attention architectures while offering higher memory efficiency. Our research not only provides technical groundwork for the revitalization of low-resource languages but also highlights the cultural gap in human evaluation protocols, calling for a more community-centered approach to human evaluation.
|
{
"conference": "NAACL 2025",
"category": "Best Theme Paper Runner-up",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-short.69",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: PeerQA: A Scientific Question Answering Dataset from Peer Reviews
Abstract: We present PeerQA, a real-world, scientific, document-level Question Answering (QA) dataset. PeerQA questions have been sourced from peer reviews, which contain questions that reviewers raised while thoroughly examining the scientific article. Answers have been annotated by the original authors of each paper. The dataset contains 579 QA pairs from 208 academic articles, with a majority from ML and NLP, as well as a subset of other scientific communities like Geoscience and Public Health. PeerQA supports three critical tasks for developing practical QA systems: Evidence retrieval, unanswerable question classification, and answer generation. We provide a detailed analysis of the collected dataset and conduct experiments establishing baseline systems for all three tasks. Our experiments and analyses reveal the need for decontextualization in document-level retrieval, where we find that even simple decontextualization approaches consistently improve retrieval performance across architectures. On answer generation, PeerQA serves as a challenging benchmark for long-context modeling, as the papers have an average size of 12k tokens.
Authors: Tim Baumgärtner, Ted Briscoe, Iryna Gurevych
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.22",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Is your benchmark truly adversarial? AdvScore: Evaluating Human-Grounded Adversarialness
Abstract: Adversarial datasets should validate AI robustness by providing samples on which humans perform well, but models do not. However, as models evolve, datasets can become obsolete. Measuring whether a dataset remains adversarial is hindered by the lack of a standardized metric for measuring adversarialness. We propose ADVSCORE, a human-grounded evaluation metric that assesses a dataset’s adversarialness by capturing models’ and humans’ varying abilities, while also identifying poor examples. We then use ADVSCORE to motivate a new dataset creation pipeline for realistic and high-quality adversarial samples, enabling us to collect an adversarial question answering (QA) dataset, ADVQA. We apply ADVSCORE using 9,347 human responses and ten language models’ predictions to track model improvement over five years (2020–2024). ADVSCORE thus provides guidance for achieving robustness comparable with human capabilities. Furthermore, it helps determine to what extent adversarial datasets continue to pose challenges, ensuring that, rather than reflecting outdated or overly artificial difficulties, they effectively test model capabilities.
Authors: Yoo Yeon Sung, Maharshi Gor, Eve Fleisig, Ishani Mondal, Jordan Lee Boyd-Graber
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.27",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: NLI under the Microscope: What Atomic Hypothesis Decomposition Reveals
Abstract: Decomposition of text into atomic propositions is a flexible framework allowing for the closer inspection of input and output text. We use atomic decomposition of hypotheses in two natural language reasoning tasks, traditional NLI and defeasible NLI, to form atomic sub-problems, or granular inferences that models must weigh when solving the overall problem. These atomic sub-problems serve as a tool to further understand the structure of both NLI and defeasible reasoning, probe a model’s consistency and understanding of different inferences, and measure the diversity of examples in benchmark datasets. Our results indicate that LLMs still struggle with logical consistency on atomic NLI and defeasible NLI sub-problems. Lastly, we identify critical atomic sub-problems of defeasible NLI examples, or those that most contribute to the overall label, and propose a method to measure the inferential consistency of a model, a metric designed to capture the degree to which a model makes consistently correct or incorrect predictions about the same fact under different contexts.
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.130",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models
Abstract: Despite the widespread adoption of Large language models (LLMs), their remarkable capabilities remain limited to a few high-resource languages. Additionally, many low-resource languages (e.g. African languages) are often evaluated only on basic text classification tasks due to the lack of appropriate or comprehensive benchmarks outside of high-resource languages. In this paper, we introduce IrokoBench, a human-translated benchmark dataset for 17 typologically diverse low-resource African languages covering three tasks: natural language inference (AfriXNLI), mathematical reasoning (AfriMGSM), and multi-choice knowledge-based QA (AfriMMLU). We use IrokoBench to evaluate zero-shot, few-shot, and translate-test settings (where test sets are translated into English) across 10 open and 4 proprietary LLMs. Our evaluation reveals a significant performance gap between high-resource languages (such as English and French) and low-resource African languages. We also observe a significant gap between open and proprietary models, with the highest-performing open model, Gemma 2 27B, reaching only 63% of the performance of the best proprietary model, GPT-4o. Machine-translating the test set to English before evaluation helped to close the gap for larger English-centric models, like Gemma 2 27B and LLaMa 3.1 70B. These findings suggest that more effort is needed to develop and adapt LLMs for African languages.
Authors: David Ifeoluwa Adelani, Jessica Ojo, Israel Abebe Azime, Jian Yun Zhuang, Jesujoba Oluwadara Alabi, Xuanli He, Millicent Ochieng, Sara Hooker, Andiswa Bukula, En-Shiun Annie Lee, Chiamaka Ijeoma Chukwuneke, Happy Buzaaba, Blessing Kudzaishe Sibanda, Godson Koffi Kalipe, Jonathan Mukiibi, Salomon Kabongo Kabenamualu, Foutse Yuehgoh, Mmasibidi Setaka, Lolwethu Ndolela, Nkiruka Odu, Rooweither Mabuya, Salomey Osei, Shamsuddeen Hassan Muhammad, Sokhar Samb, Tadesse Kebede Guge, Tombekai Vangoni Sherman, Pontus Stenetorp
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.139",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: ACCORD: Closing the Commonsense Measurability Gap
Abstract: We present ACCORD, a framework and benchmark suite for disentangling the commonsense grounding and reasoning abilities of large language models (LLMs) through controlled, multi-hop counterfactuals. ACCORD introduces formal elements to commonsense reasoning to explicitly control and quantify reasoning complexity beyond the typical 1 or 2 hops. Uniquely, ACCORD can automatically generate benchmarks of arbitrary reasoning complexity, so it scales with future LLM improvements. Indeed, our experiments on state-of-the-art LLMs show performance degrading to below random chance with only moderate scaling, leaving substantial headroom for improvement. We release a leaderboard of the benchmark suite tested in this work, as well as code for automatically generating more complex benchmarks.
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.193",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: DrawEduMath: Evaluating Vision Language Models with Expert-Annotated Students’ Hand-Drawn Math Images
Abstract: In real-world settings, vision language models (VLMs) should robustly handle naturalistic, noisy visual content as well as domain-specific language and concepts. For example, K-12 educators using digital learning platforms may need to examine and provide feedback across many images of students’ math work. To assess the potential of VLMs to support educators in settings like this one, we introduce DrawEduMath, an English-language dataset of 2,030 images of students’ handwritten responses to K-12 math problems. Teachers provided detailed annotations, including free-form descriptions of each image and 11,661 question-answer (QA) pairs. These annotations capture a wealth of pedagogical insights, ranging from students’ problem-solving strategies to the composition of their drawings, diagrams, and writing. We evaluate VLMs on teachers’ QA pairs, as well as 44,362 synthetic QA pairs derived from teachers’ descriptions using language models (LMs). We show that even state-of-the-art VLMs leave much room for improvement on DrawEduMath questions. We also find that synthetic QAs, though imperfect, can yield similar model rankings as teacher-written QAs. We release DrawEduMath to support the evaluation of VLMs’ abilities to reason mathematically over images gathered with educational contexts in mind.
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.352",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: A Logical Fallacy-Informed Framework for Argument Generation
Abstract: Despite the remarkable performance of large language models (LLMs), they still struggle with generating logically sound arguments, resulting in potential risks such as spreading misinformation. An important factor contributing to LLMs’ suboptimal performance in generating coherent arguments is their oversight of logical fallacies. To address this issue, we introduce fallacy-informed preference optimization (FIPO) that helps steer LLMs toward generating logically sound arguments. FIPO includes a classification loss to capture the fine-grained information on fallacy types. Our results on argument generation tasks show that FIPO reduces the fallacy errors by up to 17.5%. Furthermore, our human evaluation results reveal that the quality of the arguments generated by our method significantly outperforms the fine-tuned baselines and other preference optimization methods, such as DPO. These findings highlight the importance of ensuring models are aware of logical fallacies for effective argument generation.
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.374",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Learning vs Retrieval: The Role of In-Context Examples in Regression with Large Language Models
Abstract: Generative Large Language Models (LLMs) are capable of being in-context learners. However, the underlying mechanism of in-context learning (ICL) is still a major research question, and experimental research results about how models exploit ICL are not always consistent. In this work, focusing on regression tasks, we propose a framework for evaluating in-context learning mechanisms, which we claim are a combination of retrieving internal knowledge and learning from in-context examples. First, we show that LLMs can solve real-world regression problems and then design experiments to measure the extent to which the LLM retrieves its internal knowledge versus learning from in-context examples. We argue that this process lies on a spectrum between these two extremes. We provide an in-depth analysis of the degrees to which these mechanisms are triggered depending on various factors, such as prior knowledge about the tasks and the type and richness of the information provided by the in-context examples. We employ three LLMs and utilize multiple datasets to corroborate the robustness of our findings. Our results shed light on how to engineer prompts to leverage meta-learning from in-context examples and foster knowledge retrieval depending on the problem being addressed.
Authors: Aliakbar Nafar, K. Brent Venable, Parisa Kordjamshidi
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.417",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: Multi3Hate: Multimodal, Multilingual, and Multicultural Hate Speech Detection with Vision–Language Models
Abstract: Hate speech moderation on global platforms poses unique challenges due to the multimodal and multilingual nature of content, along with varying cultural perceptions. How well do current vision-language models (VLMs) navigate these nuances? To investigate this, we create the first multimodal and multilingual parallel hate speech dataset, annotated by a multiculturally diverse set of annotators, called Multi3Hate. It contains 300 parallel meme samples across 5 languages: English, German, Spanish, Hindi, and Mandarin. We demonstrate that cultural background significantly affects multimodal hate speech annotation in our dataset. The average pairwise agreement among countries is just 74%, significantly lower than that of randomly selected annotator groups. Our qualitative analysis indicates that the lowest pairwise label agreement (only 67%, between the USA and India) can be attributed to cultural factors. We then conduct experiments with 5 large VLMs in a zero-shot setting, finding that these models align more closely with annotations from the US than with those from other cultures, even when the memes and prompts are presented in the native language of the other culture.
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.490",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Outstanding
|
Title: How Good Are LLMs for Literary Translation, Really? Literary Translation Evaluation with Humans and LLMs
Abstract: Recent research has focused on literary machine translation (MT) as a new challenge in MT. However, the evaluation of literary MT remains an open problem. We contribute to this ongoing discussion by introducing LITEVAL-CORPUS, a paragraph-level parallel corpus containing verified human translations and outputs from 9 MT systems, which totals over 2k translations and 13k evaluated sentences across four language pairs, costing 4.5k€. This corpus enables us to (i) examine the consistency and adequacy of human evaluation schemes with various degrees of complexity, (ii) compare evaluations by students and professionals, and assess the effectiveness of (iii) LLM-based metrics and (iv) LLMs themselves. Our findings indicate that the adequacy of human evaluation is controlled by two factors: the complexity of the evaluation scheme (more complex is less adequate) and the expertise of evaluators (higher expertise yields more adequate evaluations). For instance, MQM (Multidimensional Quality Metrics), a complex scheme and the de facto standard for non-literary human MT evaluation, is largely inadequate for literary translation evaluation: with student evaluators, nearly 60% of human translations are misjudged as indistinguishable or inferior to machine translations. In contrast, BWS (BEST-WORST SCALING), a much simpler scheme, identifies human translations at a rate of 80-100%. Automatic metrics fare dramatically worse, with rates of at most 20%. Our overall evaluation indicates that published human translations consistently outperform LLM translations, where even the most recent LLMs tend to produce considerably more literal and less diverse translations compared to humans.
|
{
"conference": "NAACL 2025",
"category": "Outstanding Paper",
"sheet": "NAACL",
"accepted_tags": "NAACL 2025",
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.548",
"pdf_url": "",
"track": null,
"conference_group": null,
"tags": null
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: MEXA: Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment
Abstract: English-centric large language models (LLMs) often show strong multilingual capabilities. However, their multilingual performance remains unclear and is under-evaluated for many other languages. Most benchmarks for multilinguality focus on classic NLP tasks or cover a minimal number of languages. We introduce MEXA, a method for assessing the multilingual capabilities of pre-trained English-centric LLMs using parallel sentences, which are available for more languages than existing downstream tasks. MEXA leverages the fact that English-centric LLMs use English as a pivot language in their intermediate layers. MEXA computes the alignment between English and non-English languages using parallel sentences to evaluate the transfer of language understanding from English to other languages. This alignment can be used to estimate model performance in different languages. We conduct controlled experiments using various parallel datasets (FLORES-200 and Bible), models (Llama family, Gemma family, Mistral, and OLMo), and established downstream tasks (Belebele, m-MMLU, and m-ARC). We explore different methods to compute embeddings in decoder-only models. Our results show that MEXA, in its default settings, achieves an average Pearson correlation of 0.90 between its predicted scores and actual task performance across languages. This suggests that MEXA is a reliable method for estimating the multilingual capabilities of English-centric LLMs, providing a clearer understanding of their multilingual potential and the inner workings of LLMs. Leaderboard: https://cis-lmu-mexa.hf.space, Code: https://github.com/cisnlp/MEXA.
Authors: Amir Hossein Kargaran, Ali Modarressi, Nafiseh Nikeghbal, Jana Diesner, François Yvon, Hinrich Schuetze
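A simplified, hypothetical take on the alignment computation: embed parallel sentences (random placeholders below stand in for pooled hidden states) and measure how often an English sentence's nearest non-English neighbour is its own translation; MEXA's exact scoring may differ.

```python
# Retrieval-style alignment score between two sets of parallel-sentence
# embeddings, using cosine similarity.
import numpy as np

def alignment_score(eng: np.ndarray, other: np.ndarray) -> float:
    eng = eng / np.linalg.norm(eng, axis=1, keepdims=True)
    other = other / np.linalg.norm(other, axis=1, keepdims=True)
    sims = eng @ other.T                        # cosine similarity matrix
    return float((sims.argmax(axis=1) == np.arange(len(eng))).mean())

rng = np.random.default_rng(0)
eng = rng.normal(size=(50, 32))                 # placeholder pooled hidden states
other = eng + 0.1 * rng.normal(size=(50, 32))   # noisy "translations"
print(alignment_score(eng, other))              # close to 1.0 when well aligned
```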
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.1385",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Towards Explainable Hate Speech Detection
Abstract: Recent advancements in deep learning have significantly enhanced the efficiency and accuracy of natural language processing (NLP) tasks. However, these models often require substantial computational resources, which remains a major drawback. Reducing the complexity of deep learning architectures and exploring simpler yet effective approaches can lead to cost-efficient NLP solutions. This is also a step towards explainable AI, i.e., uncovering how a particular task is carried out. For this analysis, we chose the task of hate speech detection. We address hate speech detection by introducing a model that employs a weighted sum of valence, arousal, and dominance (VAD) scores for classification. To determine the optimal weights and classification strategies, we analyze hate speech and non-hate speech words based on both their individual and summed VAD-values. Our experimental results demonstrate that this straightforward approach can compete with state-of-the-art neural network methods, including GPT-based models, in detecting hate speech.
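A minimal sketch of the weighted-VAD idea in this abstract: average a weighted sum of per-word valence/arousal/dominance values and threshold it. The lexicon entries, weights, and threshold below are invented for illustration, not the paper's fitted values.

```python
# Toy weighted-VAD classifier over a tiny illustrative lexicon.
VAD = {"hate": (0.1, 0.8, 0.6), "love": (0.9, 0.5, 0.5), "idiot": (0.2, 0.7, 0.4)}
W = (-1.0, 1.0, 0.2)                            # weights for (valence, arousal, dominance)

def vad_score(text: str) -> float:
    hits = [VAD[w] for w in text.lower().split() if w in VAD]
    if not hits:
        return 0.0
    return sum(sum(wi * xi for wi, xi in zip(W, vad)) for vad in hits) / len(hits)

def is_hate(text: str, threshold: float = 0.3) -> bool:
    return vad_score(text) > threshold

print(is_hate("you idiot"), is_hate("love this"))   # True False
```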
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.667",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Disambiguate First, Parse Later: Generating Interpretations for Ambiguity Resolution in Semantic Parsing
Abstract: Handling ambiguity and underspecification is an important challenge in natural language interfaces, particularly for tasks like text-to-SQL semantic parsing. We propose a modular approach that resolves ambiguity using natural language interpretations before mapping these to logical forms (e.g., SQL queries). Although LLMs excel at parsing unambiguous utterances, they show strong biases for ambiguous ones, typically predicting only preferred interpretations. We constructively exploit this bias to generate an initial set of preferred disambiguations and then apply a specialized infilling model to identify and generate missing interpretations. To train the infilling model, we introduce an annotation method that uses SQL execution to validate different meanings. Our approach improves interpretation coverage and generalizes across datasets with different annotation styles, database structures, and ambiguity types.
Authors: Irina Saparina, Mirella Lapata
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.863",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: EssayJudge: A Multi-Granular Benchmark for Assessing Automated Essay Scoring Capabilities of Multimodal Large Language Models
Abstract: Automated Essay Scoring (AES) plays a crucial role in educational assessment by providing scalable and consistent evaluations of writing tasks. However, traditional AES systems face three major challenges: (i) reliance on handcrafted features that limit generalizability, (ii) difficulty in capturing fine-grained traits like coherence and argumentation, and (iii) inability to handle multimodal contexts. In the era of Multimodal Large Language Models (MLLMs), we propose **EssayJudge**, the **first multimodal benchmark to evaluate AES capabilities across lexical-, sentence-, and discourse-level traits**. By leveraging MLLMs’ strengths in trait-specific scoring and multimodal context understanding, EssayJudge aims to offer precise, context-rich evaluations without manual feature engineering, addressing longstanding AES limitations. Our experiments with 18 representative MLLMs reveal gaps in AES performance compared to human evaluation, particularly in discourse-level traits, highlighting the need for further advancements in MLLM-based AES research. Our dataset and code will be available upon acceptance.
Authors: Jiamin Su, Yibo Yan, Fangteng Fu, Zhang Han, Jingheng Ye, Xiang Liu, Jiahao Huo, Huiyu Zhou, Xuming Hu
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.329",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Unsupervised Morphological Tree Tokenizer
Abstract: As a cornerstone in language modeling, tokenization involves segmenting text inputs into pre-defined atomic units. Conventional statistical tokenizers often disrupt constituent boundaries within words, thereby corrupting semantic information. To address this drawback, we introduce morphological structure guidance to tokenization and propose a deep model to induce character-level structures of words. Specifically, the deep model jointly encodes internal structures and representations of words with a mechanism named MorphOverriding to ensure the indecomposability of morphemes. By training the model with self-supervised objectives, our method is capable of inducing character-level structures that align with morphological rules without annotated training data. Based on the induced structures, our algorithm tokenizes words through vocabulary matching in a top-down manner. Empirical results indicate that the proposed method effectively retains complete morphemes and outperforms widely adopted methods such as BPE and WordPiece on both morphological segmentation tasks and language modeling tasks.
Authors: Qingyang Zhu, Xiang Hu, Pengyu Ji, Wei Wu, Kewei Tu
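A hedged sketch of the top-down matching step: walk an induced binary tree over a word's characters and emit a span as soon as it appears in the vocabulary, otherwise recurse into the children. The tree and vocabulary below are toy stand-ins for the model-induced structures.

```python
# Top-down tokenization over an induced binary tree via vocabulary matching.
def span_of(node):
    return node if isinstance(node, str) else span_of(node[0]) + span_of(node[1])

def tokenize(node, vocab):
    span = span_of(node)
    if span in vocab:
        return [span]
    if isinstance(node, str):                   # unmatched leaf: back off to characters
        return list(node)
    return tokenize(node[0], vocab) + tokenize(node[1], vocab)

tree = ("un", ("lock", "able"))                 # toy induced structure of "unlockable"
print(tokenize(tree, vocab={"un", "lock", "able"}))   # ['un', 'lock', 'able']
```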
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.1146",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: M²-TabFact: Multi-Document Multi-Modal Fact Verification with Visual and Textual Representations of Tabular Data
Abstract: Tabular data is used to store information in many real-world systems ranging from finance to healthcare. However, such structured data is often communicated to humans in visually interpretable formats (e.g. charts and textual paragraphs), making it imperative that fact-checking models should be able to reason over multiple pieces of structured evidence presented across different modalities. In this paper, we propose Multi-Document Multi-Modal Table-based Fact Verification (M²-TabFact), a challenging fact verification task that requires jointly reasoning over visual and textual representations of structured data. We design an automatic data generation pipeline that converts existing tabular data into descriptive visual and textual evidence. We then use Large Language Models to generate complex claims that depend on multi-document, multi-modal evidence. In total, we create 8,856 pairs of complex claims and multi-modal evidence through this procedure and systematically evaluate M²-TabFact with a set of strong vision-language models (VLMs). We find that existing VLMs have large gaps in fact verification performance compared to humans. Moreover, we find that they are imbalanced in their ability to reason about different modalities, and currently struggle to reason about information extracted from multiple documents.
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.1345",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Federated Data-Efficient Instruction Tuning for Large Language Models
Abstract: Instruction tuning is a crucial step in improving the responsiveness of pretrained large language models (LLMs) to human instructions. Federated learning (FL) helps to exploit the use of vast private instruction data from clients, becoming popular for LLM tuning by improving data diversity. Existing federated tuning simply consumes all local data, causing excessive computational overhead and overfitting to local data, while centralized data-efficient solutions are not suitable for FL due to privacy concerns. This work presents FedHDS, a federated data-efficient instruction tuning approach, which tunes LLMs with a representative subset of edge-side data. It reduces the data redundancy at both intra- and inter-client levels without sharing raw data. Experiments with various LLMs, datasets and partitions show that FedHDS improves Rouge-L on unseen tasks by an average of 10.72% over the SOTA full-data federated instruction tuning methods, while using less than 1.5% of the data samples, improving training efficiency by up to tens of times.
Authors: Zhen Qin, Zhaomin Wu, Bingsheng He, Shuiguang Deng
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.803",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Generative Error Correction for Emotion-aware Speech-to-text Translation
Abstract: This paper explores emotion-aware speech-to-text translation (ST) using generative error correction (GER) by large language models (LLMs). Despite recent advancements in ST, the impact of emotional content has been overlooked. First, we enhance the translation of emotional speech by adopting the GER paradigm: finetuning an LLM to generate the translation based on the decoded N-best hypotheses. Moreover, we incorporate emotion and sentiment labels into the LLM finetuning process to enable the model to consider the emotional content. In addition, we project the ST model’s latent representation into the LLM embedding space to further improve emotion recognition and translation. Experiments on an English-Chinese dataset show the effectiveness of the combination of GER, emotion and sentiment labels, and the projector for emotion-aware ST. Our code is available at https://github.com/N-Orien/EmoST.
Authors: Zhengdong Yang, Sheng Li, Chenhui Chu
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.1047",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: MotiveBench: How Far Are We From Human-Like Motivational Reasoning in Large Language Models?
Abstract: Large language models (LLMs) have been widely adopted as the core of agent frameworks in various scenarios, such as social simulations and AI companions. However, the extent to which they can replicate human-like motivations remains an underexplored question. Existing benchmarks are constrained by simplistic scenarios and the absence of character identities, resulting in an information asymmetry with real-world situations. To address this gap, we propose MotiveBench, which consists of 200 rich contextual scenarios and 600 reasoning tasks covering multiple levels of motivation. Using MotiveBench, we conduct extensive experiments on seven popular model families, comparing different scales and versions within each family. The results show that even the most advanced LLMs still fall short in achieving human-like motivational reasoning. Our analysis reveals key findings, including the difficulty LLMs face in reasoning about “love & belonging” motivations and their tendency toward excessive rationality and idealism. These insights highlight a promising direction for future research on the humanization of LLMs.
Authors: Xixian Yong, Jianxun Lian, Xiaoyuan Yi, Xiao Zhou, Xing Xie
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.1029",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data
Abstract: In real-world NLP applications, Large Language Models (LLMs) offer promising solutions due to their extensive training on vast datasets. However, the large size and high computation demands of LLMs limit their practicality in many applications, especially when further fine-tuning is required. To address these limitations, smaller models are typically preferred for deployment. However, their training is hindered by the scarcity of labeled data. In contrast, unlabeled data is often readily available and can be leveraged by using LLMs to generate pseudo-labels for training smaller models. This enables the smaller models (student) to acquire knowledge from LLMs (teacher) while reducing computational costs. This process introduces challenges, such as potentially noisy pseudo-labels. Selecting high-quality and informative data is therefore critical to enhance model performance while improving the efficiency of data utilization. To address this, we propose LLKD, which enables Learning with Less computational resources and less data for Knowledge Distillation from LLMs. LLKD is an adaptive sample selection method that incorporates signals from both the teacher and student. Specifically, it prioritizes samples where the teacher demonstrates high confidence in its labeling, indicating reliable labels, and where the student exhibits a high information need, identifying challenging samples that require further learning. Our comprehensive experiments show that LLKD achieves superior performance across various datasets with higher data efficiency.
Authors: Juanhui Li, Sreyashi Nag, Hui Liu, Xianfeng Tang, Sheikh Muhammad Sarwar, Limeng Cui, Hansu Gu, Suhang Wang, Qi He, Jiliang Tang
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.142",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: How Well Do LLMs Handle Cantonese? Benchmarking Cantonese Capabilities of Large Language Models
Abstract: The rapid evolution of large language models (LLMs) has transformed the competitive landscape in natural language processing (NLP), particularly for English and other data-rich languages. However, underrepresented languages like Cantonese, spoken by over 85 million people, face significant development gaps, which is particularly concerning given the economic significance of the Guangdong-Hong Kong-Macau Greater Bay Area and the substantial Cantonese-speaking populations in places like Singapore and North America. Despite its wide use, Cantonese has scant representation in NLP research, especially compared to other languages from similarly developed regions. To bridge these gaps, we outline current Cantonese NLP methods and introduce new benchmarks designed to evaluate LLM performance in factual generation, mathematical logic, complex reasoning, and general knowledge in Cantonese, which aim to advance open-source Cantonese LLM technology. We also propose future research directions and recommended models to enhance Cantonese LLM development.
Authors: Jiyue Jiang, Pengan Chen, Liheng Chen, Sheng Wang, Qinghang Bao, Lingpeng Kong, Yu Li, Chuan Wu
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.253",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators
Abstract: Triton, a high-level Python-like language designed for building efficient GPU kernels, is widely adopted in deep learning frameworks due to its portability, flexibility, and accessibility. However, programming and parallel optimization still require considerable trial and error from Triton developers. Despite advances in large language models (LLMs) for conventional code generation, these models struggle to generate accurate, performance-optimized Triton code, as they lack awareness of its specifications and the complexities of GPU programming. More critically, there is an urgent need for systematic evaluations tailored to Triton. In this work, we introduce TritonBench, the first comprehensive benchmark for Triton operator generation. TritonBench features two evaluation channels: a curated set of 184 real-world operators from GitHub and a collection of operators aligned with PyTorch interfaces. Unlike conventional code benchmarks prioritizing functional correctness, TritonBench also profiles efficiency performance on widely deployed GPUs aligned with industry applications. Our study reveals that current state-of-the-art code LLMs struggle to generate efficient Triton operators, highlighting a significant gap in high-performance code generation.
Authors: Jianling Li, ShangZhan Li, Zhenye Gao, Qi Shi, Yuxuan Li, Zefan Wang, Jiacheng Huang, WangHaojie WangHaojie, Jianrong Wang, Xu Han, Zhiyuan Liu, Maosong Sun
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.1183",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: ProMind-LLM: Proactive Mental Health Care via Causal Reasoning with Sensor Data
Abstract: Mental health risk is a critical global public health challenge, necessitating innovative and reliable assessment methods. With the development of large language models (LLMs), they stand out as a promising tool for explainable mental health care applications. Nevertheless, existing approaches predominantly rely on subjective textual mental records, which can be distorted by inherent mental uncertainties, leading to inconsistent and unreliable predictions. To address these limitations, this paper introduces ProMind-LLM. We investigate an innovative approach integrating objective behavior data as complementary information alongside subjective mental records for robust mental health risk assessment. Specifically, ProMind-LLM incorporates a comprehensive pipeline that includes domain-specific pretraining to tailor the LLM for mental health contexts, a self-refine mechanism to optimize the processing of numerical behavioral data, and causal chain-of-thought reasoning to enhance the reliability and interpretability of its predictions. Evaluations on two real-world datasets, PMData and Globem, demonstrate the effectiveness of our proposed methods, achieving substantial improvements over general LLMs. We anticipate that ProMind-LLM will pave the way for more dependable, interpretable, and scalable mental health care solutions.
Authors: Xinzhe Zheng, Sijie Ji, Jiawei Sun, Renqi Chen, Wei Gao, Mani Srivastava
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.1033",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM
Abstract: Integrating knowledge graphs (KGs) to enhance the reasoning capabilities of large language models (LLMs) is an emerging research challenge in claim verification. While KGs provide structured, semantically rich representations well-suited for reasoning, most existing verification methods rely on unstructured text corpora, limiting their ability to effectively leverage KGs. Additionally, despite possessing strong reasoning abilities, modern LLMs struggle with multi-step modular pipelines and reasoning over KGs without adaptation. To address these challenges, we propose ClaimPKG, an end-to-end framework that seamlessly integrates LLM reasoning with structured knowledge from KGs. Specifically, the main idea of ClaimPKG is to employ a lightweight, specialized LLM to represent the input claim as pseudo-subgraphs, guiding a dedicated subgraph retrieval module to identify relevant KG subgraphs. These retrieved subgraphs are then processed by a general-purpose LLM to produce the final verdict and justification. Extensive experiments on the FactKG dataset demonstrate that ClaimPKG achieves state-of-the-art performance, outperforming strong baselines in this research field by 9%-12% accuracy points across multiple categories. Furthermore, ClaimPKG exhibits zero-shot generalizability to unstructured datasets such as HoVer and FEVEROUS, effectively combining structured knowledge from KGs with LLM reasoning across various LLM backbones.
Authors: Hoang Pham, Thanh-Do Nguyen, Khac-Hoai Nam Bui
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-acl.274",
"pdf_url": "",
"track": "findings",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: Diffusion Models Through a Global Lens: Are They Culturally Inclusive?
Abstract: Text-to-image diffusion models have recently enabled the creation of visually compelling, detailed images from textual prompts. However, their ability to accurately represent various cultural nuances remains an open question. In our work, we introduce the CULTDIFF benchmark, evaluating whether state-of-the-art diffusion models can generate culturally specific images spanning ten countries. Through a fine-grained analysis of different similarity aspects, we show that these models often fail to generate cultural artifacts in architecture, clothing, and food, especially for underrepresented regions, revealing significant disparities in cultural relevance, description fidelity, and realism compared to real-world reference images. With the collected human evaluations, we develop a neural-based image-image similarity metric, namely CULTDIFF-S, to predict human judgment on real and generated images with cultural artifacts. Our work highlights the need for more inclusive generative AI systems and equitable dataset representation over a wide range of cultures.
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1503",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: R2D2: Remembering, Replaying and Dynamic Decision Making with a Reflective Agentic Memory
Abstract: The proliferation of web agents necessitates advanced navigation and interaction strategies within complex web environments. Current models often struggle with efficient navigation and action execution due to limited visibility and understanding of web structures. Our proposed R2D2 framework addresses these challenges by integrating two paradigms: Remember and Reflect. The Remember paradigm utilizes a replay buffer that aids agents in reconstructing the web environment dynamically, thus enabling the formulation of a detailed “map” of previously visited pages. This helps in reducing navigational errors and optimizing the decision-making process during web interactions. Conversely, the Reflect paradigm allows agents to learn from past mistakes by providing a mechanism for error analysis and strategy refinement, enhancing overall task performance. We evaluate R2D2 using the WEBARENA benchmark, demonstrating significant improvements over existing methods, including a 50% reduction in navigation errors and a threefold increase in task completion rates. Our findings suggest that a combination of memory-enhanced navigation and reflective learning promisingly advances the capabilities of web agents, potentially benefiting various applications such as automated customer service and personal digital assistants.
Authors: Tenghao Huang, Kinjal Basu, Ibrahim Abdelaziz, Pavan Kapanipathi, Jonathan May, Muhao Chen
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.1464",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: BELLE: A Bi-Level Multi-Agent Reasoning Framework for Multi-Hop Question Answering
Abstract: Multi-hop question answering (QA) involves finding multiple relevant passages and performing step-by-step reasoning to answer complex questions. Previous works on multi-hop QA employ specific methods from different modeling perspectives based on large language models (LLMs), regardless of the question types. In this paper, we first conduct an in-depth analysis of public multi-hop QA benchmarks, dividing the questions into four types and evaluating five types of cutting-edge methods for multi-hop QA: Chain-of-Thought (CoT), Single-step, Iterative-step, Sub-step, and Adaptive-step. We find that different types of multi-hop questions have varying degrees of sensitivity to different types of methods. Thus, we propose a Bi-levEL muLti-agEnt reasoning (BELLE) framework to address multi-hop QA by specifically focusing on the correspondence between question types and methods, where each type of method is regarded as an “operator” by prompting LLMs differently. The first level of BELLE includes multiple agents that debate to obtain an executive plan of combined “operators” to address the multi-hop QA task comprehensively. During the debate, in addition to the basic roles of affirmative debater, negative debater, and judge, at the second level, we further leverage fast and slow debaters to monitor whether changes in viewpoints are reasonable. Extensive experiments demonstrate that BELLE significantly outperforms strong baselines on various datasets. Additionally, BELLE is more cost-effective than single models in more complex multi-hop QA scenarios.
Authors: Taolin Zhang, Dongyang Li, Qizhou Chen, Chengyu Wang, Xiaofeng He
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.211",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: LLM-Powered Test Case Generation for Detecting Bugs in Plausible Programs
Abstract: Detecting tricky bugs in plausible programs, those that pass existing test suites yet still contain bugs, remains a significant challenge in software testing. To address this problem, we propose TrickCatcher, an LLM-powered approach to generating test cases for uncovering bugs in plausible programs. TrickCatcher operates in three stages: First, it uses an LLM to generate program variants based on the program under test (PUT) and its specification. Second, it employs an LLM to construct an input generator from the specification for producing test inputs. Finally, these inputs are executed on both the PUT and its program variants to detect inconsistencies in their outputs. We evaluate TrickCatcher on two datasets, TrickyBugs and EvalPlus, which include 366 human-written and 151 AI-generated plausible programs with tricky bugs. TrickCatcher achieves recall, precision, and F1 scores that are 1.80×, 2.65×, and 1.66× those of the state-of-the-art baselines, respectively. Code and data used are available at https://github.com/RinCloud/TrickCatcher/.
Authors: Kaibo Liu, Zhenpeng Chen, Yiyang Liu, Jie M. Zhang, Mark Harman, Yudong Han, Yun Ma, Yihong Dong, Ge Li, Gang Huang
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.20",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: AdaDHP: Fine-Grained Fine-Tuning via Dual Hadamard Product and Adaptive Parameter Selection
Abstract: As model parameters continue to expand, efficiently adapting large language models to downstream tasks is crucial in resource-limited conditions. Many parameter-efficient fine-tuning methods have emerged to address this challenge. However, they lack flexibility: LoRA requires manually selecting trainable parameters and rank size, while (IA)³ can only scale the activations along columns, yielding inferior results due to less precise fine-tuning. To address these issues, we propose a novel method named AdaDHP with fewer parameters and finer granularity, which can adaptively select important parameters for each task. Specifically, we introduce two trainable vectors for each parameter and fine-tune the parameters through the Hadamard product along both rows and columns. This significantly reduces the number of trainable parameters, with our parameter count capped at the lower limit of LoRA. Moreover, we design an adaptive parameter selection strategy to dynamically select important parameters for downstream tasks. This allows our method to flexibly remove unimportant parameters for downstream tasks. Finally, we demonstrate the superiority of our method on the T5-base model across 17 NLU tasks and on complex mathematical tasks with the Llama series models.
Authors: Han Liu, Changya Li, Xiaotong Zhang, Feng Zhang, Fenglong Ma, Wei Wang, Hong Yu
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.467",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: My Words Imply Your Opinion: Reader Agent-Based Propagation Enhancement for Personalized Implicit Emotion Analysis
Abstract: The subtlety of emotional expressions makes implicit emotion analysis (IEA) particularly sensitive to user-specific characteristics. Current studies personalize emotion analysis by focusing on the author but neglect the impact of the intended reader on implicit emotional feedback. In this paper, we introduce Personalized IEA (PIEA) and present the RAPPIE model, which addresses subjective variability by incorporating reader feedback. In particular, (1) we create reader agents based on large language models to simulate reader feedback, overcoming the “spiral of silence” effect and the incompleteness of real reader reactions. (2) We develop a role-aware multi-view graph learning approach to model the interactive emotion propagation process in scenarios with sparse reader information. (3) We construct two new PIEA datasets covering English and Chinese social media with detailed user metadata, addressing the text-centric limitation of existing datasets. Extensive experiments show that RAPPIE significantly outperforms state-of-the-art baselines, demonstrating the value of incorporating reader feedback in PIEA.
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.787",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: Explicit and Implicit Data Augmentation for Social Event Detection
Abstract: Social event detection involves identifying and categorizing important events from social media, which relies on labeled data, but annotation is costly and labor-intensive. To address this problem, we propose the Augmentation framework for Social Event Detection (SED-Aug), a plug-and-play dual augmentation framework that combines explicit text-based and implicit feature-space augmentation to enhance data diversity and model robustness. The explicit augmentation utilizes LLMs to enhance textual information through five diverse generation strategies. For implicit augmentation, we design five novel perturbation techniques that operate in the feature space on structurally fused embeddings. These perturbations are crafted to preserve the semantic and relational properties of the embeddings while making them more diverse. In experiments, SED-Aug outperforms the best baseline model by approximately 17.67% on the Twitter2012 dataset and by about 15.57% on the Twitter2018 dataset in terms of average F1 score.
Authors: Congbo Ma, Yuxia Wang, Jia Wu, Jian Yang, Jing Du, Zitai Qiu, Qing Li, Hu Wang, Preslav Nakov
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.412",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: A Variational Approach for Mitigating Entity Bias in Relation Extraction
Abstract: Mitigating entity bias is a critical challenge in Relation Extraction (RE), where models often rely excessively on entities, resulting in poor generalization. This paper presents a novel approach to address this issue by adapting a Variational Information Bottleneck (VIB) framework. Our method compresses entity-specific information while preserving task-relevant features. It achieves state-of-the-art performance on both general and financial domain RE datasets, excelling in in-domain settings (original test sets) and out-of-domain settings (modified test sets with type-constrained entity replacements). Our approach offers a robust, interpretable, and theoretically grounded methodology.
Authors: Samuel Mensah, Elena Kochkina, Jabez Magomere, Joy Prakash Sain, Simerjot Kaur, Charese Smiley
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-short.53",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: Measuring Data Diversity for Instruction Tuning: A Systematic Analysis and A Reliable Metric
Abstract: Data diversity is crucial for the instruction tuning of large language models. Existing studies have explored various diversity-aware data selection methods to construct high-quality datasets and enhance model performance. However, the fundamental problem of precisely defining and measuring data diversity remains underexplored, limiting clear guidance for data engineering. To address this, we systematically analyze 11 existing diversity measurement methods by evaluating their correlation with model performance through extensive fine-tuning experiments. Our results indicate that a reliable diversity measure should properly account for both inter-sample differences and the information density in the sample space. Building on this, we propose NovelSum, a new diversity metric based on sample-level “novelty.” Experiments on both simulated and real-world data show that NovelSum accurately captures diversity variations and achieves a 0.97 correlation with instruction-tuned model performance, highlighting its value in guiding data engineering practices. With NovelSum as an optimization objective, we further develop a greedy, diversity-oriented data selection strategy that outperforms existing approaches, validating both the effectiveness and practical significance of our metric.
Authors: Yuming Yang, Yang Nan, Junjie Ye, Shihan Dou, Xiao Wang, Shuo Li, Huijie Lv, Tao Gui, Qi Zhang, Xuanjing Huang
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.908",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: Generative Psycho-Lexical Approach for Constructing Value Systems in Large Language Models
Abstract: Values are core drivers of individual and collective perception, cognition, and behavior. Value systems, such as Schwartz’s Theory of Basic Human Values, delineate the hierarchy and interplay among these values, enabling cross-disciplinary investigations into decision-making and societal dynamics. Recently, the rise of Large Language Models (LLMs) has raised concerns regarding their elusive intrinsic values. Despite growing efforts in evaluating, understanding, and aligning LLM values, a psychologically grounded LLM value system remains underexplored. This study addresses the gap by introducing the Generative Psycho-Lexical Approach (GPLA), a scalable, adaptable, and theoretically informed method for constructing value systems. Leveraging GPLA, we propose a psychologically grounded five-factor value system tailored for LLMs. For systematic validation, we present three benchmarking tasks that integrate psychological principles with cutting-edge AI priorities. Our results reveal that the proposed value system meets standard psychological criteria, better captures LLM values, improves LLM safety prediction, and enhances LLM alignment, when compared to the canonical Schwartz’s values.
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.585",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: DioR: Adaptive Cognitive Detection and Contextual Retrieval Optimization for Dynamic Retrieval-Augmented Generation
Abstract: Dynamic Retrieval-augmented Generation (RAG) has shown great success in mitigating hallucinations in large language models (LLMs) during generation. However, existing dynamic RAG methods face significant limitations in two key aspects: 1) lack of an effective mechanism to control retrieval triggers, and 2) lack of effective scrutiny of retrieval content. To address these limitations, we propose an innovative dynamic RAG method, DioR (Adaptive Cognitive Detection and Contextual Retrieval Optimization), which consists of two main components: adaptive cognitive detection and contextual retrieval optimization, specifically designed to determine when retrieval is needed and what retrieved content is useful for LLMs. Experimental results demonstrate that DioR achieves superior performance on all tasks, confirming the effectiveness of our approach.
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.148",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: TWIST: Text-encoder Weight-editing for Inserting Secret Trojans in Text-to-Image Models
Abstract: Text-to-image (T2I) models excel at generating high-quality images from text via powerful text encoders, but training these encoders demands substantial computational resources. Consequently, many users seek pre-trained text encoders from model plugin-sharing platforms like Civitai and Hugging Face, which introduces an underexplored threat: the potential for adversaries to embed Trojans within these plugins. Existing Trojan attacks often require extensive training data and suffer from poor generalization across different triggers, limiting their effectiveness and scalability. To the best of our knowledge, this paper introduces the first **T**ext-encoder **W**eight-editing method for **I**nserting **S**ecret **T**rojans (**TWIST**). By identifying the *bottleneck MLP layer*—the critical point where minimal edits can dominantly control cross-modal alignment—TWIST achieves training-free and data-free Trojan insertion, which makes it highly efficient and practical. The experimental results across various triggers demonstrate that TWIST attains an average attack success rate of 91%, a 78% improvement over the state-of-the-art (SOTA) method proposed in 2024, and highlight its excellent generalization capability. Moreover, TWIST reduces modified parameters by 8-fold and cuts injection time to 25 seconds. Our findings underscore the security risks associated with text encoders in real-world applications and emphasize the need for more robust defense mechanisms.
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.541",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: UTBoost: Rigorous Evaluation of Coding Agents on SWE-Bench
Abstract: The advent of Large Language Models (LLMs) has spurred the development of coding agents for real-world code generation. As a widely used benchmark for evaluating the code generation capabilities of these agents, SWE-Bench uses real-world problems based on GitHub issues and their corresponding pull requests. However, the manually written test cases included in these pull requests are often insufficient, allowing generated patches to pass the tests without resolving the underlying issue. To address this challenge, we introduce UTGenerator, an LLM-driven test case generator that automatically analyzes codebases and dependencies to generate test cases for real-world Python projects. Building on UTGenerator, we propose UTBoost, a comprehensive framework for test case augmentation. In our evaluation, we identified 36 task instances with insufficient test cases and uncovered 345 erroneous patches incorrectly labeled as passed in the original SWE-Bench. These corrections, impacting 40.9% of SWE-Bench Lite and 24.4% of SWE-Bench Verified leaderboard entries, yield 18 and 11 ranking changes, respectively.
|
{
"conference": "ACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "acl2025_main.csv",
"url": "https://aclanthology.org/2025.acl-long.189",
"pdf_url": "",
"track": "main",
"conference_group": "acl",
"tags": "ACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression
Abstract: Despite recent efforts to understand the impact of compression on Large Language Models (LLMs) in terms of their downstream task performance and trustworthiness on relatively simpler uni-modal benchmarks (e.g. question answering, common sense reasoning), a detailed study of its impact on multi-modal Large Vision Language Models (LVLMs) is yet to be unveiled. Towards mitigating this gap, we present LVLM-Compress-Bench, a framework for a first thorough study of the broad impact of compression on the generative performance of LVLMs on multi-modal input-driven tasks. Specifically, we consider two major classes of compression for autoregressive models, namely KV cache and weight compression, for the dynamically growing intermediate cache and static weights, respectively. We use four LVLM variants of the popular LLaVA framework for our analysis, integrating various state-of-the-art KV and weight compression methods, including uniform, outlier-reduced, and group quantization. With this framework, we evaluate ten different multi-modal datasets with varied capabilities, including recognition, knowledge, language generation, spatial awareness, visual reasoning, hallucination and visual illusion identification, toxicity, stereotypes, and bias. In particular, our framework demonstrates the impact of compression on both general and ethically critical metrics, leveraging a combination of real-world and synthetic datasets that encompass diverse societal and intersectional attributes. Extensive experimental evaluations yield diverse and intriguing observations on the behavior of LVLMs at different quantization budgets for the KV cache and weights, both maintaining and losing performance compared to the baseline model with FP16 data format. We believe LVLM-Compress-Bench will help the community gain deeper insight into the impact of compression and the societal impact the compressed models may pose. Code will be released soon.
Authors: Souvik Kundu, Anahita Bhiwandiwalla, Sungduk Yu, Phillip Howard, Tiep Le, Sharath Nittur Sridhar, David Cobbley, Hao Kang, Vasudev Lal
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.84",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Analysis of LLM as a grammatical feature tagger for African American English
Abstract: African American English (AAE) presents unique challenges in natural language processing (NLP). This research systematically compares the performance of available NLP models—rule-based, transformer-based, and large language models (LLMs)—capable of identifying key grammatical features of AAE, namely Habitual Be and Multiple Negation. These features were selected for their distinct grammatical complexity and frequency of occurrence. The evaluation involved sentence-level binary classification tasks, using both zero-shot and few-shot strategies. The analysis reveals that while LLMs show promise compared to the baseline, they are influenced by biases such as recency and by unrelated features in the text such as formality. This study highlights the necessity for improved model training and architectural adjustments to better accommodate AAE’s unique linguistic characteristics. Data and code are available.
Authors: Rahul Porwal, Alice Rozet, Jotsna Gowda, Pryce Houck, Kevin Tang, Sarah Moeller
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.431",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: GRAG: Graph Retrieval-Augmented Generation
Abstract: Naive Retrieval-Augmented Generation (RAG) focuses on individual documents during retrieval and, as a result, falls short in handling networked documents, which are very popular in many applications such as citation graphs, social media, and knowledge graphs. To overcome this limitation, we introduce Graph Retrieval-Augmented Generation (GRAG), which tackles the fundamental challenges in retrieving textual subgraphs and integrating the joint textual and topological information into Large Language Models (LLMs) to enhance generation. To enable efficient textual subgraph retrieval, we propose a novel divide-and-conquer strategy that retrieves the optimal subgraph structure in linear time. To achieve graph context-aware generation, we incorporate textual graphs into LLMs through two complementary views—the text view and the graph view—enabling LLMs to more effectively comprehend and utilize the graph context. Extensive experiments on graph reasoning benchmarks demonstrate that in scenarios requiring multi-hop reasoning on textual graphs, our GRAG approach significantly outperforms current state-of-the-art RAG methods. Our datasets and the code for GRAG are available at https://github.com/HuieL/GRAG.
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.232",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Multi-Condition Guided Diffusion Network for Multimodal Emotion Recognition in Conversation
Abstract: Emotion recognition in conversation (ERC) involves identifying emotional labels associated with utterances within a conversation, a task that is essential for developing empathetic robots. Current research emphasizes contextual factors, the speaker’s influence, and extracting complementary information across different modalities. However, it often overlooks the cross-modal noise at the semantic level and the redundant information brought by the features themselves. This study introduces a diffusion-based approach designed to effectively address the challenges posed by redundant information and unexpected noise while robustly capturing shared semantics, thus facilitating the learning of compact and representative features from multimodal data. Specifically, we present the Multi-Condition Guided Diffusion Network (McDiff). McDiff employs a modal prior knowledge extraction strategy to derive the prior distribution for each modality, thereby enhancing the regional attention of each modality and applying the generated prior distribution at each diffusion step. Furthermore, we propose a method to learn the mutual information of each modality through specific objective constraints prior to the forward process, aiming to improve inter-modal interaction and mitigate the effects of noise and redundancy. Comprehensive experiments conducted on two multimodal datasets, IEMOCAP and MELD, demonstrate that McDiff significantly surpasses existing state-of-the-art methodologies, thereby affirming the generalizability and efficacy of the proposed model.
Authors: Wenjin Tian, Xianying Huang, Shihao Zou
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.177",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Task-wrapped Continual Learning in Task-Oriented Dialogue Systems
Abstract: Continual learning is vital for task-oriented dialogue systems (ToDs), and AdapterCL, equipped with residual adapters, has proven effective in this domain. However, its performance is limited by training separate adapters for each task, preventing global knowledge sharing. To address this, we propose **Task-wrapped Continual Learning (TCL)**, a novel framework that employs **Task-Wrapped Adapters (TWAs)** to simultaneously learn both global and task-specific information through parameter sharing. TCL leverages task-conditioned hypernetworks to transfer global knowledge across tasks, enabling TWAs to start from more informed initializations, efficiently learning task-specific details while reducing model parameters. Additionally, the simple, linear structure of both the hypernetworks and TWAs ensures stable training, with task-free inference supported through effective loss utilization. Across 37 ToD domains, TCL consistently outperforms AdapterCL, significantly reducing forgetting. Remarkably, by setting the task embedding dimension to 1, TCL achieves a 4.76% improvement over AdapterCL while using only 46% of the parameters. These findings position TWAs as a lightweight, powerful alternative to traditional adapters, offering a promising solution for continual learning in ToDs. The code is available at https://github.com/cloversjtu/TCL.
Authors: Min Zeng, Haiqin Yang, Xi Chen, Yike Guo
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.174",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Understanding the Role of Mental Models in User Interaction with an Adaptive Dialog Agent
Abstract: Mental models play an important role in whether user interactions with intelligent systems, such as dialog agents, are successful. Adaptive dialog systems present the opportunity to align a dialog agent’s behavior with heterogeneous user expectations. However, there has been little research into what mental models users form when interacting with a task-oriented dialog system, how these models affect users’ interactions, or what role system adaptation can play in this process. This can make it challenging to avoid damage to the human-AI partnership. In this work, we collect a new publicly available dataset for exploring user mental models of information seeking dialog systems. We demonstrate that users have a variety of conflicting mental models about such systems, the validity of which directly impacts the success and perception of their interactions. Furthermore, we show that adapting a dialog agent’s behavior to better align with users’ mental models, even when done implicitly, can improve dialog efficiency, success, and user perception of the interaction. This shows that implicit adaptation can be beneficial for task-oriented dialog systems, so long as developers understand the mental models of their users.
Authors: Lindsey Morgan Vanderlyn, Dirk Väth, Thang Vu
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.56",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: SimSMoE: Toward Efficient Training Mixture of Experts via Solving Representational Collapse
Abstract: Sparse mixture of experts (SMoE) have emerged as an effective approach for scaling large language models while keeping a constant computational cost. Regardless of several notable successes of SMoE, effective training such architecture remains elusive due to the representation collapse problem, which in turn harms model performance and causes parameter redundancy. In this work, we present Similarity-based Sparse Mixture of Experts (SimSMoE), a novel similarity of neural network algorithm, that guarantees a solution to address the representation collapse issue between experts given a fixed FLOPs budget. We conduct extensive empirical evaluations on three large language models for both Pre-training and Fine-tuning tasks to illustrate the efficacy, robustness, and scalability of our method. The results demonstrate that SimSMoE significantly enhances existing routing policy and outperforms other SMoE routing methods in performance for the tasks. Our implementation is publicly available at https://github.com/giangdip2410/SimSMoE.
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.107",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Uncovering Latent Arguments in Social Media Messaging by Employing LLMs-in-the-Loop Strategy
Abstract: The widespread use of social media has led to a surge in popularity for automated methods of analyzing public opinion. Supervised methods are adept at text categorization, yet the dynamic nature of social media discussions poses a continual challenge for these techniques due to the constant shifting of the focus. On the other hand, traditional unsupervised methods for extracting themes from public discourse, such as topic modeling, often reveal overarching patterns that might not capture specific nuances. Consequently, a significant portion of research into social media discourse still depends on labor-intensive manual coding techniques and a human-in-the-loop approach, which are both time-consuming and costly. In this work, we study the problem of discovering arguments associated with a specific theme. We propose a generic **LLMs-in-the-Loop** strategy that leverages the advanced capabilities of Large Language Models (LLMs) to extract latent arguments from social media messaging. To demonstrate our approach, we apply our framework to contentious topics. We use two publicly available datasets: (1) the climate campaigns dataset of 14k Facebook ads with 25 themes and (2) the COVID-19 vaccine campaigns dataset of 9k Facebook ads with 14 themes. Additionally, we design a downstream task as stance prediction by leveraging talking points in climate debates. Furthermore, we analyze demographic targeting and the adaptation of messaging based on real-world events.
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.413",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Lost in the Distance: Large Language Models Struggle to Capture Long-Distance Relational Knowledge
Abstract: Large language models (LLMs) have demonstrated impressive capabilities in handling long contexts, but challenges remain in capturing relational knowledge spread far apart within text. Connecting long-distance knowledge is important for solving tasks as the context length increases: imagine reading a lengthy detective novel where seemingly trivial information introduced early on often becomes essential during the climactic reveal of the culprit. In this study, we expose the “Lost in the Distance” phenomenon, where LLM performance in capturing relational knowledge degrades significantly when the related pieces of knowledge are separated by noise, i.e., sentences unrelated to the task. Specifically, we design an experiment in which we insert artificial noise between two related elements and observe model performance as the distance between them increases. Our findings show that while LLMs can handle edge noise with little impact, their ability to reason about distant relationships declines sharply as the intervening noise grows. These findings are consistent in both forward-looking prediction and backward-looking prediction settings. We validate this across various models (GPT-4, Gemini-1.5-pro, GPT-4o-mini, Gemini-1.5-flash, Claude-3.5-Sonnet) and tasks (causal reasoning and knowledge extraction). These results reveal a significant limitation in how LLMs process relational knowledge over long contexts. We release our code and data to support further research.
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.256",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: DomainSum: A Hierarchical Benchmark for Fine-Grained Domain Shift in Abstractive Text Summarization
Abstract: Most research on abstractive summarization focuses on single-domain applications, often neglecting how domain shifts between documents affect performance and the generalization ability of summarization models. To address this issue, we introduce DomainSum, a hierarchical benchmark designed to capture fine-grained domain shifts in abstractive summarization. We categorize these shifts into three levels: genre, style, and topic, and demonstrate through comprehensive benchmark analysis that they follow a hierarchical structure. Furthermore, we evaluate the domain generalization capabilities of commonly used pre-trained language models (PLMs) and large language models (LLMs) in both in-domain and cross-domain settings. Our benchmark and source code are released at https://github.com/hpzhang94/DomainSum.
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.118",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: MoLA: MoE LoRA with Layer-wise Expert Allocation
Abstract: Recent efforts to integrate low-rank adaptation (LoRA) with the Mixture-of-Experts (MoE) have managed to achieve performance comparable to full-parameter fine-tuning while tuning far fewer parameters. Despite promising results, research on improving the efficiency and expert analysis of LoRA with MoE is still in its early stages. Recent studies have shown that experts in the MoE architecture have different strengths and also exhibit some redundancy. Does this statement also apply to parameter-efficient MoE? In this paper, we introduce MoLA, a novel parameter-efficient MoE method for Transformer-based models in which each model layer uses a varying number of LoRA experts. We investigate several architectures with varying layer-wise expert configurations. Experiments on six well-known NLP and commonsense QA benchmarks demonstrate that MoLA achieves equal or superior performance compared to all baselines on top of LLaMA-2, Mistral, and Gemma. We find that allocating more LoRA experts to the middle layers further enhances the effectiveness of models with a certain total number of experts. The redundancy of the experts is more obvious in the lower layers. With far fewer parameters, this allocation strategy outperforms the setting with the same number of experts in every layer. This work can be widely used as a plug-and-play parameter-efficient tuning approach for various applications. The code has been made available at .
Authors: Chongyang Gao, Kezhen Chen, Jinmeng Rao, Ruibo Liu, Baochen Sun, Yawen Zhang, Daiyi Peng, Xiaoyuan Guo, Vs Subrahmanian
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.284",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: 2D-DPO: Scaling Direct Preference Optimization with 2-Dimensional Supervision
Abstract: Recent advancements in Direct Preference Optimization (DPO) have significantly enhanced the alignment of Large Language Models (LLMs) with human preferences, owing to its simplicity and effectiveness. However, existing methods typically optimize a scalar score or ranking reward, thereby overlooking the multi-dimensional nature of human preferences. In this work, we propose to extend the preference of DPO to two dimensions: segments and aspects. We first introduce a 2D supervision dataset called HelpSteer-2D. For the segment dimension, we divide the response into sentences and assign scores to each segment. For the aspect dimension, we meticulously design several criteria covering the response quality rubrics. With the 2-dimensional signals as feedback, we develop a 2D-DPO framework, decomposing the overall objective into multi-segment and multi-aspect objectives. Extensive experiments on popular benchmarks demonstrate that 2D-DPO performs better than methods that optimize for scalar or 1-dimensional preferences.
Authors: Shilong Li, Yancheng He, Hui Huang, Xingyuan Bu, Jiaheng Liu, Hangyu Guo, Weixun Wang, Jihao Gu, Wenbo Su, Bo Zheng
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.455",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Findings
|
Title: Rationale Behind Essay Scores: Enhancing S-LLM’s Multi-Trait Essay Scoring with Rationale Generated by LLMs
Abstract: Existing automated essay scoring (AES) has solely relied on essay text without using explanatory rationales for the scores, thereby forgoing an opportunity to capture the specific aspects evaluated by rubric indicators in a fine-grained manner. This paper introduces Rationale-based Multiple Trait Scoring (RMTS), a novel approach for multi-trait essay scoring that integrates prompt-engineering-based large language models (LLMs) with a fine-tuning-based essay scoring model using a smaller large language model (S-LLM). RMTS uses an LLM-based trait-wise rationale generation system where a separate LLM agent generates trait-specific rationales based on rubric guidelines, which the scoring model uses to accurately predict multi-trait scores. Extensive experiments on benchmark datasets, including ASAP, ASAP++, and Feedback Prize, show that RMTS significantly outperforms state-of-the-art models and vanilla S-LLMs in trait-specific scoring. By assisting quantitative assessment with fine-grained qualitative rationales, RMTS enhances the trait-wise reliability, providing partial explanations about essays. The code is available at .
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_findings.csv",
"url": "https://aclanthology.org/2025.findings-naacl.322",
"pdf_url": "",
"track": "findings",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: Towards Lifelong Dialogue Agents via Timeline-based Memory Management
Abstract: To achieve lifelong human-agent interaction, dialogue agents need to constantly memorize perceived information and properly retrieve it for response generation (RG). While prior studies focus on getting rid of outdated memories to improve retrieval quality, we argue that such memories provide rich, important contextual cues for RG (e.g., changes in user behaviors) in long-term conversations. We present THEANINE, a framework for LLM-based lifelong dialogue agents. THEANINE discards memory removal and manages large-scale memories by linking them based on their temporal and cause-effect relation. Enabled by this linking structure, THEANINE augments RG with memory timelines - series of memories representing the evolution or causality of relevant past events. Along with THEANINE, we introduce TeaFarm, a counterfactual-driven evaluation scheme, addressing the limitation of G-Eval and human efforts when assessing agent performance in integrating past memories into RG. A supplementary video for THEANINE and data for TeaFarm are at https://huggingface.co/spaces/ResearcherScholar/Theanine.
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.435",
"pdf_url": "",
"track": "main",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: Beyond Benchmarks: Building a Richer Cross-Document Event Coreference Dataset with Decontextualization
Abstract: Cross-Document Event Coreference (CDEC) annotation is challenging and difficult to scale, resulting in existing datasets being small and lacking diversity. We introduce a new approach leveraging large language models (LLMs) to decontextualize event mentions, by simplifying the document-level annotation task to sentence pairs with enriched context, enabling the creation of Richer EventCorefBank (RECB), a denser and more expressive dataset annotated at a faster speed. Decontextualization has been shown to improve annotation speed without compromising quality and to enhance model performance. Our baseline experiment indicates that systems trained on RECB achieve comparable results on the EventCorefBank (ECB+) test set, showing the high quality of our dataset and its generalizability to other CDEC datasets. In addition, our evaluation shows that strong baseline models still struggle with RECB compared to other CDEC datasets, suggesting that the richness and diversity of RECB present significant challenges to current CDEC systems.
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.178",
"pdf_url": "",
"track": "main",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection
Abstract: Recent advances on instruction fine-tuning have led to the development of various prompting techniques for large language models, such as explicit reasoning steps. However, the success of techniques depends on various parameters, such as the task, language model, and context provided. Finding an effective prompt is, therefore, often a trial-and-error process. Most existing approaches to automatic prompting aim to optimize individual techniques instead of compositions of techniques and their dependence on the input. To fill this gap, we propose an adaptive prompting approach that predicts the optimal prompt composition ad-hoc for a given input. We apply our approach to social bias detection, a highly context-dependent task that requires semantic understanding. We evaluate it with three large language models on three datasets, comparing compositions to individual techniques and other baselines. The results underline the importance of finding an effective prompt composition. Our approach robustly ensures high detection performance, and is best in several settings. Moreover, first experiments on other tasks support its generalizability.
Authors: Maximilian Spliethöver, Tim Knebler, Fabian Fumagalli, Maximilian Muschalik, Barbara Hammer, Eyke Hüllermeier, Henning Wachsmuth
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.122",
"pdf_url": "",
"track": "main",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: One fish, two fish, but not the whole sea: Alignment reduces language models' conceptual diversity
Abstract: Researchers in social science and psychology have recently proposed using large language models (LLMs) as replacements for humans in behavioral research. In addition to arguments about whether LLMs accurately capture population-level patterns, this has raised questions about whether LLMs capture human-like conceptual diversity. Separately, it is debated whether post-training alignment (RLHF or RLAIF) affects models’ internal diversity. Inspired by human studies, we use a new way of measuring the conceptual diversity of synthetically-generated LLM “populations” by relating the internal variability of simulated individuals to the population-level variability. We use this approach to evaluate non-aligned and aligned LLMs on two domains with rich human behavioral data. While no model reaches human-like diversity, aligned models generally display less diversity than their instruction fine-tuned counterparts. Our findings highlight potential trade-offs between increasing models’ value alignment and decreasing the diversity of their conceptual representations.
Authors: Sonia Krishna Murthy, Tomer Ullman, Jennifer Hu
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.561",
"pdf_url": "",
"track": "main",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: On the Analysis and Distillation of Emergent Outlier Properties in Pre-trained Language Models
Abstract: A small subset of dimensions within language Transformers’ representation spaces emerge as “outliers” during pretraining, encoding critical knowledge sparsely. We extend previous findings on emergent outliers to Encoder-Decoder Transformers and instruction-finetuned models, and tackle the problem of distilling a student Transformer from a larger teacher Transformer. Knowledge distillation reduces model size and cost by transferring knowledge from a larger teacher to a smaller student, necessitating a trade-off among representation dimensions. We show that emergent outlier dimensions contribute significantly more to zero-shot performance than non-outlier dimensions. Based on this, we propose the Emergent Outlier Focused Distillation (EOFD) method, which prioritizes critical outlier dimensions in distillation using a weighted MSE loss. We empirically demonstrate that EOFD outperforms state-of-the-art distillation methods and generalizes well across Encoder-only BERT, Decoder-only GPT-2, and Encoder-Decoder T5 architectures.
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.430",
"pdf_url": "",
"track": "main",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: CBT-Bench: Evaluating Large Language Models on Assisting Cognitive Behavior Therapy
Abstract: There is a significant gap between patient needs and available mental health support today. In this paper, we aim to thoroughly examine the potential of using Large Language Models (LLMs) to assist professional psychotherapy. To this end, we propose a new benchmark, CBT-Bench, for the systematic evaluation of cognitive behavioral therapy (CBT) assistance. We include three levels of tasks in CBT-Bench: **I: Basic CBT knowledge acquisition**, with the task of multiple-choice questions; **II: Cognitive model understanding**, with the tasks of cognitive distortion classification, primary core belief classification, and fine-grained core belief classification; **III: Therapeutic response generation**, with the task of generating responses to patient speech in CBT therapy sessions. These tasks encompass key aspects of CBT that could potentially be enhanced through AI assistance, while also outlining a hierarchy of capability requirements, ranging from basic knowledge recitation to engaging in real therapeutic conversations. We evaluated representative LLMs on our benchmark. Experimental results indicate that while LLMs perform well in reciting CBT knowledge, they fall short in complex real-world scenarios requiring deep analysis of patients’ cognitive structures and generating effective responses, suggesting potential future work.
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": false,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.196",
"pdf_url": "",
"track": "main",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Which recognition tier (Findings/Main/Outstanding/Best) best fits this paper?
|
[
"Findings",
"Main",
"Outstanding",
"Best"
] |
Main
|
Title: A Novel Computational Modeling Foundation for Automatic Coherence Assessment
Abstract: Coherence is an essential property of well-written texts that refers to the way textual units relate to one another. In the era of generative AI, coherence assessment is essential for many NLP tasks such as summarization, long-form question-answering, and more. Current NLP approaches for modeling coherence often rely on a proxy task, specifically, . However, such an approach may not capture the full range of factors contributing to coherence. To remedy this, in this work we employ the formal linguistic definition by Reinhart (1980) of what makes a discourse coherent, consisting of three conditions, and , and formalize these conditions as respective computational tasks, which are in turn jointly trained. We evaluate this modeling approach on two human-rated coherence benchmarks: one of automatically-generated stories and one of real-world texts. Our experiments show that jointly training on the proposed tasks leads to better performance on each task compared with task-specific models, and to better performance on assessing coherence overall. Our proposed computational framework thus paves the way for a more advanced, broad-coverage coherence assessment.
Authors: Aviya Maimon
|
{
"conference": "NAACL 2025",
"category": null,
"sheet": null,
"accepted_tags": null,
"authors_included": true,
"year": 2025,
"cutoff_period": "future_2025",
"source_csv": "naacl2025_main.csv",
"url": "https://aclanthology.org/2025.naacl-long.277",
"pdf_url": "",
"track": "main",
"conference_group": "naacl",
"tags": "NAACL 2025"
}
|
Proof of Time: A Benchmark for Evaluating Scientific Idea Judgments
This dataset contains benchmarks for evaluating LLM agents on academic paper analysis tasks that require understanding research trends, citations, and future directions. All evaluation data uses post-training-cutoff (2025) papers to avoid data contamination.
Dataset Description
Paper: Proof of Time: A Benchmark for Evaluating Scientific Idea Judgments
Repository: https://github.com/shan23chen/proof_of_time
This dataset includes:
- Benchmark Tasks (3.8 MB): JSONL files with multiple-choice questions and evaluation samples
- Sandbox Data (66 MB): Historical paper data, faculty publications, and SOTA metrics for agent evaluation
Why "Proof of Time"?
The benchmark suite focuses on temporal reasoning: agents must analyze historical patterns to make predictions about future research directions, award recipients, and citation impact. Tasks require genuine understanding of research trends rather than memorization.
Dataset Structure
Benchmarks Directory (3.8 MB)
Contains 4 task families with 10 evaluation datasets:
Award Prediction (641 KB)
Predict which papers will win best paper awards at top NLP conferences.
- pre-cutoff_mcq.jsonl (421 KB): Pre-2025 conference awards (ACL/EMNLP/NAACL 2018-2024)
- post-cutoff_emnlp.jsonl (29 KB): Post-2025 EMNLP awards
- post-cutoff_acl_naacl.jsonl (191 KB): Post-2025 ACL/NAACL awards
Citation Forecasting (2.7 MB)
Predict future citation counts for recently published papers.
- multiple_choice.jsonl (1.1 MB): Predict the highest-cited paper among the choices
- ranking.jsonl (1.2 MB): Rank papers by predicted citation counts
- bucket_prediction.jsonl (368 KB): Classify papers into citation ranges (0-1, 1-5, 5-10, 10-50, 50+); see the sketch after this list
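For illustration, here is a minimal sketch of how a raw citation count could map onto these bucket labels. The boundary convention used here (lower-inclusive, upper-exclusive) is an assumption for illustration only; consult the benchmark files for the authoritative rule.

def citation_bucket(count: int) -> str:
    # Map a citation count to a bucket label. Boundary handling is
    # assumed (upper bound exclusive); the dataset may define it differently.
    if count < 1:
        return "0-1"
    if count < 5:
        return "1-5"
    if count < 10:
        return "5-10"
    if count < 50:
        return "10-50"
    return "50+"

# Example: citation_bucket(12) returns "10-50"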
Faculty Future Work (469 KB)
Predict research directions of AI faculty members based on publication history.
- professor_field_mcq.jsonl (49 KB): Predict the research field of a professor's future work
- professor_article_mcq.jsonl (404 KB): Predict which article a professor would author
- field_focus_mcq.jsonl (16 KB): Classify research focus by field
SOTA Forecasting (26 KB)
Predict state-of-the-art performance ranges on ML benchmarks.
- mcq_dataset.jsonl (26 KB): Predict benchmark performance buckets (0-20, 20-40, 40-60, 60-80, 80-100)
Sandbox Data Directory (66 MB)
Reference data for ReAct agents to query during evaluation:
- citation/historical_papers_2021_2024.jsonl (21 MB): Historical papers with citation counts
- award/accepted_papers.csv (19 MB): EMNLP accepted papers (2018-2025)
- faculty/faculty_publications.jsonl (20 MB): Aggregated publications for 76 AI faculty
- faculty/faculty_publications.tar.gz (5.9 MB): Individual CSV files per faculty member
- sota/sota_metrics.json (8.7 KB): Frontier model benchmark scores (October 2025)
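As a quick way to peek at the sandbox data, the JSONL files can be streamed line by line. The snippet below is a sketch; the field names (title, citations) are assumptions about the record schema, not documented guarantees.

import json

# Stream the 21 MB citation file without loading it all into memory
with open("citation/historical_papers_2021_2024.jsonl") as f:
    for line in f:
        paper = json.loads(line)
        # Field names here are assumed; inspect a real record to confirm
        print(paper.get("title"), paper.get("citations"))
        break  # peek at the first record only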
Usage
With Inspect AI
from datasets import load_dataset
from inspect_ai import eval

# Load the dataset from the Hugging Face Hub
ds = load_dataset("AIM-Harvard/proof-of-time")

# Run an evaluation with Inspect AI (task file and name are placeholders)
eval(
    "your_benchmark.py@task_name",
    model="openai/gpt-5-mini-2025-08-07",
    limit=5,
)
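Individual benchmark files can also be loaded into Inspect AI directly by mapping the fields documented under "Dataset Fields" below onto Inspect's sample schema. The file path and field mapping in this sketch are assumptions based on the listings in this card, not verified against the repository:

from inspect_ai.dataset import FieldSpec, json_dataset

# Map the benchmark's JSONL fields onto Inspect AI samples
dataset = json_dataset(
    "benchmarks/award/pre-cutoff_mcq.jsonl",  # assumed path within the repo
    FieldSpec(input="question", target="answer", choices="choices"),
)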
Quick Start
# Install dependencies
pip install inspect-ai datasets

# Clone the repository with benchmark implementations
git clone https://github.com/shan23chen/proof_of_time.git
cd proof_of_time

# Download (and cache) the dataset via a Python one-liner
python -c "from datasets import load_dataset; load_dataset('AIM-Harvard/proof-of-time')"

# Run an evaluation
inspect eval benchmarks/award_react/benchmark.py@pre_cutoff_task \
    --model openai/gpt-5-mini-2025-08-07 \
    --limit 5
Dataset Fields
Each benchmark JSONL file contains samples with:
- question: Task prompt for the agent
- answer: Correct answer (for evaluation)
- choices: Multiple-choice options (if applicable)
- metadata: Additional context (paper titles, years, venues, authors, etc.)
Example from award prediction:
{
  "question": "Which recognition tier (Findings/Main/Outstanding/Best) best fits the paper?",
  "context": "{title}+{abstract}+{author}",
  "answer": "A",
  "choices": ["Best", "Outstanding", "Main", "Findings"]
}
Benchmark Design
- ReAct Agents: Agents use tools (bash, Python, text editor) to explore sandboxed paper datasets
- Sandboxed Environments: Docker containers with read-only paper data (no internet access)
- Offline Prompt: Custom "Antigravity" prompt inspired by principles of focused exploration
- Multiple Variants: Each task has standard (agent), simple (zero-shot), and no-offline-prompt versions
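For orientation, the design above could be wired together with Inspect AI's standard building blocks roughly as follows. This is a sketch, not the repository's actual implementation (see benchmarks/award_react/benchmark.py for that); the dataset path, field mapping, and scorer choice are all assumptions:

from inspect_ai import Task, task
from inspect_ai.dataset import FieldSpec, json_dataset
from inspect_ai.scorer import match
from inspect_ai.solver import basic_agent
from inspect_ai.tool import bash, python

@task
def award_react_sketch():
    # ReAct-style agent with bash/python tools, run in a Docker sandbox
    return Task(
        dataset=json_dataset(
            "benchmarks/award/pre-cutoff_mcq.jsonl",  # assumed path
            FieldSpec(input="question", target="answer", choices="choices"),
        ),
        solver=basic_agent(tools=[bash(), python()]),
        scorer=match(),
        sandbox="docker",
    )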
Supported Models
The benchmark suite has been tested with:
- OpenAI: gpt-5.2, gpt-5.1, gpt-5-mini, gpt-5-nano
- Google: gemini-3-pro, gemini-3-flash, vertex/gemini-2.5-pro, vertex/gemini-2.5-flash
- Anthropic: vertex/claude-opus-4-5, vertex/claude-sonnet-4-5, vertex/claude-haiku-4-5
Data Sources
- Award Predictions: ACL Anthology, EMNLP/ACL/NAACL conference proceedings
- Citation Forecasting: Google Scholar citation counts
- Faculty Predictions: AI faculty CVs and publication records
- SOTA Forecasting: Papers with Code leaderboards
License
- Code: MIT License
- Data: Derived from publicly available academic papers and conference proceedings
Citation
If you use this dataset in your research, please cite:
@misc{ye2026prooftimebenchmarkevaluating,
title={Proof of Time: A Benchmark for Evaluating Scientific Idea Judgments},
author={Bingyang Ye and Shan Chen and Jingxuan Tu and Chen Liu and Zidi Xiong and Samuel Schmidgall and Danielle S. Bitterman},
year={2026},
eprint={2601.07606},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2601.07606},
}
For the dataset:
@dataset{proof-of-time-dataset-2026,
title={Proof of Time: A Benchmark for Evaluating Scientific Idea Judgments},
author={AIM Harvard},
year={2026},
publisher={HuggingFace},
url={https://huggingface.co/datasets/AIM-Harvard/proof-of-time}
}
Additional Resources
- GitHub Repository: https://github.com/shan23chen/proof_of_time
- Documentation: See repository README for detailed usage
- Setup Guide: SETUP.md
- Paper: arXiv
Contact
- Issues: https://github.com/shan23chen/proof_of_time/issues
- Email: aim@seas.harvard.edu
- Organization: AIM Harvard
Updates
- 2026-01-08: Initial release (Tiers 1-2: benchmarks + sandbox data, 69.8 MB total)