- 360°REA: Towards A Reusable Experience Accumulation with 360° Assessment for Multi-Agent System
- 3MVRD: Multimodal Multi-task Multi-teacher Visually-Rich Form Document Understanding
- A + B: A General Generator-Reader Framework for Optimizing LLMs to Unleash Synergy Potential
- A Causal Approach for Counterfactual Reasoning in Narratives
- A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains
- A Chinese Dataset for Evaluating the Safeguards in Large Language Models
- A Community-Centric Perspective for Characterizing and Detecting Anti-Asian Violence-Provoking Speech
- A Comprehensive Evaluation of Quantization Strategies for Large Language Models
- A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models
- A Critical Study of What Code-LLMs (Do Not) Learn
- A Curious Case of Searching for the Correlation between Training Data and Adversarial Robustness of Transformer Textual Models
- A Data-Driven Guided Decoding Mechanism for Diagnostic Captioning
- A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques
- A Glitch in the Matrix: Locating and Detecting Language Model Grounding with Fakepedia
- A Graph per Persona: Reasoning about Subjective Natural Language Descriptions
- A Grounded Preference Model for LLM Alignment
- A Joint Coreference-Aware Approach to Document-Level Target Sentiment Analysis
- A Large Collection of Model-generated Contradictory Responses for Consistency-aware Dialogue Systems
- A Mechanistic Analysis of a Transformer Trained on a Symbolic Multi-Step Reasoning Task
- A Meta-Learning Perspective on Transformers for Causal Language Modeling
- A Modular Approach for Multimodal Summarization of TV Shows
- A Multi-Task Embedder For Retrieval Augmented LLMs
- A Multidimensional Framework for Evaluating Lexical Semantic Change with Social Science Applications
- A Non-autoregressive Generation Framework for End-to-End Simultaneous Speech-to-Any Translation
- A Novel Cartography-Based Curriculum Learning Method Applied on RoNLI: The First Romanian Natural Language Inference Corpus
- A Semantic Distance Metric Learning approach for Lexical Semantic Change Detection
- A Sentiment Consolidation Framework for Meta-Review Generation
- A Ship of Theseus: Curious Cases of Paraphrasing in LLM-Generated Texts
- A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism
- A Survey on Modelling Morality for Text Analysis
- A Survey on Predicting the Factuality and the Bias of News Media
- A Tale of Two Revisions: Summarizing Changes Across Document Versions
- A Two-Agent Game for Zero-shot Relation Triplet Extraction
- A Two-Stage Adaptation of Large Language Models for Text Ranking
- A Unified Generative Framework for Bilingual Euphemism Detection and Identification
- A Unified Joint Approach with Topological Context Learning and Rule Augmentation for Knowledge Graph Completion
- A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation and Extrapolation
- A multi-level multi-label text classification dataset of 19th century Ottoman and Russian literary and critical texts
- A synthetic data approach for domain generalization of NLI models
- ABEX: Data Augmentation for Low-Resource NLU via Expanding Abstract Descriptions
- ACUEval: Fine-grained Hallucination Evaluation and Correction for Abstractive Summarization
- ADAM: Dense Retrieval Distillation with Adaptive Dark Examples
- AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models
- AFPQ: Asymmetric Floating Point Quantization for LLMs
- AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators
- AGB-DE: A Corpus for the Automated Legal Assessment of Clauses in German Consumer Contracts
- AGR: Reinforced Causal Agent-Guided Self-explaining Rationalization
- AI ‘News’ Content Farms Are Easy to Make and Hard to Detect: A Case Study in Italian
- AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension
- ALaRM: Align Language Models via Hierarchical Rewards Modeling