the total number of ideas tested (format: improved / successful / tested).

Method | AutoRYP Max R2 | Avg R2 | Ideas (i/s/t) | Auto2DCls Max Acc | Avg Acc | Ideas (i/s/t) | AutoSenCls Max Acc | Avg Acc | Ideas (i/s/t)
Baseline | 27.6 | 27.6 | - | 81.2 | 81.2 | - | 91.0 | 91.0 | -
NovelSeek (w/o AE) | 34.7 | 33.0 | 2/5/10 | 81.6 | 81.5 | 2/5/10 | 92.4 | 91.9 | 6/8/10
NovelSeek | 35.4 | 33.5 | 4/6/10 | 83.3 | 82.2 | 5/7/10 | 93.5 | 92.5 | 9/9/10

Initial keyword queries: LLaMA, LoRA adapter, regression model, reaction yield prediction, SMILES fine-tuning

Mode: Literature Review
1. DRG-LLaMA: Tuning LLaMA Model to Predict Diagnosis-related Group for Hospitalized Patients
2. LoRA-Pro: Are Low-Rank Adapters Properly Optimized
3. Multimodal Transformer-based Model for Buchwald-Hartwig and Suzuki-Miyaura Reaction Yield Prediction
4. ReacLLaMA: Merging chemical and textual information in chemical reactivity AI models
5. HaLoRA: Hardware-aware Low-Rank Adaptation for Large Language Models Based on Hybrid Compute-in-Memory Architecture
6. Regression with Large Language Models for Materials and Molecular Property Prediction
7. AI-Guided Design of MALDI Matrices: Exploring the Electron Transfer Chemical Space for Mass Spectrometric Analysis of Low-Molecular-Weight Compounds
8. Generative LLMs in Organic Chemistry: Transforming Esterification Reactions into Natural Language Procedures
…

Mode: DeepResearch
1. Pre-training Transformers for Molecular Property Prediction Using Reaction Prediction
2. Demystifying Molecules: Unveiling the Power of AI in Computational Chemistry
3. Permutation invariant graph-to-sequence model for template-free retrosynthesis and reaction prediction
…
New keyword queries: AI chemistry, molecular property prediction, reaction modeling, transformer, SMILES optimization
1. Large Language Models to Accelerate Organic Chemistry Synthesis
2. Exploring BERT for Reaction Yield Prediction: Evaluating the Impact of Tokenization, Molecular Representation, and Pretraining Data Augmentation
3.
Chemist-X: Large Language Model-empowered Agent for Reaction Condition Recommendation in Chemical Synthesis
…
New keyword queries: large language models, chemistry reaction yield optimization, embedding molecular representation

Figure 4: Analysis of the two modes of the survey agent.

The root node (i.e., Init Idea 0) denotes an initially generated idea, and the child nodes are evolved from their parent nodes. As ideas continue to evolve, more external knowledge sourced from the survey agent is incorporated, which enriches the content and enhances the practicality of the ideas. For example, starting from a basic idea such as "adding a graph-derived reaction descriptor as a precondition for attention scores in the transformer architecture", the agent refines and evolves the idea iteratively. At each step, the evolved idea improves on its predecessor in technical sophistication, novelty, or practical applicability. As illustrated in Fig. 3, this process can involve incorporating more specific chemical descriptors, introducing cross-modality attention mechanisms, or leveraging hierarchical architectures, with each evolution step guided by additional insights from the literature or domain knowledge, thus ensuring continuous advancement of the ideas.

Analysis on Idea-to-Methodology Phase. The correspondence between an idea and its final code implementation plays a crucial role in assessing the effectiveness of the idea, since an idea can only be verified once its experiments have been conducted. The goal of the idea-to-methodology process is to generate detailed methodologies so that code can be written from these comprehensive method descriptions (e.g., method-level descriptions in research papers). As illustrated in Fig. 5, our idea-to-method approach enables the generation of fine-grained methodologies, which facilitates accurate and faithful code implementation.

NovelSeek: Starting Point of Innovation
https://arxiv.org/abs/2505.16938v2

Table 9: Comparison with AI-Scientist-V2 and AI-Researcher on the AutoRYP and Auto2DCls tasks. Total cost means the cost of the whole session. For each task, we conduct 10 experiments. AI-Scientist-V2 and AI-Researcher demonstrate relatively weak baseline-improvement capabilities, with AI-Scientist-V2 in particular struggling to write code that runs correctly. The primary reason is that AI-Scientist-V2's pipeline uses limited task-related information (e.g., task formulation and type, relevant papers, and commonly used code) when generating new ideas or code. As a result, its generated ideas tend to be more divergent and difficult to implement.

Method | AutoRYP Max R2 | Avg R2 | Total Cost | Auto2DCls Max Acc | Avg Acc | Total Cost
Baseline | 27.6 | 27.6 | - | 81.2 | 81.2 | -
AI-Scientist-V2 (Yamada et al., 2025) | - | - | $15 | - | - | $10
AI-Researcher (Lab, 2025) | 12.3 | - | $25 | 80.3 | - | $32
NovelSeek | 35.4 | 33.5 | $3 | 83.3 | 82.2 | $3

Analysis on Evolutionary Experimental Planning and Execution. To verify the effectiveness of adaptive evolution (AE), we conduct ablation studies on three tasks, ranging from AI tasks to scientific tasks: AutoRYP, Auto2DCls, and AutoSenCls. As shown in Tab. 8, performance can be further improved on multiple tasks with adaptive evolution. For example, on the image classification task, the max accuracy and the mean accuracy improve by 1.7% and 0.7%, respectively, compared to the setting without AE. This is because our coder agent can automatically analyze previous results against the baseline results and re-plan the subsequent experiments. Moreover, the successful execution rate and the percentage of performance gains also improve (e.g., on AutoRYP, the percentage of performance gains is 40% compared to 20% without AE). This is because, with AE, the coder implements the idea step by step and analyzes the experimental phenomena after each stage of the experiments.

Improving the Baseline in Multiple Dimensions.
NovelSeek not only improves performance on different tasks but also enhances the quality of ideas along other dimensions. For example, as shown in Tab. 7 for the few-shot yield prediction task, we find that the results of the baseline methods are unstable: repeated experiments exhibit large variance (e.g., 24.2±4.2 when train-set=60). In contrast, the methods proposed by NovelSeek improve both the performance and the stability of the results. For example, when train-set=60, ADAGT, proposed by NovelSeek, achieves an average R2 of 34.8 across 5 repeated experiments, compared to 24.2 for the baseline method. Moreover, the variance of the results achieved by ADAGT (i.e., ±1.1) is much lower than that of the baseline methods (i.e., ±4.2). This further demonstrates the quality of NovelSeek's ideas and code implementations.

Comparison with AI-Researcher. We evaluate the performance and cost of NovelSeek and AI-Researcher (Lab, 2025) on the AutoRYP and Auto2DCls research tasks. To ensure a fair comparison, we supplied AI-Researcher with the same code templates that NovelSeek uses. Both systems employ GPT-4o-2024-08-06 for idea generation and Claude-3-7-Sonnet-20250219 for code generation. As shown in Tab. 9, NovelSeek outperforms both the baseline methods and AI-Researcher across both
tasks, whereas AI-Researcher is unable to improve the provided baselines. One important reason for NovelSeek's strong performance is its ability to generate novel ideas through extensive search and reflection. Moreover, NovelSeek has a complete experimental planning and adaptive evolution mechanism, which enables it to achieve better performance. In contrast, the ideas generated by AI-Researcher depend more heavily on user-provided reference papers, limiting their novelty. Moreover, AI-Researcher often ignores the prior information in the existing codebases, which further hinders its performance. In terms of cost, NovelSeek is significantly more economical than AI-Researcher. For instance, the economic cost of NovelSeek is approximately one-sixth that of AI-Researcher on the AutoRYP task. This lower cost allows NovelSeek to conduct broader scientific experiments, thereby accelerating the exploration and validation of innovative research ideas.

Figure 5: Visual Examples of AutoRYP Task.

Idea: Adaptive Dual-Attention Graph-Transformer with Dynamic Freezing for Chemical Reaction Yield Prediction. Enhancing SMILES and graph-derived reaction descriptor integration using a hybrid graph-transformer network that incorporates hierarchical token-attention mechanisms at the molecular-substructure and reaction-condition levels, combined with adaptive layer freezing for fine-tuning, will improve chemical yield prediction by leveraging both 1D and 2D molecular representations while minimizing overfitting.

Method: This method refines the integration of SMILES-based and graph-based reaction descriptors by introducing a Dual-Attention Fusion Mechanism (DAFM). DAFM systematically combines token and graph embeddings with reaction-condition features, ensuring a multi-level attention strategy for improved information flow. It incorporates a dynamic freezing mechanism that develops a decaying threshold system for gradient-based selection of trainable layers, enhancing generalization in low-data regimes while addressing overfitting issues.

Full Algorithm:
1. Dual-Attention Fusion Mechanism (DAFM):
   - Local attention within each modality. Employ self-attention for intra-modal dependencies:
     $$H^S = \text{SelfAttention}(E_S), \quad H^G = \text{SelfAttention}(E_G), \quad H^Z = \text{SelfAttention}(Z)$$
   - Cross-modality attention. Fuse localized representations with pairwise cross-attention among modalities; each input modality queries the others, ensuring representation exchange:
     $$H^{SG} = \text{CrossAttention}(H^S, H^G), \quad H^{SZ} = \text{CrossAttention}(H^S, H^Z), \quad H^{GZ} = \text{CrossAttention}(H^G, H^Z).$$
   - Hierarchical aggregation. Aggregate the fused representations with weighted gating for the final representation:
     $$H_\text{joint} = \alpha_1 H^{SG} + \alpha_2 H^{SZ} + \alpha_3 H^{GZ}.$$
2. Dynamic Layer Freezing:
   - Instead of static layer freezing, use gradient magnitudes to adapt dynamically with a formal threshold decay function:
     $$\tau_t = \tau_0 e^{-\lambda t},$$
     where $\tau_0$ is the initial threshold, $t$ is the training epoch, and $\lambda$ controls the decay rate.
   - At each epoch, compute the gradient norm $G_l$ for layer $l$: $G_l = \|\nabla W_l\|_2$.
   - Freeze all layers $W_l$ where $G_l < \tau_t$. This ensures only critical layers are trained as the dataset is gradually learned.

Code:

```python
class DAFM(nn.Module):
    def __init__(self, embed_dim, num_heads=8, dropout=0.1):
        super().__init__()
        self.embed_dim = embed_dim
        # Self-attention for each modality
        self.self_attn_smiles = SelfAttention(embed_dim, num_heads, dropout)
        self.self_attn_graph = SelfAttention(embed_dim, num_heads, dropout)
        self.self_attn_cond = SelfAttention(embed_dim, num_heads, dropout)
        # Cross-attention between modalities
        self.cross_attn_sg = CrossAttention(embed_dim, embed_dim, num_heads, dropout)
        ...
        # Trainable weights for hierarchical aggregation
        self.alpha1 = nn.Parameter(torch.ones(1))
        ...
        # Layer normalization
        self.norm1 = nn.LayerNorm(embed_dim)
        ...

    def forward(self, smiles_embed, graph_embed, cond_embed):
        # Local attention within each modality
        h_s = self.self_attn_smiles(smiles_embed)
        ...
        # Apply layer normalization (residual connection)
        h_s = self.norm1(h_s + smiles_embed)
        ...
        # Cross-modality attention
        h_sg = self.cross_attn_sg(h_s, h_g)
        ...
        # Hierarchical aggregation with trainable weights
        weights = F.softmax(torch.stack([self.alpha1, self.alpha2, self.alpha3]), dim=0)
        h_joint = weights[0] * h_sg + weights[1] * h_sz + weights[2] * h_gz
        return h_joint


class DynamicLayerFreezing:
    def __init__(self, model, initial_threshold=0.01, decay_rate=0.1):
        ...

    def compute_gradient_norms(self):
        for name, param in self.model.named_parameters():
            if param.grad is not None:
                self.layer_grads[name] = param.grad.norm().item()

    def update_frozen_layers(self):
        threshold = self.initial_threshold * math.exp(-self.decay_rate * self.epoch)
        for name, param in self.model.named_parameters():
            if name in self.layer_grads:
                if self.layer_grads[name] < threshold:
                    param.requires_grad = False
                else:
                    param.requires_grad = True
        self.epoch += 1

    def step(self):
        self.compute_gradient_norms()
        self.update_frozen_layers()
```

4 Case Studies

4.1 Qualitative Results

In this section, we present results from various tasks implemented using NovelSeek.

4.1.1 Visual Examples of Various Tasks

We present showcases for three distinct tasks, AutoRYP, AutoMD, and AutoPower, to highlight the innovative methodologies discovered and their applications. These showcases are illustrated in Figs. 5, 6, and 9, respectively. Each task demonstrates a unique approach to solving complex problems, showcasing the potential impact of NovelSeek across different scientific domains.

In the AutoRYP task illustrated in Fig.
5, NovelSeek autonomously discovered an innovative approach called "Adaptive Dual-Attention Graph-Transformer with Dynamic Freezing" for predicting chemical reaction yields. This method effectively integrates SMILES and graph-derived descriptors using a hybrid graph-transformer network, incorporating hierarchical attention mechanisms to enhance accuracy while minimizing overfitting.

Figure 6: Visual Examples of AutoMD Task.

Idea: Hierarchical Equivariant Directional Graph Encoder for Molecular Energy and Force Prediction. Developing a molecular modeling framework based on SE(3)-equivariant graph neural networks with hierarchical geometric self-attention and multi-hop message enrichment enables precise energy and force predictions for large molecules. By focusing on directional substructural features and leveraging anisotropic graph kernels, this design captures interacting atomic patterns and propagates dynamic weight updates to align with the local and global scales of molecular geometry.

Method: HEDGE-Net introduces a refined SE(3)-equivariant graph neural network for molecular energy and force prediction, focusing on hierarchical geometric aggregation and improved anisotropic message passing. Leveraging a Geometry-Enhanced Directional Attention (GEDA) mechanism, it directly integrates angular and directional features into aggregated substructures, while ensuring SE(3)-equivariance throughout the pipeline. The method enables effective learning across both atomic and substructural scales, preserving scalability and precision for complex molecular systems.

Full Algorithm:
Input: molecular graph $G=(V,E)$, features $\mathbf{h}_i$, position vectors $\mathbf{d}_{ij}$.
Output: energy prediction $E(G)$, atomic forces $\mathbf{F}_i$.
1. Initialize $\mathbf{h}_i^{(0)}$ for all nodes.
2. For each layer $t = 1, \dots, T$:
   a. Compute augmented geometric encodings $\mathbf{g}_{ijk}^{(\mathrm{aug})}$.
   b. Calculate attention weights $\alpha_{ij}$ using GEDA.
   c. Aggregate atomic features $\mathbf{m}_i$ and update embeddings $\mathbf{h}_i^{(t+1)}$.
3. Group nodes into substructures and compute substructural embeddings $\mathbf{h}_c$.
4. Aggregate global features for the energy prediction $E(G)$:
   $$E(G) = g\left(\sum_{c \in C} \mathbf{W}_E \mathbf{h}_c\right),$$
   where $g(\cdot)$ is a differentiable pooling function.
5. Backpropagate energy gradients to compute forces $\mathbf{F}_i = -\partial E(G) / \partial \mathbf{r}_i$.

Code:

```python
class HEDGE_MP(MessagePassing):
    # HEDGE-Net message passing with Geometry-Enhanced Directional Attention (GEDA)
    def __init__(self, num_heads, hidden_channels, activation, attn_activation,
                 cutoff, vecnorm_type, trainable_vecnorm, last_layer=False):
        super(HEDGE_MP, self).__init__(aggr="add", node_dim=0)
        ...

    def reset_parameters(self):
        ...

    def forward(self, x, vec, edge_index, r_ij, f_ij, d_ij):
        x = self.layernorm(x)
        vec = self.vec_layernorm(vec)
        # Compute node features
        q = self.q_proj(x).reshape(-1, self.num_heads, self.head_dim)
        k = self.k_proj(x).reshape(-1, self.num_heads, self.head_dim)
        v = self.v_proj(x).reshape(-1, self.num_heads, self.head_dim)
        # Compute directional features
        dk = self.act(self.dk_proj(f_ij)).reshape(-1, self.num_heads, self.head_dim)
        dv = self.act(self.dv_proj(f_ij)).reshape(-1, self.num_heads, self.head_dim)
        # Compute vector projections
        vec1, vec2, vec3 = torch.split(self.vec_proj(vec), self.hidden_channels, dim=-1)
        vec_dot = (vec1 * vec2).sum(dim=1)
        # Compute substructure attention weights
        substructure_weights = torch.sigmoid(self.substructure_attn(x))
        # Propagate messages with the GEDA mechanism
        x, vec_out, substructure_embeddings = self.propagate(
            edge_index, q=q, k=k, v=v, dk=dk, dv=dv, vec=vec,
            r_ij=r_ij, d_ij=d_ij, x=x, size=None,
        )
        # Combine with substructure information
        o1, o2, o3 = torch.split(self.o_proj(x), self.hidden_channels, dim=1)
        dx = vec_out + o3
        if not self.last_layer:
            # Update edge features
            df_ij = self.edge_updater(edge_index, vec=vec, d_ij=d_ij, f_ij=f_ij)
            return dx, vec_out, df_ij
        else:
            return dx, vec_out, None

    def message(self, q_i, k_j, v_j, vec_j, dk, dv, r_ij, d_ij,
                edge_index_i, edge_index_j, x_j):
        """Compute messages using the attention mechanism."""
        ...
        attn = attn + angular_features
        attn = self.attn_activation(attn) * self.cutoff(r_ij).unsqueeze(1)
        ...
        return v_j, vec_j, substructure_embedding

    def edge_update(self, vec_i, vec_j, d_ij, f_ij):
        """Update edge features."""
        w1 = self.vector_rejection(self.w_trg_proj(vec_i), d_ij)
        w2 = self.vector_rejection(self.w_src_proj(vec_j), -d_ij)
        w_dot = (w1 * w2).sum(dim=1)
        df_ij = self.act(self.f_proj(f_ij)) * w_dot
        return df_ij

    def aggregate(self, features, index, ptr, dim_size):
        ...

    def update(self, inputs):
        ...


VIS_MP_MAP = {'Node': ViS_MP_Vertex_Node, 'Edge': ViS_MP_Vertex_Edge,
              'None': ViS_MP, 'HEDGE': HEDGE_MP}
```

The approach features a Dual-Attention Fusion Mechanism (DAFM) that systematically combines token and graph embeddings with reaction conditions, ensuring effective information flow across different representations. Furthermore, a dynamic layer-freezing mechanism, based on gradient magnitudes, optimizes which layers are trained, thereby enhancing generalization in low-data scenarios. Implemented with self-attention and cross-modality attention modules,
this system not only combines 1D and 2D molecular representations effectively but also improves prediction accuracy and model adaptability. Consequently, this showcase underscores the method's potential for advancing research in complex chemical tasks using deep learning.

In the AutoMD task illustrated in Fig. 6, a novel framework called "Hierarchical Equivariant Directional Graph Encoder" (HEDGE-Net) has been autonomously discovered for predicting molecular energy and forces. This approach utilizes SE(3)-equivariant graph neural networks with hierarchical geometric self-attention and multi-hop message enrichment. By integrating angular and directional features into aggregated substructures, the method captures interacting atomic patterns and propagates dynamic weight updates, aligning with both local and global molecular geometries. The core of this method, the Geometry-Enhanced Directional Attention (GEDA) mechanism, ensures SE(3)-equivariance, enabling precise predictions for complex molecular systems at both atomic and substructural scales. Implemented with advanced message-passing techniques, HEDGE-Net effectively combines directional and substructural information, enhancing scalability and precision in molecular modeling. This showcases the method's potential to advance research in complex molecular tasks using deep learning techniques.

Method: HGANET, combining a refined hierarchical graph neural network (HGNN) with density-sensitive neighborhood graph construction and an optimized multi-head self-attentive mechanism tailored for 3D point cloud classification.
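As a rough illustration of the density-sensitive neighborhood construction mentioned above, one plausible sketch (our own assumption, not HGANET's exact rule) scales each point's neighborhood size by a local density estimate, namely the distance to its k-th nearest neighbor:

```python
import numpy as np

def density_aware_knn(points, k_base=8, k_min=4, k_max=16):
    """Neighbor lists whose size adapts to local point density.

    Sparse regions (large distance to the k_base-th neighbor) get more
    neighbors, dense regions fewer. Requires len(points) > k_max.
    Illustrative only; not NovelSeek's exact construction.
    """
    n = len(points)
    # Pairwise Euclidean distances (O(n^2) memory; fine for a sketch).
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    order = np.argsort(dist, axis=1)  # column 0 is the point itself
    # Local density proxy: distance to the k_base-th nearest neighbor.
    kth = dist[np.arange(n), order[:, k_base]]
    rel = kth / (kth.mean() + 1e-12)  # >1 in sparse regions, <1 in dense ones
    neighbors = []
    for i in range(n):
        k_i = int(np.clip(np.rint(k_base * rel[i]), k_min, k_max))
        neighbors.append(order[i, 1:k_i + 1].tolist())  # skip self
    return neighbors
```

The resulting variable-size neighbor lists would then define the edges of the hierarchical graph on which attention operates.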
The approach introduces, …

Initial Planning: apply a density-aware kNN algorithm; build a hierarchical graph neural network; integrate multi-head self-attention.
Performance Reflection: +0.5% Acc improvement; identify attention/fusion as the next focus.
Evolutionary Planning: strengthen the attention mechanism; enable cross-layer feature interaction.
Performance Reflection: +1.2% Acc improvement; continue to explore feature enhancement.
Evolutionary Planning: introduce an adaptive kNN algorithm; introduce channel attention; apply feature regularization.
Performance Reflection: +1.6% Acc improvement; need to consider advanced fusion/loss.
Evolutionary Planning: add a spatial attention mechanism; employ dynamic feature fusion; optimize the center loss function.
Performance Reflection: -0.8% Acc degradation; rethink and simplify model complexity.
Evolutionary Planning: remove spatial attention and center loss; improve cross-layer feature interaction; use a cosine annealing schedule.
Final Performance: +2.1% Acc improvement.

Figure 7: Visual Examples of Experimental Planning and Adaptive Evolution on the Auto3DCls task.

4.1.2 Visual Examples of Experimental Planning and Adaptive Evolution

To further illustrate the practical utility of our experimental planning and adaptive evolution framework, as described in Sec. 2.3.2, we present concrete examples of its application to the development and optimization of 3D point cloud classification and of transcription prediction for perturbation response. Fig. 7 and Fig. 8 visually summarize the stepwise experimental planning and adaptive evolution process that guided the implementation and refinement of our method.

4.2 Human Evaluation

Table 10 compares the novelty of ideas generated by our NovelSeek and by AI-Scientist-V2 (Yamada et al., 2025) across various research tasks. Each task involves generating 20 ideas, which are evaluated by five qualified reviewers.
The assessments focus on four criteria: soundness, contribution, overall rating, and confidence. For each research task, the average scores of the 20 ideas are reported.

Table 10: From the perspectives of soundness, contribution, and overall rating, we compare the novelty of ideas generated by NovelSeek and AI-Scientist-V2 (Yamada et al., 2025). For each research task, we generate 20 ideas. Each idea is scored by 5 qualified reviewers, and the final score for each task is reported as the average score of all 20 ideas. Detailed scores for each idea can be found in Appendix B.1.

Research Task | Idea-gen Method | Soundness | Contribution | Overall | Confidence
Reaction Yield Prediction | AI-Scientist-V2 | 1.42 | 1.45 | 3.50 | 3.50
Reaction Yield Prediction | NovelSeek | 3.09 | 2.66 | 4.35 | 4.00
2D Semantic Segmentation | AI-Scientist-V2 | 1.84 | 2.07 | 2.95 | 3.64
2D Semantic Segmentation | NovelSeek | 2.41 | 2.35 | 4.05 | 3.48
2D Image Classification | AI-Scientist-V2 | 2.78 | 2.82 | 4.40 | 3.87
2D Image Classification | NovelSeek | 3.15 | 3.10 | 5.85 | 3.32
Point Cloud Autonomous Driving | AI-Scientist-V2 | 2.15 | 2.47 | 3.10 | 3.94
Point Cloud Autonomous Driving | NovelSeek | 2.75 | 2.95 | 5.10 | 4.10

In the Reaction Yield Prediction task, NovelSeek outperforms AI-Scientist-V2 in all aspects, especially in overall rating and soundness. Similarly, for 2D Semantic Segmentation, NovelSeek shows better idea-generation ability, particularly in soundness and overall rating. In 2D Image Classification and Point Cloud Autonomous Driving, NovelSeek scores higher across all criteria, indicating a consistent advantage over AI-Scientist-V2 in generating more effective and novel ideas.

5 Related Works

Recent advances in Large Language Models (LLMs) and agent-based systems have demonstrated significant potential in the field of Autonomous Scientific Research (ASR), enabling progress from creative idea generation to end-to-end research automation. Some studies (Li et al., 2024b; Wang et al., 2023; Zhou et al., 2024) have shown that LLMs are capable of generating novel research ideas, which has sparked widespread discussion in the academic community. For example, Li et al. (2024b) introduce a method that derives research ideas through the analysis of interconnected scholarly works.
Beyond idea generation, several studies have examined the use of LLMs for hypothesis formulation (Qi et al., 2023; 2024), such as extracting hypotheses from large-scale web data (Yang et al., 2023), multi-agent frameworks that enhance collaborative hypothesis generation in biomedicine (Qi et al., 2023; 2024), and mining scientific literature (Wang et al., 2023; Zhou et al., 2024). However, most of these efforts remain at the stage of idea or hypothesis generation, lacking systematic empirical validation of their practical effectiveness.

In terms of end-to-end research automation, Lu et al. (2024) introduced the AI Scientist framework, which was among the first to achieve a fully automated pipeline in the machine learning domain, covering problem definition, experimental execution, and result reporting. The subsequent AI Scientist-V2 (Yamada et al., 2025) further enhanced the framework by incorporating agent tree search, vision-language model feedback, and parallelized experiment execution, leading to the first workshop paper fully generated and peer-reviewed by AI. Similarly, systems such as AI-Researcher (Lab, 2025) and Dolphin (Yuan et al., 2025) have proposed closed-loop, LLM-driven frameworks that automate the entire research process on a range of simple tasks.

Human-AI collaboration is also gaining traction in ASR. Systems like Agent Laboratory (Schmidgall et al., 2025) integrate human feedback into multi-stage LLM agent workflows, automating literature review, experiment execution, and report writing, while allowing user input at each step to enhance research quality. AgentRxiv (Schmidgall & Moor, 2025) addresses the collaborative nature of scientific
discovery by enabling LLM agent laboratories to communicate and build upon each other's work via a shared preprint server, thus facilitating knowledge sharing and collective innovation. Experimental results demonstrate that agent laboratories using AgentRxiv for collaboration achieve greater performance improvements than isolated settings. Similarly, AI Co-Scientist (Gottweis et al., 2025), based on Gemini 2.0, employs a multi-agent system with a "generate-debate-evolve" strategy for hypothesis generation, and has demonstrated effectiveness in biomedical domains such as drug repurposing, novel target identification, and interpretation of bacterial evolution, with several hypotheses validated through experiments.

Despite these advances, most current systems are still evaluated primarily on relatively simple tasks or within narrow scientific domains. When applied to more complex, system-level scientific challenges, these approaches often face significant limitations. Key challenges include generating truly novel and scientifically sound research ideas, establishing robust closed-loop feedback between experiments and idea generation, and developing systematic evaluation standards to rigorously assess the effectiveness and real-world value of autonomous research systems.

6 Conclusion and Future Works

Summary. We have introduced, for the first time, a closed-loop multi-agent framework that supports 12 types of scientific research tasks. It has been validated to generate novel ideas and to transform those ideas into code that effectively improves performance. NovelSeek refines the initially generated ideas through human-interactive feedback, enriched with a self-evolutionary path of ideas. It facilitates the transformation from coarse-grained proposals to fine-grained methodologies via an idea-to-methodology construction process.
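Schematically, the closed loop this summary describes (idea generation, idea-to-methodology construction, and repeated plan/execute/reflect rounds) could look like the following sketch; every function here is an illustrative stub, not NovelSeek's actual API:

```python
import random

# All names below are illustrative stubs, not NovelSeek's actual API.

def generate_idea(task):
    return f"initial idea for {task}"

def idea_to_methodology(idea):
    # Coarse-grained proposal -> fine-grained, implementable method description.
    return {"idea": idea, "steps": ["implement", "train", "evaluate"]}

def run_experiments(method, rng):
    # Stub: stand-in for coding the method and running it on the task.
    return 80.0 + 5.0 * rng.random()

def reflect_and_revise(method, score, best):
    # Stub reflection: record outcomes so the next round can use them.
    method.setdefault("history", []).append((score, best))
    return method

def closed_loop(task, baseline, rounds=3, seed=0):
    """Hypothesis generation -> verification, with multi-round re-planning."""
    rng = random.Random(seed)
    method = idea_to_methodology(generate_idea(task))
    best = baseline
    for _ in range(rounds):
        score = run_experiments(method, rng)
        best = max(best, score)
        method = reflect_and_revise(method, score, best)
    return best
```

A real system replaces each stub with an LLM-driven agent; the point is only the control flow, in which experimental feedback flows back into the next round of planning.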
Furthermore, by leveraging multi-round experimental planning and execution, it implements the corresponding theoretical methods, thereby completing the closed-loop process in scientific research, from hypothesis generation to verification.

Future Outlook. NovelSeek faces several key technical challenges that need to be addressed in the future:

• Knowledge Retrieval: This involves establishing connections and relationships between papers, utilizing deep-research techniques to conduct extensive searches across a wide range of papers, and performing meta-analyses on the search results. Additionally, it requires transforming papers into structured representations such as triples, and utilizing graph networks to uncover relationships between papers, including associations in ideas, methodologies, experimental conditions, and experimental results. Moreover, retrieval-augmented generation will be employed to alleviate the hallucination issues of LLMs when generating viewpoints or citing references.

• Knowledge Understanding and Representation: This involves utilizing VLMs/LLMs to accurately analyze relevant academic papers, aiming to understand the significance of their core concepts, methodologies, and research findings, while also refining knowledge and formulating hypotheses. Additionally, it focuses on extracting valuable knowledge from a large number of papers, identifying common patterns, trends, and connections, thereby advancing the understanding and representation of knowledge in the field.

• Agent Capability Enhancement: This focuses on improving the ability of AI systems to autonomously perform complex tasks in scientific research. The strength of agents lies in their ability to adapt dynamically, rather than relying solely on historical records to determine subsequent actions. Through self-modification, they can flexibly redefine their initial goals and planning strategies while utilizing feedback, as well as communication logs between agents or between humans and agents, to
train and improve themselves. This mechanism should focus on improving their ability to gather feedback from three key sources: the environment, interactions with other agents, and human experts.

• Scientific Discovery-related Benchmark Construction: This involves evaluating the value that an idea can bring, rather than simply evaluating its novelty. It also includes evaluating whether the methods proposed by AI align with their corresponding code implementations, and determining whether NovelSeek demonstrates a certain level of generalization ability in broader scientific scenarios.

References

Cosmas D Arnold, Daniel Gerlach, Christoph Stelzer, Łukasz M Boryń, Martina Rath, and Alexander Stark. Genome-wide quantitative enhancer activity maps identified by STARR-seq. Science, 339(6123):1074–1077, 2013.

Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 801–818, 2018.

Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, et al. How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites. Science China Information Sciences, 67(12):220101, 2024.

Stefan Chmiela, Alexandre Tkatchenko, Huziel E Sauceda, Igor Poltavsky, Kristof T Schütt, and Klaus-Robert Müller. Machine learning of accurate energy-conserving molecular force fields. Science Advances, 3(5):e1603015, 2017.

Bernardo P de Almeida, Franziska Reiter, Michaela Pagani, and Alexander Stark. DeepSTARR predicts enhancer activity from DNA sequence and enables the de novo design of synthetic enhancers. Nature Genetics, 54(5):613–624, 2022.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pp. 4171–4186, 2019.

M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html, 2012.

Paul Gauthier and Aider-AI Contributors. Aider: AI pair programming in your terminal. https://github.com/Aider-AI/aider, 2023. Accessed: 2025-05-07.

Juraj Gottweis, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky, Felix Weissenberger, Keran Rong, Ryutaro Tanno, et al. Towards an AI co-scientist. arXiv preprint arXiv:2502.18864, 2025.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

HKU Data Intelligence Lab. AI-Researcher: Fully-automated scientific discovery with LLM agents. https://github.com/HKUDS/AI-Researcher, 2025.

Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. LLaVA-OneVision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024a.

Long
https://arxiv.org/abs/2505.16938v2
Li, Weiwen Xu, Jiayan Guo, Ruochen Zhao, Xinxuan Li, Yuqian Yuan, Boqiang Zhang, Yuming Jiang, Yifei Xin, Ronghao Dang, et al. Chain of ideas: Revolutionizing research in novel idea development with llm agents. arXiv preprint arXiv:2410.13185 , 2024b. Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The ai scientist: Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292 , 2024. Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathe- matical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255 , 2023. Ruilin Luo, Zhuofan Zheng, Yifan Wang, Yiyao Yu, Xinzhe Ni, Zicheng Lin, Jin Zeng, and Yujiu Yang. Ursa: Understanding and verifying chain-of-thought reasoning in multimodal mathematics. arXiv preprint arXiv:2501.04686 , 2025. Jiageng Mao, Minzhe Niu, Chenhan Jiang, Hanxue Liang, Jingheng Chen, Xiaodan Liang, Yamin Li, Chaoqiang Ye, Wei Zhang, Zhenguo Li, et al. One million scenes for autonomous driving: Once dataset. arXiv preprint arXiv:2106.11037 , 2021. Thomas M Norman, Max A Horlbeck, Joseph M Replogle, Alex Y Ge, Albert Xu, Marco Jost, Luke A Gilbert, and Jonathan S Weissman. Exploring genetic interaction manifolds constructed from rich single-cell phenotypes. Science , 365(6455):786–793, 2019. Damith Perera, Joseph W Tucker, Shalini Brahmbhatt, Christopher J Helal, Ashley Chong, William Farrell, Paul Richardson, and Neal W Sach. A platform for automated nanomole- scale reaction screening and micromole-scale synthesis in flow. Science , 359(6374):429–434, 2018. Biqing Qi, Kaiyan Zhang, Haoxiang Li, Kai Tian, Sihang Zeng, Zhang-Ren Chen, and Bowen Zhou. Large language models are zero shot hypothesis proposers. arXiv preprint arXiv:2311.05965 , 2023. 
Biqing Qi, Kaiyan Zhang, Kai Tian, Haoxiang Li, Zhang-Ren Chen, Sihang Zeng, Ermo Hua, Jinfang Hu, and Bowen Zhou. Large language models as biomedical hypothesis generators: a comprehensive evaluation. arXiv preprint arXiv:2407.08940, 2024.

Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660, 2017.

Yansheng Qiu, Haoquan Zhang, Zhaopan Xu, Ming Li, Diping Song, Zheng Wang, and Kaipeng Zhang. AI Idea Bench 2025: AI research idea generation benchmark. arXiv preprint arXiv:2504.14191, 2025.

Yusuf Roohani, Kexin Huang, and Jure Leskovec. Predicting transcriptional outcomes of novel multigene perturbations with GEARS. Nature Biotechnology, 42(6):927–935, 2024.

Samuel Schmidgall and Michael Moor. AgentRxiv: Towards collaborative autonomous research. arXiv preprint arXiv:2503.18102, 2025.

Samuel Schmidgall, Yusheng Su, Ze Wang, Ximeng Sun, Jialian Wu, Xiaodong Yu, Jiang Liu, Zicheng Liu, and Emad Barsoum. Agent Laboratory: Using LLM agents as research assistants. arXiv preprint arXiv:2501.04227, 2025.

Chenglei Si, Diyi Yang, and Tatsunori Hashimoto. Can LLMs generate novel research ideas? A large-scale human study with 100+ NLP researchers. arXiv preprint arXiv:2409.04109, 2024.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642, 2013.

OpenPCDet Development Team. OpenPCDet: An open-source toolbox for 3D object detection from point clouds. https://github.com/open-mmlab/OpenPCDet, 2020.

Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-VL: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024a.

Qingyun Wang, Doug Downey, Heng Ji, and Tom Hope. SciMON: Scientific inspiration machines optimized for novelty. arXiv preprint arXiv:2305.14259, 2023.

Xingyao Wang, Boxuan Li, Yufan Song, Frank F Xu, Xiangru Tang, Mingchen Zhuge, Jiayi Pan, Yueqi Song, Bowen Li, Jaskirat Singh, Hoang H. Tran, Fuqiang Li, Ren Ma, Mingzhang Zheng, Bill Qian, Yanjun Shao, Niklas Muennighoff, Yizhe Zhang, Binyuan Hui, Junyang Lin, Robert Brennan, Hao Peng, Heng Ji, and Graham Neubig. OpenHands: An open platform for AI software developers as generalist agents, 2024b. URL https://arxiv.org/abs/2407.16741.

Xingyao Wang, Boxuan Li, Yufan Song, Frank F Xu, Xiangru Tang, Mingchen Zhuge, Jiayi Pan, Yueqi Song, Bowen Li, Jaskirat Singh, et al. OpenHands: An open platform for AI software developers as generalist agents. In The Thirteenth International Conference on Learning Representations, 2024c.

Yusong Wang, Tong Wang, Shaoning Li, Xinheng He, Mingyu Li, Zun Wang, Nanning Zheng, Bin Shao, and Tie-Yan Liu. Enhancing geometric representations for molecules with equivariant vector-scalar interactive message passing. Nature Communications, 15(1):313, 2024d.

Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1912–1920, 2015.

Yutaro Yamada, Robert Tjarko Lange, Cong Lu, Shengran Hu, Chris Lu, Jakob Foerster, Jeff Clune, and David Ha.
The AI Scientist-v2: Workshop-level automated scientific discovery via agentic tree search. arXiv preprint arXiv:2504.08066, 2025.

Xiangchao Yan, Shiyang Feng, Jiakang Yuan, Renqiu Xia, Bin Wang, Bo Zhang, and Lei Bai. SurveyForge: On the outline heuristics, memory-driven generation, and multi-dimensional evaluation for automated survey writing. arXiv preprint arXiv:2503.04629, 2025.

An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement, 2024b. URL https://arxiv.org/abs/2409.12122.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, and Erik Cambria. Large language models for automated open-domain scientific hypotheses discovery. arXiv preprint arXiv:2309.02726, 2023.

Tianwei Yin, Xingyi Zhou, and Philipp Krähenbühl. Center-based 3D object detection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11784–11793, 2021.

Jiakang Yuan, Xiangchao Yan, Botian Shi, Tao Chen, Wanli Ouyang, Bo Zhang, Lei Bai, Yu Qiao, and Bowen Zhou. Dolphin: Closed-loop open-ended auto-research through thinking, practice, and feedback. arXiv preprint arXiv:2501.03916, 2025.

Sergey Zagoruyko. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.

Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 11121–11128, 2023.

Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11975–11986, 2023.

Zhen Zhao, Zhen Huang, Zicheng Wang, Wenqi Huang, and Lei Bai. SenseFlow: A physics-informed and self-ensembling iterative framework for power flow estimation, 2024. URL https://openreview.net/forum?id=UKiCFpwcqY.

Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In The Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Virtual Conference, volume 35, pp. 11106–11115. AAAI Press, 2021.

Yangqiaoyu Zhou, Haokun Liu, Tejes Srivastava, Hongyuan Mei, and Chenhao Tan. Hypothesis generation with large language models. arXiv preprint arXiv:2404.04326, 2024.

Ray Daniel Zimmerman, Carlos Edmundo Murillo-Sánchez, and Robert John Thomas. MATPOWER: Steady-state operations, planning, and analysis tools for power systems research and education. IEEE Transactions on Power Systems, 26(1):12–19, 2010.

Appendix

A Contributions and Acknowledgments

1. Core Contributors: Bo Zhang, Shiyang Feng, Xiangchao Yan, Jiakang Yuan

2. Multi-agent System Algorithm Design
Self-Evolving Idea: Shiyang Feng
Idea-to-Methodology: Xiangchao Yan
Coder: Shiyang Feng, Jiakang Yuan, Xiangchao Yan
Multi-round Experimental Plan: Jiakang Yuan, Bo Zhang, Xiangchao Yan, Shiyang Feng

3. Scientific Research Task Integration and Enhancement
Reaction Yield Prediction: Xiaohan He, Zhiyin Yu
Molecular Dynamics: Zhiyin Yu
Power Flow Estimation: Songtao Huang
Time Series Forecasting: Songtao Huang
Sentiment CLS: Xiangchao Yan, Jiakang Yuan
3D Point CLS: Xiangchao Yan, Jiakang Yuan
2D Image CLS: Xiangchao Yan, Jiakang Yuan
2D Semantic Segmentation: Shiyang Feng
3D Autonomous Driving: Jiakang Yuan
Transcription Prediction: Xiaohan He, Peng Ye
Enhancer Activity Prediction: Zhiyin Yu
VLM finetuning: Tianshuo Peng

4. Software Development: Zheng Nie, Zhilong Wang, Runmin Ma, Jinyao Liu, Shiyang Feng, Xiangchao Yan

5. Manuscript Preparation: Shiyang Feng, Bo Zhang, Xiangchao Yan, Jiakang Yuan, Zhiyin Yu, Songtao Huang, Lei Bai, Xiaohan He, Tianshuo Peng

6. Idea Human Evaluation Specialist: Bo Zhang, Peng Ye, Shufei Zhang, Dongzhan Zhou, Xiaosong Wang, Lei Bai

7. Project Management and Product: Yilan Zhang, Meng Li, Shaowei Hou, Zhongying Tu

8. Advising: Bowen Zhou, Wanli Ouyang, Xiangyu Yue

9. Project Co-lead: Lei Bai (bailei@pjlab.org.cn), Bo Zhang (zhangbo@pjlab.org.cn)

B Evaluation Details

B.1 Scoring Criteria for Idea Review

In Table 10 of the main text, we conducted a human evaluation to assess the novelty of ideas generated by AI-Scientist-V2 (Yamada et al., 2025) and our NovelSeek. The evaluation was carried out across four dimensions: soundness, contribution, overall rating, and confidence. Specifically, considering the evaluation cost, we opted to evaluate four types of research tasks: reaction yield prediction, 2D semantic segmentation, 2D image classification, and point cloud autonomous driving. Each invited researcher was required to have peer-review qualifications for top-tier journals or conferences in the relevant field. For each research task, we generated 20 ideas using both AI-Scientist-V2
and NovelSeek, and five experienced researchers were invited to score each idea. In this part, we provide a detailed description of the scoring criteria for each reviewer, as outlined below:

Soundness:
• 4 excellent
• 3 good
• 2 fair
• 1 poor

Contribution:
• 4 excellent
• 3 good
• 2 fair
• 1 poor

Overall:
• Rating: 10: Award quality: Technically flawless paper with groundbreaking impact on one or more areas of AI, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations.
• Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI and excellent impact on multiple areas of AI, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
• Rating: 8: Strong Accept: Technically strong paper with novel ideas, excellent impact on at least one area of AI or high-to-excellent impact on multiple areas of AI, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
• Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area of AI or moderate-to-high impact on more than one area of AI, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
• Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
• Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
• Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
• Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility, and incompletely addressed ethical considerations.
• Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility, and mostly unaddressed ethical considerations.
• Rating: 1: Very Strong Reject: For instance, a paper with trivial results or unaddressed ethical considerations.

Confidence:
• Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
• Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
• Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
• Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
• Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were
not carefully checked.

B.2 Information on the Expert Review Process

Qualifications for Human Evaluators: Evaluators must hold a Ph.D. or be Ph.D. candidates with reviewing experience in top-tier AI conferences such as ICLR, ICML, NeurIPS, CVPR, ICCV, and ACL.

Steps for Expert Evaluation and Validation:
• Before the evaluation begins, evaluators are required to carefully read the scoring guidelines, as outlined in Appendix B.1.
• Each evaluator is assigned 20 ideas generated by NovelSeek and 20 ideas generated by AI-Scientist-V2 (Yamada et al., 2025). For each idea, evaluators must carefully review the generated content and provide final scores across four dimensions: Soundness, Contribution, Overall, and Confidence.
• Qualified evaluators are required to spend at least 10 minutes reading each idea. During the evaluation process, they are allowed to conduct relevant literature searches and verify idea redundancy to ensure that the scoring results are objective and representative.

C NovelSeek Software Development

Fig. 13 shows the front-end interface of the current NovelSeek software platform. Overall, the NovelSeek software platform employs a frontend-backend separation design pattern, building a highly scalable distributed service platform. The frontend layer is developed based on the React framework, featuring an advanced visual interaction system. Key innovations include an infinite canvas rendering engine supporting multi-node topology, a collaborative mind-mapping component driven by state synchronization, a code editor supporting multiple formats, and a real-time training-metrics visualization dashboard. The backend leverages a cloud-native technology stack, utilizing a dynamic container orchestration engine for elastic resource scheduling, a distributed asynchronous task queue for high-concurrency support, and a cross-cloud storage gateway for data synchronization across heterogeneous cloud environments.
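The role of the asynchronous task queue is to let many long-running experiment jobs share a fixed pool of workers. As a minimal, purely illustrative sketch of that pattern (this is our own toy code using Python's asyncio, not the platform's implementation, and all names are hypothetical):

```python
import asyncio

async def worker(name, queue, results):
    # Each worker pulls experiment jobs off the shared queue until it is drained.
    while not queue.empty():
        job = await queue.get()
        await asyncio.sleep(0)  # stand-in for a long-running training task
        results.append((name, job))
        queue.task_done()

async def main(jobs, n_workers=3):
    queue = asyncio.Queue()
    for j in jobs:
        queue.put_nowait(j)
    results = []
    # All workers run concurrently against the same queue.
    await asyncio.gather(*(worker(f"w{i}", queue, results) for i in range(n_workers)))
    return results

results = asyncio.run(main([f"experiment-{i}" for i in range(6)]))
```

In a real deployment the `asyncio.sleep(0)` placeholder would be replaced by a container launch, and the queue would be a distributed broker rather than an in-process `asyncio.Queue`.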
Additionally, a microservice governance system is established using a Service Mesh architecture. The entire system is delivered through containerization, with Kubernetes cluster management enabling self-healing and intelligent scaling, ensuring business continuity while significantly improving resource utilization efficiency.

D Visualization Results

We further conducted a detailed visualization and analysis of the multi-round experimental planning and execution (Fig. 8) in NovelSeek, as well as of the automated scientific research tasks (Figs. 9, 10, 11 and 12) supported by NovelSeek.

Fig. 8 illustrates the process of experimental planning and adaptive evolution on AutoTPPR. Each block in the figure represents a step in the multi-round experiment planning process, where the complete NovelSeek-generated method is decomposed into multiple logical steps. This allows for task decomposition during the experimental validation phase, thereby facilitating stronger benchmark results.

Figure 8: Visual examples of experimental planning and adaptive evolution on the AutoTPPR task. (The figure traces the GEARS_LocalRegularization method through alternating planning and performance-reflection blocks: starting from local graph regularization with perturbation-aware embedding alignment, the plan evolves through adaptive regularization, simplified regularization with curriculum learning, multi-scale regularization with feature-wise attention, and finally hierarchical regularization with knowledge-guided attention, with the MSE improvement growing from 0.038 to 0.040, 0.045, 0.047, and 0.051.)

Furthermore, in the AutoPower task illustrated in Fig. 9, the "Adaptive Hierarchical Graph Transformer" (AHGT) introduces significant advancements for power flow estimation in energy systems. This approach features two key innovations: the Enhanced Edge-Node Hierarchical Pooling (EENHPool) mechanism, which integrates global and local features to retain crucial graph structures while reducing ambiguities, and the Stability-Regularized Temporal Graph Transformer (SRT-GT), designed to capture temporal dynamics while maintaining training stability. These components together enhance the model's robustness and accuracy, validated on IEEE benchmarks under scenarios involving renewable energy and grid perturbations. The AHGT method outputs precise voltage magnitude and angle predictions, assessed using metrics such as MAE, RMSE, and the Graph Perturbation Robustness Index (GPRI). This approach showcases the potential for improved power system modeling through advanced graph transformer techniques.
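The plan–execute–reflect–evolve cycle shown in Fig. 8 can be sketched as a simple control loop. Everything below is an illustrative stub (no LLM calls, no real training; `run_experiment` and `evolve` are our hypothetical stand-ins, with each added plan step contributing a diminishing synthetic gain), not NovelSeek's implementation:

```python
def run_experiment(plan):
    """Stub for a training run: returns an MSE improvement for a plan.
    Each additional plan step contributes a diminishing gain."""
    return round(sum(0.01 / (i + 1) for i in range(len(plan))), 4)

def evolve(plan, reflection):
    """Stub for the LLM-driven evolutionary planning step."""
    return plan + [f"refinement addressing: {reflection}"]

plan = ["implement local graph regularization"]
best, history = 0.0, []
for _ in range(4):
    improvement = run_experiment(plan)              # execute the current plan
    reflection = "improved" if improvement > best else "stalled"
    history.append(improvement)                     # performance reflection
    best = max(best, improvement)
    plan = evolve(plan, reflection)                 # evolutionary planning
```

With these stubs the recorded improvements increase monotonically across rounds, mirroring the 0.038 → 0.051 progression in Fig. 8 in shape (not in value).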
(Figure 9 content: the idea panel sketches Python modules for EENHPool and SRT-GT, including a temporal regularization loss enforcing smoothness; the method panel summarizes AHGT as improving the Structured Adaptive Graph Transformer (SAGT) via EENHPool and SRT-GT; the code panel gives the full AHGT algorithm: encode node features by node type, then for each timestep apply EENHPool to determine node importance and form a coarsened graph, compute edge-node attention, and update embeddings with SRT-GT under stability constraints, finally decoding the embeddings into voltage magnitude and angle predictions.)
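The EENHPool step scores nodes and keeps only the most important ones when forming the coarsened graph. As a rough, dependency-free sketch of importance-based coarsening (the magnitude-based scoring and top-k selection here are our simplification for illustration, not the paper's learned pooling):

```python
def coarsen(node_features, pool_ratio=0.5):
    """Score nodes by total feature magnitude (a stand-in for learned
    edge-node importance) and keep the top pool_ratio fraction."""
    scores = [sum(abs(v) for v in feat) for feat in node_features]
    k = max(1, int(len(node_features) * pool_ratio))
    keep = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    keep.sort()  # preserve original node order in the coarsened graph
    return keep, [node_features[i] for i in keep]

nodes = [[0.1, 0.2], [1.0, -1.5], [0.3, 0.1], [2.0, 0.5]]
kept_idx, coarse = coarsen(nodes, pool_ratio=0.5)
```

A learned pooling layer would also lift local neighborhood features into the kept nodes; this sketch only shows the selection half of that operation.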
Figure 9: Visual examples of the AutoPower task.

(Figure 10 content: the idea panel sketches Python classes for psycholinguistic feature extraction, linguistic rule-based transformations with sarcasm detection, and a HybridNoiseAugmentation class; the method panel describes hybrid noise augmentation for adversarially robust sentiment analysis on SST-2, combining psycholinguistically grounded noise with rule-based handling of sarcasm, negation, and polysemy, validated on adversarial benchmarks such as TextFlint under low-resource and noisy conditions; the code panel gives the full algorithm: compute psycholinguistic importance scores $S$ covering valence, arousal, and dominance; generate noised embeddings $e' = e + \mathcal{N}(0, \sigma^2 \cdot S)$; apply rule transformations $e'' = R_{\text{rule}} \cdot e$; mix them as $e_{\text{aug}} = \alpha e' + (1-\alpha) e''$ with $\alpha \in [0,1]$; and modify attention as $A_{\text{aug}} = \text{softmax}(QK^\top / \sqrt{d_k} + \gamma H)$, where $H$ is a psycholinguistic alignment matrix.)

Figure 10: Visual examples of the AutoSenCls task.
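The embedding-mixing step of the hybrid noise augmentation in Fig. 10 is easy to illustrate on toy vectors. The equations ($e' = e + \mathcal{N}(0, \sigma^2 S)$, $e_{aug} = \alpha e' + (1-\alpha) e''$) follow the figure; the sign-flip "rule transform" and all values below are our illustrative placeholders:

```python
import random

def hybrid_embedding(e, importance, sigma=0.1, alpha=0.5, rng=None):
    """e_aug = alpha * e' + (1 - alpha) * e'', where
    e'  = e + N(0, sigma^2 * S)  (importance-scaled Gaussian noise), and
    e'' = R @ e with a toy rule transform R = -I standing in for R_rule."""
    rng = rng or random.Random(0)
    std = sigma * importance ** 0.5                    # stddev of N(0, sigma^2 * S)
    noised = [v + rng.gauss(0.0, std) for v in e]      # e'
    ruled = [-v for v in e]                            # e''
    return [alpha * n + (1 - alpha) * r for n, r in zip(noised, ruled)]

e_aug = hybrid_embedding([1.0, -2.0, 0.5], importance=0.8)
```

Setting `sigma=0, alpha=1` recovers the clean embedding, and `alpha=0` yields the pure rule-transformed embedding, which makes the mixing coefficient's role easy to check.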
(Figure 11 content: the idea panel sketches Python functions for kNN graph features, density-adaptive kNN, graph attention with residual connections, a hierarchical graph layer, and the HGANet classifier; the method panel describes a Hierarchical Graph-Attentive Network for efficient 3D point cloud classification, with density-sensitive graph construction and multi-head self-attentive feature alignment; the code panel gives the full algorithm: build a density-sensitive kNN graph in which each point's $k$ adapts to the local density $\rho(x_i) = \frac{1}{|N(x_i)|}\sum_{j \in N(x_i)} d(x_i, x_j)$; partition the point set into hierarchical levels $\{L_1, \ldots, L_h\}$ via graph coarsening using voxelization or farthest point sampling; apply spectral graph convolution $H'^{(l)} = \sigma(D^{-\frac{1}{2}} A D^{-\frac{1}{2}} H^{(l)} W^{(l)})$ at each level; align features with self-attention $\alpha_{ij} = \text{softmax}\left(\frac{(H^{(l)} W_Q)(H^{(l)} W_K)^\top}{\sqrt{d_k}}\right)$; and aggregate multi-head outputs $Z = \text{Concat}(\text{head}_1, \ldots, \text{head}_m) W_O$.)

Figure 11: Visual examples of the Auto3DCls task.
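The density-adaptive neighborhood construction in Fig. 11 chooses a per-point $k$ from the local density $\rho$. A small pure-Python sketch of that idea (the mean-distance density estimate follows the figure's formula; the linear schedule mapping density to $k$ is our own choice for illustration):

```python
def adaptive_k(points, k_min=2, k_max=4):
    """For each point, estimate local density as the mean distance to its
    k_min nearest neighbours, then give dense points a larger k."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    densities = []
    for i, p in enumerate(points):
        d = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        densities.append(sum(d[:k_min]) / k_min)  # small value = dense region
    lo, hi = min(densities), max(densities)
    span = (hi - lo) or 1.0
    # Dense points (low mean distance) get k_max, sparse points get k_min.
    return [round(k_max - (rho - lo) / span * (k_max - k_min)) for rho in densities]

points = [(0, 0), (0, 0.1), (0.1, 0), (5, 5)]
ks = adaptive_k(points)
```

On this toy cloud the three clustered points receive the largest neighborhood while the isolated outlier gets the smallest, which is the qualitative behavior the figure's density threshold is after.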
IdeaclassAdaptiveAugmentation:def__init__(self,alpha=0.5,beta=0.5,gamma=2.0):self.alpha=alphaself.beta=betaself.gamma=gammaself.device=torch.device("cuda"iftorch.cuda.is_available()else"cpu")defcompute_variance(self,x):#xshape:[B,C,H,W]#Computevarianceacrosschannelsforeachspatiallocationvar=torch.var(x,dim=1,keepdim=True)#[B,1,H,W]returnvar.mean(dim=[1,2,3])#[B]defcompute_entropy(self,probs):#probsshape:[B,C]whereCisnumberofclassesprobs=torch.clamp(probs,min=1e-8,max=1.0)...defget_geometric_strength(self,x,model=None,probs=None):var=self.compute_variance(x)#Ifmodelpredictionsareprovided,usethemforentropycalculation...defget_mixup_params(self,y,num_classes=100):#GenerateMixUpparametersbasedonlabelentropyy_onehot=F.one_hot(y,num_classes=num_classes).float()...defapply_mixup(self,x,y,num_classes=100):"""ApplyMixUpaugmentationwithadaptivecoefficient"""lam,index=self.get_mixup_params(y,num_classes)...classTemporalConsistencyRegularization:def__init__(self,memory_size=5,decay_rate=2.0,consistency_weight=0.1):self.memory_size=memory_sizeself.decay_rate=decay_rateself.consistency_weight=consistency_weightself.prediction_history={}#Storepastpredictionsforeachsampledefcompute_decay_weights(self):weights=torch.exp(-torch.arange(1,self.memory_size+1)/self.decay_rate)returnweights/weights.sum()defupdate_history(self,indices,predictions):"""Updatepredictionhistoryforeachsample"""fori,idxinenumerate(indices):idx=idx.item()...defget_aggregated_predictions(self,indices):"""Getaggregatedpredictionsforeachsampleusingdecayweightsỹ_i=Σ(ω_k·ŷ_i^(t-k))"""weights=self.compute_decay_weights().to(indices.device)...defcompute_consistency_loss(self,current_preds,indices):#Computeconsistencylossbetweencurrentandaggregatedpastpredictions...MethodHierarchical Adaptive Regularization and Consistency Network for Robust Image ClassificationDevelopasynergistichierarchicalframeworkforimprovingimageclassificationthatintegrates(a)adaptive, data-distribution-driven augmentation mechanisms 
combining MixUp with dynamically controlled geometric transformations to enhance data robustness, (b) a decayed temporal consistency regularization method that minimizes stochastic noise in pseudo-labels using exponentially weighted past predictions, and (c) auxiliary losses that explicitly optimize feature group consistency inspired by human visual pattern differentiation, with additional grounding in loss function calibration. This framework will be rigorously evaluated using ablation studies focusing on compatibility and interaction among augmentation, regularization, and loss mechanisms, alongside benchmarking against ResNet and Vision Transformer models on CIFAR-100. HARCNet combines hierarchical adaptive augmentation with mathematically grounded regularization mechanisms inspired by human visual processing to improve robustness in image classification tasks. The method integrates (1) an adaptive augmentation mechanism that dynamically modulates geometric transformations based on data distribution, and (2) a decayed temporal consistency regularization framework underpinned by formal mathematical formulations, ensuring smoother pseudo-labeling and improved convergence. These components collaborate synergistically to achieve robust classification performance on CIFAR-100.

Code
### Full Algorithm
1. **Adaptive Augmentation Phase**: Input images are preprocessed using dynamically tuned MixUp and geometric transformations based on their variance and entropy. Let $S_{g}$ represent the geometric augmentation strength, which is updated as follows: $$S_{g}(x_i)=\alpha\cdot\text{Var}(x_i)+\beta\cdot\text{Entropy}(x_i)$$
2. **Prediction and Temporal Aggregation**: For each batch, the network evaluates predictions and refines pseudo-labels by aggregating past outputs weighted with the exponential decay mechanism.
3. **Total Loss Optimization**: The total training loss integrates the primary classification loss $\mathcal{L}_{cls}$, consistency regularization $\mathcal{L}_{consistency}$, and regularized auxiliary losses: $$\mathcal{L}=\mathcal{L}_{cls}+\lambda_{consistency}\mathcal{L}_{consistency}+\lambda_{auxiliary}\mathcal{L}_{auxiliary}$$
4. **Optimizer Parameters**: We employ SGD with momentum (0.9) and weight decay ($5\times10^{-4}$). The step sizes for $\lambda_{consistency}$ and $\lambda_{auxiliary}$ are determined via grid search over the validation set.

Figure 12: Visual Examples of Auto2DCls Task

Figure 13: NOVEL SEEK software platform includes features such as the user entry interface, task selection interface, idea-tree visualization and human-computer interaction interface, code generation, and auto-debug interface. In the near future, we plan to support additional functionalities, including custom dataset uploads and academic idea thinking modes.
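The decayed temporal aggregation in step 2 reduces, per sample, to a weighted sum of past predictions with normalized exponential weights $\omega_k$. A minimal torch-free sketch (the helper names are ours; the weight formula mirrors `compute_decay_weights` in the idea code above):

```python
import numpy as np

def decay_weights(memory_size=5, decay_rate=2.0):
    """Normalized exponential decay weights omega_k over the last
    `memory_size` predictions (k = 1 is the most recent)."""
    w = np.exp(-np.arange(1, memory_size + 1) / decay_rate)
    return w / w.sum()

def aggregate_predictions(history, memory_size=5, decay_rate=2.0):
    """Decay-weighted pseudo-label: y_tilde = sum_k omega_k * y_hat^(t-k).
    `history` is a list of past probability vectors, most recent first;
    weights are renormalized if fewer than memory_size entries exist."""
    w = decay_weights(memory_size, decay_rate)
    hist = np.stack(history[:memory_size])
    w = w[:len(hist)]
    return (w[:, None] * hist).sum(axis=0) / w.sum()
```

If every stored prediction is identical, the aggregated pseudo-label equals that prediction, which is the expected fixed point of the smoothing.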
https://arxiv.org/abs/2505.16938v2
arXiv:2505.16944v1 [cs.AI] 22 May 2025

AGENT IF: Benchmarking Instruction Following of Large Language Models in Agentic Scenarios

Yunjia Qi1∗, Hao Peng1∗, Xiaozhi Wang1, Amy Xin1, Youfeng Liu2, Bin Xu1, Lei Hou1, Juanzi Li1
1Tsinghua University 2Zhipu AI
{qyj23, peng-h24}@mails.tsinghua.edu.cn

Abstract

Large Language Models (LLMs) have demonstrated advanced capabilities in real-world agentic applications. Growing research efforts aim to develop LLM-based agents to address practical demands, introducing a new challenge: agentic scenarios often involve lengthy instructions with complex constraints, such as extended system prompts and detailed tool specifications. While adherence to such instructions is crucial for agentic applications, whether LLMs can reliably follow them remains underexplored. In this paper, we introduce AGENT IF, the first benchmark for systematically evaluating LLM instruction-following ability in agentic scenarios. AGENT IF features three key characteristics: (1) Realistic, constructed from 50 real-world agentic applications. (2) Long, averaging 1,723 words with a maximum of 15,630 words. (3) Complex, averaging 11.9 constraints per instruction, covering diverse constraint types, such as tool specifications and condition constraints. To construct AGENT IF, we collect 707 human-annotated instructions across 50 agentic tasks from industrial application agents and open-source agentic systems. For each instruction, we annotate the associated constraints and corresponding evaluation metrics, including code-based evaluation, LLM-based evaluation, and hybrid code-LLM evaluation. We use AGENT IF to systematically evaluate existing advanced LLMs. We observe that current models generally perform poorly, especially in handling complex constraint structures and tool specifications. We further conduct error analysis and analytical experiments on instruction length and meta constraints, providing some findings about the failure modes of existing LLMs.
We have released the code2 and data3 to facilitate future research.

1 Introduction

Large language models (LLMs) have demonstrated strong capabilities in real-world agentic applications [19]. Growing studies focus on developing LLM-based agents to address practical demands, such as Web Agents [7, 8], Education Agents [4], GUI Agents [31, 32], and PPT Agents [34]. While these agentic scenarios expand the application scope of LLMs, they also pose a new challenge: agentic tasks usually involve long and complex instructions, such as extended system prompts and detailed tool specifications. Correctly following such instructions is a prerequisite for solving these tasks and reflects a fundamental capability of LLMs. However, whether LLMs can effectively follow instructions in real-world agentic scenarios remains underexplored.

∗Equal contribution.
2https://github.com/THU-KEG/AgentIF
3https://huggingface.co/datasets/THU-KEG/AgentIF
Preprint.

Figure 1: (a) The length distribution of instructions across AGENT IF (log-scale). (b) Success rates of several representative LLMs (o1-mini, GPT-4o, Qwen3-32B, DeepSeek-R1, Claude-3.5, LLaMA-3.1-70B-Inst.) on different constraint dimensions (detailed descriptions are in § 3.2).

Previous work on benchmarking instruction following of LLMs has mainly focused on relatively short instructions, which are usually synthetically generated. For example, the widely used benchmark IFEval [36] is synthetically constructed with various constraint types, such as formatting, and has an average instruction length of only 45 words. Subsequent studies have expanded the instruction scope to include more constraint types [14, 23, 33, 29], system prompts [22], multi-turn conversations [10], and action constraints [16]. Nonetheless, the instructions in these benchmarks are typically short and usually synthetically generated, resulting in a gap
https://arxiv.org/abs/2505.16944v1
from real-world agentic applications. Figure 2 illustrates an instruction from a real-world agentic scenario. We can observe that the instruction is long, with complex structures and constraint types, such as condition constraints, example constraints, and tool specifications, posing novel and significant challenges for LLMs. As existing instruction-following benchmarks typically lack coverage of such agentic instructions, it is necessary to systematically evaluate LLMs' ability to handle them. Considering the above concerns, we propose AGENT IF, the first benchmark to evaluate instruction following of LLMs in real-world agentic scenarios. Specifically, we first collect 50 agentic tasks from industrial applications and open-source agentic systems. For each task, we manually annotate around 20 user queries, each combined with the corresponding agentic system prompt to form an instruction. We then extract all the constraints from each instruction and investigate their types. As illustrated in Figure 2, we classify the constraints into three main types: (1) formatting constraints, which specify the structure or presentation of the output; (2) semantic constraints, which require semantic understanding to check, e.g., language style; (3) tool constraints, which involve adherence to tool specifications, e.g., the parameter format of a function. We also classify the representation types of these constraints into three types: (1) vanilla, where the constraint is described directly in plain text; (2) condition, where constraints are triggered under certain conditions, e.g., "if the output exceeds 100 words, include the keyword paper"; (3) example, which is similar to in-context learning [3], where the model is expected to follow the structure shown in examples. Finally, AGENT IF includes 707 high-quality manually annotated instructions, with each instruction containing an average of 11.9 constraints.
These real-world agentic instructions, with extended length (Figure 1(a)) and complex constraints, present significant challenges to existing LLMs. For the evaluation on AGENT IF, we annotate the evaluation method for each constraint, consisting of three paradigms: (1) code verification, which examines constraint satisfaction through Python code; (2) LLM verification, which assesses constraint satisfaction using large language models; (3) hybrid verification, which uses a combination of code and LLM verification. For example, when evaluating the constraint "The abstract should be no less than 100 words", the method first uses an LLM to extract the abstract section, followed by code verification to assess word count compliance.

We conduct comprehensive experiments to evaluate current advanced LLMs on AGENT IF. Specifically, we evaluate several representative thinking LLMs, non-thinking LLMs, and LLMs optimized for instruction following. As shown in Figure 1(b), all LLMs perform poorly, and even the best-performing model follows fewer than 30% of the instructions perfectly. We further conduct error

You are a top-tier code expert. Your objective: use specified predefined functions to write structured, high-quality Python code for <task>.
Task Description
1. Task Structure
- <task>: The user-provided task requiring code. When implementing, consider preceding code and execution history.
- If <task> has multiple objectives, complete only the first.
- Output comments and code within <code></code> tags. Call the given predefined function at most once.
- If outputting file links, save files to /mnt/data.
2. Available Resources
Use the following predefined function(s):
search: {'query': {'description': 'Search query', 'type': 'str', 'required': 'True'}, 'recency_days': {'description': 'Recency of search results
in days', 'type': 'int', 'required': 'False'}}
Example: Refer to the following example to solve the problem. Note that your response should follow the same format:
input:
<task>I will use the search function to query "Beijing today temperature" to get today's temperature information for Beijing.</task>
output:
<code>
'''
1. Use the search function to query "Beijing today temperature" and retrieve relevant search results.
2. Print the search results for subsequent analysis.
'''
search_result = search(query="Beijing today temperature")
print(search_result)
</code>
Query: <task>I need the latest information on Taylor Swift, including her profession, achievements, recent activities, related images, and videos. To obtain this information, I will use the search function to perform a web search and extract relevant details.</task>
(The constraint markers [C1]–[C8] in the figure annotate spans of this instruction with their constraint presentation type, i.e., vanilla, condition, or example, and their constraint type, i.e., formatting, semantic, or tool.)

Figure 2: An example instruction of AGENT IF.

analysis and find that the most challenging categories, condition and tool constraints, introduce new challenges. We also find that performance declines as instruction length increases in AGENT IF. Additionally, we identify a novel category of constraints, meta constraints, which reflects underlying prioritization issues. In conclusion, advanced LLMs still struggle to follow real-world agentic instructions.

2 Related Work

Instruction following is a fundamental capability of LLMs, referring to following user instructions, including task completion and adherence to user requirements. The most widely used instruction-following benchmark is IFEval [37], which is the first to formalize the task as multi-constraint compliance, such as requirements on output length and format. For example, an instruction in IFEval "Write a long email, with at least 500 words.
The email must include the keywords 'correlated' and 'experiencing'" includes constraints on output length and required keywords, which can be efficiently and accurately verified using Python code. Subsequent work has expanded instruction scope in several directions: (1) More constraint types [14, 23, 33, 29], which include constraints requiring semantic understanding (e.g., style) and adopt LLMs for evaluation. (2) Multilingual [33, 9, 10]. (3) Multi-turn [10, 9], which assesses instruction following in multi-turn dialogues, such as requiring the model to revise its response in the last round. (4) Code [30], which evaluates instruction-following capability in code generation. Notably, there are also two studies closely related to agentic scenarios. SysBench [22] evaluates compliance with system prompts, which are mined from realistic user logs. However, these prompts are typically short and lack tool usage, still leaving a gap from realistic agentic scenarios. AgentOrca [16] assesses adherence to operational constraints and routines of LLMs. It primarily focuses on compliance with function invocation and does not involve the complex system prompts or constraint types typical of real-world agents. In conclusion, existing instruction-following benchmarks overlook the evaluation of instruction compliance in realistic agentic scenarios. In realistic agentic scenarios, as shown in Figure 2, instructions are typically long, with complex constraint types, structures, and tool specifications. As LLM-based agents are increasingly deployed across various domains [7, 8, 4, 31, 32, 34], accurate adherence to agentic instructions becomes essential. To address this need, we introduce AGENT IF, the first instruction-following benchmark for agentic scenarios. AGENT IF comprises data from real-world industrial applications and open-source agentic workflows, with comprehensive human annotations. Each instruction in AGENT IF contains 1,700 tokens and 14 constraints on average,
which presents significant challenges to current LLMs.

3 AGENT IF

This section presents a detailed introduction to AGENT IF, including 4 parts: the constraint taxonomy (§ 3.1), the dataset construction process (§ 3.2), the statistics of AGENT IF (§ 3.3), and the evaluation protocol (§ 3.4). Figure 3 illustrates the dataset construction and evaluation workflow of AGENT IF.

Figure 3: The data construction process and evaluation workflow of AGENT IF. The detailed descriptions of different constraint types are presented in § 3.1. (The figure depicts three steps. Step 1, instruction collection: instructions from open-source agentic applications and industrial agent-based frameworks are segmented into blocks such as Introduction, Function, Examples, Query, and Task Description. Step 2, constraint annotation: block-wise constraint extraction with cross-block validation, producing records such as {"constraint": "Call the given predefined function at most once.", "source_from": "Task Description", "constraint presentation paradigm": "vanilla", "type": "tool"}. Step 3, evaluation generation: the paradigm is determined, a condition check is applied first for conditional constraints (skipping the constraint if the condition is not triggered), and a rule-based check such as def check_following(response), an LLM-based check such as "Does the response …? Yes/No", or a hybrid check that first extracts the constraint-relevant span yields a final score of 0 or 1.)
3.1 Constraint Taxonomy

To comprehensively evaluate LLMs' ability to follow complex instructions in agentic settings, we investigate about 100 instructions from real-world scenarios and construct a constraint taxonomy, which classifies constraints along two dimensions: constraint type, such as whether the constraint requires semantic understanding for verification, and constraint representation type, such as whether the constraint is conditionally triggered. More details and examples are provided in Appendix C.

Constraint Type The constraint type refers to the specific evaluation aspect, such as format or style. Following prior work [14, 29], we categorize constraints into two commonly used types and introduce a new type specific to agentic scenarios: tool constraints. Formatting constraints specify the structure or presentation of the output. These include requirements about syntax (e.g., JSON or Markdown), layout (e.g., bullet points, tables, or paragraph length), symbol conventions (e.g., using backticks for filenames), and step-by-step formatting (e.g., explaining a principle in three steps). Semantic constraints focus on the semantic meaning and informativeness of the content, including requirements for depth, completeness (e.g., inclusion of keywords or references), and adherence to a specific style or tone. Tool constraints are newly introduced specifically for agentic scenarios, requiring the model to invoke tools according to given specifications, such as adhering to the correct parameter types, avoiding internet access, or restricting tool usage to a predefined set of functions.

Constraint Presentation Type The constraint presentation type refers to how constraints are presented in text. We categorize this into three forms.
Vanilla constraints are described explicitly in plain text (e.g., include the keyword paper). Conditional constraints are triggered only under specific conditions, which may be derived from the input (e.g., responding only if certain keywords appear) or from the model's own output behavior (e.g., applying Markdown rules only when Markdown is used). Example constraints are not explicitly stated but implied through few-shot examples, like in-context learning [3], requiring the
model to infer and follow constraints from given output examples, which requires analogical reasoning and inductive capabilities.

3.2 Dataset Construction

As shown in Figure 3, AGENT IF is constructed through a semi-automated pipeline consisting of three main steps: agentic instruction collection, constraint extraction, and constraint evaluation design. More details of the data construction process and human annotation are shown in Appendix D.

Instruction Collection We focus on real-world agentic instructions, constructing our dataset from two sources: open-source agentic applications and industrial agent-based frameworks. We collect instructions based on two key principles: (1) Realistic: Each instruction should reflect practical, real-world agentic tasks. (2) Complex: Instructions should involve complex constraints, structures, and tool specifications that pose significant challenges for LLMs.

Benchmark          #Inst.  Len.   #Cons.  Data Resource  Tool  Conditional  Example  Code-based  LLM-based
IFEval [37]        541     36     1.5     Synthetic      ✗     ✗            ✗        ✓           ✗
FollowBench [14]   820     253    3.0     Synthetic      ✗     ✗            ✓        ✓           ✓
InfoBench [23]     500     38     4.5     Synthetic      ✗     ✗            ✗        ✗           ✓
SysBench [22]      500     521    2.4     Realistic      ✗     ✗            ✗        ✗           ✓
ComplexBench [29]  1,150   448    4.2     Synthetic      ✗     ✓            ✗        ✓           ✓
AgentOrca [16]     663     1,144  -       Synthetic      ✓     ✓            ✗        ✓           ✗
Multi-IF [10]      4,501   48     7.1     Synthetic      ✗     ✗            ✗        ✓           ✗
AGENT IF (ours)    707     1,723  11.9    Realistic      ✓     ✓            ✓        ✓           ✓

Table 1: The statistics of AGENT IF and previous instruction-following benchmarks. The statistics include dataset size (#Inst.), average instruction length (Len.), average number of constraints per instruction (#Cons.), data resource, constraint types (Tool, Conditional, Example), and supported evaluation methods (Code-based, LLM-based).
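To make the annotation output concrete, a single extracted constraint is recorded with the fields shown in Figure 3. The record below reuses the field names and example values from Figure 3; the dict representation itself is illustrative, not the dataset's actual serialization format:

```python
# One annotated constraint, using the field names from Figure 3.
# Values mirror the Figure 3 example; the record shape is illustrative.
constraint_record = {
    "constraint": "Call the given predefined function at most once.",
    "source_from": "Task Description",
    "constraint presentation paradigm": "vanilla",  # vanilla | condition | example
    "type": "tool",                                 # formatting | semantic | tool
}

assert constraint_record["type"] in {"formatting", "semantic", "tool"}
```

After cross-block validation, such a record may additionally carry content gathered from other blocks (e.g., the full function specification), as shown in Step 2 of Figure 3.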
Specifically, we first collect 40 agents from GitHub4, including well-known agentic applications such as Cursor and Manus, along with 10 agents from industrial agentic workflows, which support around 200 daily active users, handle approximately 300 requests per day, and have delivered a total of 120,000 services. The 50 collected agents include only system prompts, which contain task specifications, goals, and tool descriptions, without any user queries, ensuring no risk of user data leakage. Next, we use GPT-4o to generate about 20 user queries for each agent based on their system prompts. We then apply heuristic rules and similarity-based filtering to remove redundant queries. Finally, we employ human annotators to rewrite each generated query, ensuring consistency with real user cases. As a result, we obtain 707 high-quality instructions with an average length of 1,723 words.

Constraint Annotation As shown in Figure 3, we first use an LLM to automatically extract constraints from the instructions. As the original instructions are long with complex structure, directly extracting constraints poses a challenge for LLMs. To address this, we design a block-wise annotation method. Specifically, we segment each instruction into self-contained semantic blocks (e.g., task description, tool configuration, output specification), ensuring that the content is not truncated. We then use GPT-4o to extract relevant constraints from each block. Some constraints may span multiple blocks; for example, in Figure 3, the complete tool specifications are distributed across multiple blocks. Therefore, we perform cross-block validation to add this information and ensure the completeness of each constraint. Finally, we employ human annotators to validate each
constraint to verify its completeness and consistency with the original instruction. As a result, we obtain 8,415 high-quality constraints in total, with an average of 11.9 constraints per instruction.

Evaluation Generation Finally, we annotate each constraint with its corresponding evaluation method. Following prior work [14, 23, 29], we adopt a hybrid evaluation framework that combines LLM-based and code-based evaluation. We adopt different evaluation methods for the different types of constraints described in § 3.1. Specifically, as shown in Figure 3, we define three evaluation modes based on constraint types: (1) Code evaluation, which is used for constraints that can be verified through simple and deterministic Python code (e.g., keyword presence, formatting patterns). (2) LLM evaluation, which is applied to open-ended or subjective constraints requiring semantic understanding. In these cases, we adopt an LLM for evaluation. (3) Hybrid evaluation, which is used for more complex cases, where the LLM first identifies relevant segments in the response (e.g., extracting JSON for tool calls), followed by code-based validation. Notably, for conditional constraints, we annotate the evaluation process to first check whether the condition is met before performing constraint evaluation. We use GPT-4o to determine the evaluation method for each constraint and to generate the corresponding evaluation script. We then manually review all annotations to ensure the correctness of the generated evaluation methods and revise them as needed.

3.3 Data Statistics

The statistics of AGENT IF and other related benchmarks are summarized in Table 1. Compared to existing instruction-following benchmarks, AGENT IF features three key characteristics: (1) Realistic.
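As an illustration of the code-evaluation mode, a generated checker can follow the `check_following(response)` pattern shown in Figure 3. The function body below is our sketch for the IFEval-style email constraint quoted in § 2, not one of the benchmark's actual generated scripts:

```python
def check_following(response: str) -> bool:
    """Sketch of a code-based checker for the constraint:
    'at least 500 words, and must include the keywords
    correlated and experiencing' (IFEval-style example from the text)."""
    words = response.split()
    has_length = len(words) >= 500          # deterministic length check
    has_keywords = all(                     # deterministic keyword check
        kw in response.lower() for kw in ("correlated", "experiencing")
    )
    return has_length and has_keywords
```

In the hybrid mode, the same kind of function would receive only the constraint-relevant span (e.g., an abstract section extracted by an LLM) rather than the full response.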
4https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
https://github.com/Shubhamsaboo/awesome-llm-apps

Model                              Vanilla  Condition  Example  Formatting  Semantic  Tool  ISR   CSR
[T] o1-mini                        59.8     37.5       80.8     66.1        59.1      43.2  26.9  59.8
[N] GPT-4o                         58.0     35.1       80.8     65.8        56.5      43.2  26.4  58.5
[N] Qwen3-32B                      57.5     41.1       80.6     57.7        62.5      45.7  24.9  58.4
[T] QwQ-32B                        57.5     35.6       82.7     61.4        59.4      43.2  27.2  58.1
[T] DeepSeek-R1                    56.1     41.4       87.0     61.4        58.9      44.4  22.2  57.9
[T] GLM-Z1-32B                     56.7     37.9       83.6     60.2        59.6      43.1  23.8  57.8
[N] DeepSeek-V3                    54.9     41.5       84.5     59.3        58.9      40.8  21.9  56.7
[N] Claude-3-5-Sonnet              57.3     36.9       69.2     61.5        56.0      43.3  24.9  56.6
[N] Meta-Llama-3.1-70B-Instruct    55.1     35.0       84.3     61.6        55.6      42.8  20.9  56.3
[T] DeepSeek-R1-Distill-Qwen-32B   54.5     39.6       73.1     55.7        57.2      45.2  20.7  55.1
[T] DeepSeek-R1-Distill-Llama-70B  55.4     37.7       69.2     56.5        56.6      44.1  19.9  55.0
[N] Meta-Llama-3.1-8B-Instruct     53.5     36.6       71.4     55.6        54.8      43.5  19.9  53.6
[S] Crab-DPO-7B                    48.3     24.3       57.5     48.8        47.4      41.9  10.1  47.2
[N] Mistral-7B-Instruct-v0.3       47.9     29.2       53.8     47.0        48.6      39.8  11.5  46.8
[S] Conifer-DPO-7B                 45.6     27.0       50.5     42.0        46.9      41.8  10.7  44.3

Table 2: Success rates (%) of various
proprietary and open-source LLMs on AGENT IF, sorted by CSR in descending order. [N] denotes non-thinking models, [T] denotes thinking models, and [S] denotes models explicitly designed for instruction following by the academic community. As described in § 3.4, CSR indicates the proportion of correctly followed individual constraints, and ISR presents the proportion of instructions in which all constraints are satisfied.

AGENT IF is derived from real-world agentic scenarios and reflects real use cases. (2) Long. Instructions in AGENT IF are significantly longer than those in prior benchmarks, with an average length of 1,723 words. We also illustrate the length distribution of AGENT IF in Figure 1(a). We can observe that a substantial portion of instructions even exceeds 3,000 words, posing a significant challenge for existing LLMs. (3) Complex. Each instruction contains an average of 11.9 constraints, with good coverage of various constraint types such as tool, condition, and example constraints.

3.4 Evaluation Protocol

Figure 3 illustrates our evaluation methodology. The process first determines whether each constraint requires verification. Condition constraints that are not triggered are excluded from verification. We then adopt the annotated corresponding evaluation method. For LLM-based and hybrid evaluations, we employ gpt-4o-2024-11-20. For evaluation metrics, following prior work [22, 37], AGENT IF adopts two metrics: constraint success rate (CSR) and instruction success rate (ISR). CSR measures the proportion of individual constraints that are correctly satisfied by the model's response. For a given instruction $i$, $C_i$ is the number of constraints associated with it, and $c_{i,j}$ indicates whether the $j$-th constraint in instruction $i$ is satisfied. ISR measures the proportion of instructions for which all constraints are satisfied.
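Both metrics can be computed directly from per-constraint pass/fail results. A minimal sketch (the nested-list input format is our assumption, not the benchmark's actual data layout):

```python
def csr_isr(results):
    """Compute (CSR, ISR) from per-constraint outcomes.

    results: one list of booleans per instruction, where results[i][j]
    is True iff the j-th constraint of instruction i is satisfied."""
    total_constraints = sum(len(r) for r in results)
    satisfied = sum(sum(r) for r in results)
    csr = satisfied / total_constraints          # constraint success rate
    isr = sum(all(r) for r in results) / len(results)  # instruction success rate
    return csr, isr

# Two instructions with two constraints each; one instruction fully satisfied.
csr, isr = csr_isr([[True, True], [True, False]])  # csr = 0.75, isr = 0.5
```

Note that ISR is strict: a single failed constraint zeroes out the whole instruction, which is why the reported ISR values are far below CSR.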
Suppose $N$ is the number of instructions:

$$\mathrm{CSR}=\frac{\sum_{i=1}^{N}\sum_{j=1}^{C_i}\mathbb{1}\left[c_{i,j}=\text{satisfied}\right]}{\sum_{i=1}^{N}C_i};\qquad \mathrm{ISR}=\frac{\sum_{i=1}^{N}\mathbb{1}\left[\bigwedge_{j=1}^{C_i}\left(c_{i,j}=\text{satisfied}\right)\right]}{N}$$

4 Experiments

In this section, we introduce the experiments and empirical analyses on AGENT IF, including the experimental setup (§ 4.1), main results on AGENT IF (§ 4.2), error analysis (§ 4.3), and analyses of instruction length (§ 4.4) and meta constraints (§ 4.5).

4.1 Experimental Setup

We evaluate various advanced LLMs on AGENT IF, including non-thinking models: GPT-4o [11], DeepSeek-V3 [17], Claude 3.5 Sonnet [1], LLaMA 3.1 Series [5], Qwen3 [26], and Mistral [13]; thinking models: o1-mini [12], QwQ 32B [25], GLM-Z1 32B [27], DeepSeek-R1 [6], and DeepSeek-R1 distilled models [6]; and academic models developed specifically for instruction following, including Crab [21] and Conifer [24]. For all models, we set the sampling temperature to 0. For reasoning models, we remove intermediate reasoning tokens and evaluate only the final response.

4.2 Main Results

All experimental results are shown in Table 2. We can observe that: (1) All models demonstrate suboptimal performance. Even the best-performing model, o1-mini, achieves only a CSR of 59.8. ISR results are even lower, with the highest reaching just 27.2. Compared to their performance on the commonly used benchmark IFEval [37], all models exhibit a dramatic drop; for example, GPT-4o drops from 87.0 to 58.5. This indicates that existing models are still far from perfectly following constraints in agentic scenarios. AGENT IF poses a significant
challenge to existing LLMs, highlighting the need for further research in instruction following under real-world agentic scenarios. As instruction following is a prerequisite ability for building reliable LLM-based agents, we argue that prior to developing agents, it is crucial to evaluate the fundamental instruction-following abilities of LLMs to inform effective prompt and workflow design. (2) For specific constraint types, models perform much lower on condition and tool constraints. Compared to vanilla constraints, condition constraints require an additional step to determine whether the constraint should be triggered. Thus, the low success rate on condition constraints may be because models struggle to correctly determine whether the condition is triggered. As condition-based constraints account for approximately 42.6% of real-world applications and are usually overlooked in existing instruction-following research [21, 24], we advocate for increased efforts to handle conditional instructions. For tool constraints, failures may stem from issues such as missing required parameters or failing to invoke the specified tools. Detailed error analysis is provided in § 4.3. The primary reason may be that models fail in handling specification-heavy tasks [20, 2], and tool usage typically involves complex specifications. As tool usage is necessary in agentic applications, we call for more attention to tool specification adherence to enable more reliable agent behavior. For example constraints, models perform relatively better. This suggests that they can effectively infer and meet requirements when provided with in-context examples, indicating that providing concrete examples in prompt design can facilitate better understanding and imitation of desired behaviors.
(3) Across different models, we find that thinking models generally perform better, suggesting that complex instruction following also requires reasoning capabilities and benefits from test-time scaling. In contrast, models specifically trained for instruction following by the academic community perform worse. This may be due to their primary focus on constructing SFT datasets for training base models [21, 24]. Given that industry models, such as Llama 3, already demonstrate strong performance due to large-scale SFT data, we encourage the academic community to explore more advanced approaches, such as reinforcement learning [15], which has recently proven effective for enhancing model capabilities [6] but remains under-explored in instruction following. In conclusion, AGENT IF poses significant challenges for existing models, particularly on constraints commonly used in agentic applications, such as tool constraints. We call for increased research efforts to improve instruction following in agentic scenarios.

4.3 Error Analysis

As shown in Figure 2, LLMs perform particularly poorly on two types of common constraints in agentic scenarios: condition and tool constraints. Therefore, we conduct a detailed error analysis on these cases. Specifically, we analyze the errors of four representative LLMs, including o1-mini, GPT-4o, QwQ-32B, and DeepSeek-R1, and manually investigate their error types.

Analysis on Condition Constraints We identify two main types of failure in following condition constraints: (1) incorrect condition checking, where the LLM fails to determine whether a condition is triggered; and (2) instruction-following failure, where the LLM fails to follow the constraint even when the condition is triggered. We conduct a controlled experiment to assess the relative proportion of the two causes.
https://arxiv.org/abs/2505.16944v1
Specifically, we select all the failed conditional constraints from each investigated model, remove their conditional components, and then convert them into vanilla constraints that must be met while keeping all other elements unchanged. If the model then succeeds, it indicates an error in condition checking; if it still fails, it suggests a general failure to follow the constraint. The results are shown in Figure 4(a). We can observe that a substantial portion (above 30%) of errors are due to incorrect condition checks, suggesting that condition constraints introduce new challenges in determining whether a constraint is triggered. Since existing work on instruction following usually overlooks such conditional constraints, we advocate constructing targeted post-training data.

[Figure 4: two bar charts over o1-mini, GPT-4o, QwQ-32B, and DeepSeek-R1; panel (a) shows proportions of incorrect condition checking vs. instruction following failure, and panel (b) shows proportions of disallowed tool usage, omission of required tools, tool name errors, and parameter errors.]
Figure 4: Error proportions (%) on condition and tool constraints. Figure (a) shows the errors in handling condition constraints, including condition check failure, where the model fails to recognize the condition, and constraint following failure. Figure (b) shows the errors from tool constraints, including disallowed tool usage (utilizing explicitly prohibited tools), omission of required tools (failing to employ required tools), tool name errors (invoking non-existent or incorrect tools), and parameter errors (applying incorrect or illegal arguments).

[Figure 5: success rates (Average CSR and Average ISR); panel (a) buckets by instruction length ([0,200], (200,1000], (1000,3000], (3000,6000], (6000,10000]) and panel (b) by constraint count ([0,10], (10,20], (20,40], (40,80], (80,160]).]
Figure 5: Success rates on instructions with varying length or constraint counts.
Gray lines show results of the top 6 models in Figure 2, and the colored lines present their average.

Analysis on Tool Constraint. We conduct error analysis on tool constraint violations. Specifically, we sample 50 tool usage errors from each investigated model and identify four primary error categories: disallowed tool usage, omission of required tools, tool name errors, and parameter errors. The error proportions are shown in Figure 4(b). We observe that disallowed tool usage and omission of required tools constitute the primary errors, suggesting that tool invocation decision-making remains challenging for models. A portion of errors also stems from non-compliance with tool usage specifications, exhibited as incorrect tool names or parameters. Notably, we observe an interesting phenomenon: thinking models more frequently neglect required tools. The reason may be that thinking models tend to rely more on their internal knowledge. We encourage the research community to investigate these underlying causes further.

4.4 Analysis of Instruction Length and Constraint Quantity

While prior work has shown that long texts pose significant challenges for LLMs, most studies have focused on long textual contexts within the query [2], such as long-document question answering. Little work investigates the challenges arising from long instructions, which may be due to the lack of evaluation data with lengthy instructions. AgentIF provides such an evaluation platform, with instructions averaging 1,723 words and containing about 11.9 constraints each. We analyze model performance on instructions of varying lengths and constraint counts in AgentIF. We bucket the data
by instruction length or number of constraints, then compute the success rates within each bucket. The results are shown in Figure 5. We observe that model performance generally declines with increasing instruction length or constraint count, indicating that longer instructions or those with more constraints are inherently more difficult, which is consistent with intuition.

[Figure 6(a) examples of the three meta constraint types — Constraint Selection (91.4%): "When a task lists several constraints, you are required to satisfy only the first constraint and may ignore the others."; Constraint Prioritization (1.0%): "If there is any conflict between adhering to the time-boxed agenda and covering all suggested talking points or discussion topics, prioritize maintaining the time-boxed structure."; Constraint Detailing (7.5%): "You should address both 'analyze emotional state' and 'identify patterns' in separate sentences, using non-overlapping language and observations to avoid redundancy." Figure 6(b): success rates of o1-mini, GPT-4o, QwQ-32B, and DeepSeek-R1 on the Detailing, Selection, and Prioritization types.]
Figure 6: Figure (a) illustrates three types of meta constraints and examples. Most meta constraints fall within the Constraint Selection category, which requires models to follow one specific constraint. Figure (b) presents the success rates of different investigated models on each meta constraint type.

Notably, when instruction length exceeds 6,000 words, the ISR scores of all models are nearly 0. This indicates that overly long instructions are rarely followed perfectly and should be avoided in practice. Instead, one can explore decomposing tasks into several sub-tasks with shorter instructions to alleviate instruction-following failures [35]. We call for more research efforts to enhance models' ability to follow long instructions. As discussed in [20], the primary reason LLMs fail on specification-heavy instructions is the in-context learning limitation.
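The bucketed success-rate analysis can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: CSR is taken here as the average fraction of constraints satisfied per instruction, ISR as the fraction of instructions with every constraint satisfied, the bucket edges follow Figure 5(a), and the data layout is made up.

```python
from bisect import bisect_right
from collections import defaultdict

# Sketch of the bucketed analysis (hypothetical data layout): each record is
# (instruction_word_count, [per-constraint pass/fail booleans]). CSR averages
# the per-constraint pass rate; ISR counts an instruction as successful only
# if every constraint passes.

EDGES = [200, 1000, 3000, 6000, 10000]  # bucket upper bounds, as in Figure 5(a)

def bucketed_rates(records):
    buckets = defaultdict(list)
    for length, passes in records:
        # length-1 makes each upper bound inclusive: 200 falls in [0,200]
        buckets[bisect_right(EDGES, length - 1)].append(passes)
    out = {}
    for b, group in sorted(buckets.items()):
        csr = sum(sum(p) / len(p) for p in group) / len(group)
        isr = sum(all(p) for p in group) / len(group)
        out[b] = (round(100 * csr, 1), round(100 * isr, 1))
    return out

data = [(150, [True, True]), (180, [True, False]),
        (2500, [True, False, False]), (7000, [False, False])]
print(bucketed_rates(data))  # {0: (75.0, 50.0), 2: (33.3, 0.0), 4: (0.0, 0.0)}
```

Using `bisect_right` on `length - 1` keeps the half-open bucket convention of the figure, e.g. an instruction of exactly 1,000 words lands in (200, 1000] rather than (1000, 3000].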
A promising direction is collecting post-training data with long instructions, which remains underexplored due to the scarcity of such data. One potential source is manuals⁵, such as camera manuals, which can serve as long instructions and be used to automatically construct question-answer pairs for post-training. We leave this for future work.

4.5 Analysis of Meta Constraints

We observe a prevalent type of constraint in AgentIF, which we refer to as meta constraints. Unlike regular constraints that apply directly to the model's response, meta constraints govern other constraints. We find approximately 25% of instructions in AgentIF include meta constraints. As shown in Figure 6(a), we categorize them into three types: (1) constraint selection, where the meta constraint requires the model to follow only a specific constraint; (2) constraint detailing, where it adds further requirements to an existing constraint; and (3) constraint prioritization, where it defines the relative priority among multiple constraints. Figure 6(b) illustrates the success rates on different meta constraints. We observe that models generally perform the worst on constraint selection. One possible reason is that the meta constraint may conflict with the original constraints, which confuses LLMs. Future work may explore giving meta constraints higher priority to improve compliance [28], while carefully mitigating potential safety risks such as prompt injection attacks [18].

5 Conclusion

In this paper, we present AgentIF, the first instruction-following benchmark for agentic scenarios. AgentIF comprises 707 instructions across 50 real-world agentic applications. Each instruction has an average length of 1,717 tokens and includes approximately 11.9 constraints, covering
a diverse range such as condition and tool constraints. We evaluate various representative and advanced LLMs on AgentIF and find that current models generally perform poorly: the best model perfectly follows fewer than 30% of the instructions, which suggests that AgentIF poses a significant challenge. We further conduct analytical experiments to investigate the failure modes and find that condition and tool constraints introduce new challenges. We also observe performance degradation as instruction length increases, which aligns with the intuition that longer instructions are more difficult. We encourage more research efforts to enhance instruction-following capabilities in agentic scenarios.

⁵ https://manymanuals.com/

References

[1] Anthropic. Introducing Claude 3.5 Sonnet, October 2024. Accessed: 2025-05-15.
[2] Yushi Bai, Shangqing Tu, Jiajie Zhang, Hao Peng, Xiaozhi Wang, Xin Lv, Shulin Cao, Jiazheng Xu, Lei Hou, Yuxiao Dong, et al. LongBench v2: Towards deeper understanding and reasoning on realistic long-context multitasks. arXiv preprint arXiv:2412.15204, 2024.
[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In Proceedings of NeurIPS, pages 1877–1901, 2020.
[4] Zhendong Chu, Shen Wang, Jian Xie, Tinghui Zhu, Yibo Yan, Jinheng Ye, Aoxiao Zhong, Xuming Hu, Jing Liang, Philip S Yu, et al. LLM agents for education: Advances and applications. arXiv preprint arXiv:2503.11733, 2025.
[5] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv e-prints, pages arXiv–2407, 2024.
[6] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al.
DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[7] Izzeddin Gur, Hiroki Furuta, Austin V Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. A real-world WebAgent with planning, long context understanding, and program synthesis. In Proceedings of ICLR, 2024.
[8] Hongliang He, Wenlin Yao, Kaixin Ma, Wenhao Yu, Yong Dai, Hongming Zhang, Zhenzhong Lan, and Dong Yu. WebVoyager: Building an end-to-end web agent with large multimodal models. In Proceedings of ACL, pages 6864–6890, 2024.
[9] Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Jiaqing Liang, and Yanghua Xiao. Can large language models understand real-world complex instructions? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18188–18196, 2024.
[10] Yun He, Di Jin, Chaoqi Wang, Chloe Bi, Karishma Mandyam, Hejia Zhang, Chen Zhu, Ning Li, Tengyu Xu, Hongjiang Lv, et al. Multi-IF: Benchmarking LLMs on multi-turn and multilingual instructions following. arXiv preprint arXiv:2410.15553, 2024.
[11] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.
[12] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.
[13] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra
Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7B, 2023.
[14] Yuxin Jiang, Yufei Wang, Xingshan Zeng, Wanjun Zhong, Liangyou Li, Fei Mi, Lifeng Shang, Xin Jiang, Qun Liu, and Wei Wang. FollowBench: A multi-level fine-grained constraints following benchmark for large language models. arXiv preprint arXiv:2310.20410, 2023.
[15] Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, et al. Tülu 3: Pushing frontiers in open language model post-training. arXiv preprint arXiv:2411.15124, 2024.
[16] Zekun Li, Shinda Huang, Jiangtian Wang, Nathan Zhang, Antonis Antoniades, Wenyue Hua, Kaijie Zhu, Sirui Zeng, William Yang Wang, and Xifeng Yan. AgentOrca: A dual-system framework to evaluate language agents on operational routine and constraint adherence. arXiv preprint arXiv:2503.08669, 2025.
[17] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.
[18] Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, Xiaofeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, et al. Prompt injection attack against LLM-integrated applications. arXiv preprint arXiv:2306.05499, 2023.
[19] Junyu Luo, Weizhi Zhang, Ye Yuan, Yusheng Zhao, Junwei Yang, Yiyang Gu, Bohan Wu, Binqi Chen, Ziyue Qiao, Qingqing Long, et al. Large language model agent: A survey on methodology, applications and challenges. arXiv preprint arXiv:2503.21460, 2025.
[20] Hao Peng, Xiaozhi Wang, Jianhui Chen, Weikai Li, Yunjia Qi, Zimu Wang, Zhili Wu, Kaisheng Zeng, Bin Xu, Lei Hou, et al. When does in-context learning fall short and why?
A study on specification-heavy tasks. arXiv preprint arXiv:2311.08993, 2023.
[21] Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, and Juanzi Li. Constraint back-translation improves complex instruction following of large language models. arXiv preprint arXiv:2410.24175, 2024.
[22] Yanzhao Qin, Tao Zhang, Yanjun Shen, Wenjing Luo, Haoze Sun, Yan Zhang, Yujing Qiao, Weipeng Chen, Zenan Zhou, Wentao Zhang, et al. SysBench: Can large language models follow system messages? arXiv preprint arXiv:2408.10943, 2024.
[23] Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, Pengfei Liu, and Dong Yu. InfoBench: Evaluating instruction following ability in large language models. arXiv preprint arXiv:2401.03601, 2024.
[24] Haoran Sun, Lixin Liu, Junjie Li, Fengyu Wang, Baohua Dong, Ran Lin, and Ruohui Huang. Conifer: Improving complex constrained instruction-following ability of large language models. arXiv preprint arXiv:2404.02823, 2024.
[25] Qwen Team. QwQ: Reflect deeply on the boundaries of the unknown, November 2024. Accessed: 2025-05-15.
[26] Qwen Team. Qwen3: Think deeper, act faster, April 2025. Accessed: 2025-05-15.
[27] THUDM. GLM-Z1-32B-0414: A deep reasoning model, April 2025. Accessed: 2025-05-15.
[28] Eric Wallace, Kai Xiao, Reimar Leike, Lilian Weng, Johannes Heidecke, and Alex Beutel. The instruction hierarchy: Training LLMs to prioritize privileged instructions. arXiv preprint arXiv:2404.13208, 2024.
[29] Bosi Wen, Pei Ke, Xiaotao Gu,
Lindong Wu, Hao Huang, Jinfeng Zhou, Wenchuang Li, Binxin Hu, Wendy Gao, Jiaxing Xu, et al. Benchmarking complex instruction-following with multiple constraints composition. Advances in Neural Information Processing Systems, 37:137610–137645, 2024.
[30] Kaiwen Yan, Hongcheng Guo, Xuanqing Shi, Jingyi Xu, Yaonan Gu, and Zhoujun Li. CodeIF: Benchmarking the instruction-following capabilities of large language models for code generation. arXiv preprint arXiv:2502.19166, 2025.
[31] Chaoyun Zhang, Shilin He, Jiaxu Qian, Bowen Li, Liqun Li, Si Qin, Yu Kang, Minghua Ma, Guyue Liu, Qingwei Lin, et al. Large language model-brained GUI agents: A survey. arXiv preprint arXiv:2411.18279, 2024.
[32] Chi Zhang, Zhao Yang, Jiaxuan Liu, Yanda Li, Yucheng Han, Xin Chen, Zebiao Huang, Bin Fu, and Gang Yu. AppAgent: Multimodal agents as smartphone users. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pages 1–20, 2025.
[33] Tao Zhang, Yanjun Shen, Wenjing Luo, Yan Zhang, Hao Liang, Fan Yang, Mingan Lin, Yujing Qiao, Weipeng Chen, Bin Cui, et al. CFBench: A comprehensive constraints-following benchmark for LLMs. arXiv preprint arXiv:2408.01122, 2024.
[34] Hao Zheng, Xinyan Guan, Hao Kong, Jia Zheng, Weixiang Zhou, Hongyu Lin, Yaojie Lu, Ben He, Xianpei Han, and Le Sun. PPTAgent: Generating and evaluating presentations beyond text-to-slides. arXiv preprint arXiv:2501.03936, 2025.
[35] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, et al. Least-to-most prompting enables complex reasoning in large language models. In Proceedings of ICLR, 2023.
[36] Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. Instruction-following evaluation for large language models. arXiv preprint arXiv:2311.07911, 2023.
Appendices

A Limitations

We discuss the limitations as follows: (1) Although the construction of AgentIF is semi-automated, it still requires substantial manual verification, which limits its direct scaling to large volumes of data. In the future, we plan to explore automated methods for constructing post-training data to enhance instruction-following capabilities. (2) AgentIF includes instructions only in Chinese and English, lacking broader multilingual coverage, which may limit its broader usage. We encourage the community to extend the dataset to support more languages. (3) All experiments are conducted in a zero-shot setting, and we do not explore prompt engineering techniques such as in-context demonstrations. As this work focuses on dataset construction and model evaluation, we leave prompt engineering for future work.

B Ethical Considerations

We discuss ethical concerns and broader impacts as follows: (1) Intellectual Property. The agents obtained from GitHub repositories are shared under GPL-3.0⁶ or Apache-2.0⁷ licenses. We strictly adhere to the respective licensing terms, and all data are used solely for academic research. For industrial agents, we obtained approval from internal review boards to use and release the data under the GPL-3.0 license. These industrial applications target enterprise users in China, and we only collect statistics on active users
and service requests and do not conduct user behavior analysis. Notably, the agents include only system prompts without any user queries to prevent information leakage. Instead, all user queries in AgentIF are created by hired annotators. AgentIF will be released under the GPL-3.0 license. (2) Broader Impacts. This work aims to construct a benchmark for evaluating instruction following in agentic scenarios. Leaderboard rankings on AgentIF should not be used for adversarial comparisons or interpreted as evidence of misconduct in other research efforts. The benchmark data should not be incorporated into the training process of LLMs, to avoid potential contamination or leakage. (3) Controlling Potential Risks. All data have been subjected to rigorous safety checks and are well anonymized to eliminate sensitive content. (4) Human Annotation. We employ eight annotators (gender-balanced) for data annotation and verification. All annotators are fairly paid based on workload and are informed of the intended use of the data before annotation. No personal information of annotators is involved in the dataset. (5) LLM Usage. We used GPT-4o and Claude 3.7 Sonnet for paraphrasing some sentences of this paper.

C Details of Constraint Taxonomy

C.1 Constraint Type

This section provides a detailed overview of our constraint taxonomy. Figure 7 illustrates the distribution of constraints across different categories.

Formatting Constraints dictate the structural form and presentation of model outputs. They ensure responses conform to a specific format, layout, or visual style, which is vital for machine readability or subsequent processing. Examples include: (1) Syntax Formatting: requiring outputs in formats like JSON, XML, or Markdown. (2) Layout Structure: specifying bullet points, tables, or paragraph length. (3) Symbol Conventions: enforcing specific symbols like date patterns or currency symbols.
(4) Instructional Structure: mandating specific response organizations, such as "explain the concept in three steps."

Semantic Constraints govern the factual correctness, informativeness, and intended meaning of model outputs. They ensure content aligns with task requirements and includes essential semantic elements. Common types are: (1) Content Targeting: restricting the output's topic or focus. (2) Information Completeness: requiring specific elements, such as time, location, or people. (3) Keyword Presence: mandating the inclusion of particular phrases or terms. (4) Stylistic Requirements: dictating aspects like "use language understandable to children."

Tool Constraints restrict the computational or external resources a model can use when generating responses. These are crucial for simulating specific environments or adhering to usage restrictions. Examples include: (1) Tool Usage: restricting callable functions or external tools, e.g., "only built-in Python functions may be used." (2) Computational Limitations: imposing resource constraints, such as "do not use GPU acceleration."

⁶ https://www.gnu.org/licenses/gpl-3.0.en.html
⁷ https://www.apache.org/licenses/LICENSE-2.0

C.2 Constraint Presentation Type

[Figure 7 data: inner ring — Semantic 46.5%, Formatting 38.5%, Tool 15.0%; outer ring — Vanilla 69.8%, Condition 19.6%, Example 10.6%.]
Figure 7: Distribution of constraint types. The inner ring shows the breakdown of constraint types into Semantic, Formatting, and Tool categories. The outer ring further categorizes constraints based on their presentation type, including Vanilla, Condition, and Example. Semantic constraints and vanilla activation are the most prevalent in the dataset.

The constraint presentation type refers to how constraints are conveyed to the model within the
instruction. We categorize this type into three distinct forms:

Vanilla Constraints are stated directly and unconditionally in the prompt. These constraints are always in effect, regardless of the input content or model behavior. For example, "Answer in Chinese" means the model must always respond in Chinese, regardless of the question.

Condition Constraints are activated only when specific conditions are met, either from the user input or the model's output. For instance, a constraint like "flag the response if it contains sensitive content" depends on input triggers, while "apply markdown formatting only when markdown is used" relies on the model's behavioral context.

Example Constraints are not explicitly stated but implied through in-context demonstrations. For example, providing an example written in Shakespearean English implicitly requires the model to generate responses in a similar style, and showing few-shot outputs in structured JSON format signals that the same structure should be followed.

D Detailed Information about Dataset Construction

D.1 Prompts for Automatic Annotation

Table 3 provides the prompt template for instruction collection. Templates for constraint annotation are found in Tables 4 and 5. In the evaluation generation phase, the template for generating conditional checks is in Table 6. Finally, Tables 8, 10, and 9 detail the prompt templates for generating the three types of evaluation.

D.2 Detailed Information about Human Annotation

We invited graduate students from the Department of Computer Science to participate in the data annotation process and fairly compensated them based on pre-agreed salaries and workloads. All employment was formalized through contracts and conducted in full compliance with local regulations. During the annotation process, we conducted three rounds of sampled review and feedback to iteratively refine and finalize high-quality annotations.

You are given two pieces of information:
1.
A Task Description summarizing what the agent is designed to do.
2. Input Variable Annotations listing the input variables, each with a brief explanation.

Your task is to:
• Generate multiple sets of variable content that are rich, detailed, and expanded. Each set must include specific, meaningful, and realistic information, maximizing substance while staying coherent.
• Ensure strong diversity across examples, including both speaking styles (storytelling, poetic, formal, humorous, dramatic, etc.) and imagined scenarios (different realistic scenes that meaningfully shape the content).
• Match the meaning of each variable based on its comment from Input Variable Annotations.
• Use the Task Description to guide the overall theme and content of the generated examples.

Input: {input}

Output Format: Return a JSON array where:
• Each item is a JSON object (a dict) corresponding to one complete example.
• The keys of each object exactly match the variable names listed in the Input Variable Annotations.

Example structure:
[
  { "variable_name_1": "<filled content>", "variable_name_2": "<filled content>" },
  { "variable_name_1": "<filled content>", "variable_name_2": "<filled content>" }
]

Table 3: Prompt for query generation in instruction collection.

E Experimental Details

Evaluation Models. We conduct our evaluation using a diverse set of language models, including o1-mini, GPT-4o (2024-11-20), Qwen3-32B, QwQ-32B, DeepSeek-R1, GLM-Z1-32B (0414), DeepSeek-V3 (2024-03-25), Claude 3.5 Sonnet (2024-10-22), Meta-LLaMA-3.1-70B-Instruct, DeepSeek-R1-Distill-Qwen-32B, DeepSeek-R1-Distill-LLaMA-70B, Meta-LLaMA-3.1-8B-Instruct, Mistral-Crab-DPO-7B, Mistral-7B-Instruct-v0.3, and Mistral-Conifer-DPO-7B.
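The query-generation prompt in Table 3 pins down a machine-checkable output contract: a JSON array whose objects' keys exactly match the annotated variable names. A response can therefore be filtered mechanically; a minimal sketch (the variable names are invented for illustration, and this is not the paper's pipeline):

```python
import json

# Sketch of a post-hoc check on the query-generation output (Table 3):
# keep only objects whose keys exactly match the annotated variable names
# and whose values are non-empty strings. Variable names are hypothetical.

def validate_examples(raw: str, variable_names: set[str]) -> list[dict]:
    """Parse the model output and keep only well-formed example objects."""
    items = json.loads(raw)
    if not isinstance(items, list):
        raise ValueError("expected a JSON array")
    return [ex for ex in items
            if isinstance(ex, dict) and set(ex) == variable_names
            and all(isinstance(v, str) and v.strip() for v in ex.values())]

raw = '[{"topic": "reaction yields", "style": "formal"}, {"topic": "x"}]'
print(len(validate_examples(raw, {"topic", "style"})))  # 1 well-formed example
```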
Experimental Hyperparameters. In all experiments, we set the temperature to 0 to ensure reproducibility. The maximum number of generated tokens is set to 32,000. For models with a context length shorter than 32,000 tokens, we set their maximum allowable context length to match the max-token limit.

Experiment Cost. We use gpt-4o-2024-11-20 as the evaluator throughout our experiments. Each evaluation round costs approximately $20.

You are given a system prompt. Your job is to extract atomic constraints that apply specifically to the expected response generated by the model, as dictated by the system prompt. Please follow the instructions below precisely:
• Only extract constraints that apply to the response, not those describing or constraining the input variables or instructions to the user.
• Read the system prompt line by line and extract the smallest possible atomic constraint units from any content that imposes rules, structure, or expectations on the response.
• This includes:
  – Explicit instructions such as: "Your task is to...", "You must...", "Please do the following..."
  – Numbered lists of required actions
  – Formatting, style, or output structure expectations
  – Content rules or restrictions
  – Demonstrations or examples that implicitly define how the model should respond (e.g., few-shot examples, response templates, or stylistic samples). In these cases, extract the implied constraint as faithfully as possible (e.g., "Respond in JSON format", "Follow the narrative style shown").
• For each constraint, determine whether it is:
  – "vanilla": The rule applies to all responses regardless of input or task branch.
  – "conditional": The rule only applies in certain contexts (e.g., a certain kind of task or response type).
  – "example": The rule is not explicitly stated, but implied from a given example or demonstration in the prompt.
• Sometimes the system prompt gives a rule or behavior without explicitly stating the condition, but it is only meant to apply in a certain type of response. In such cases, you should infer the missing condition and rewrite the constraint in the form If [condition], then [rule]. These should be marked as "conditional" even if no "if" appears in the prompt.
• When the prompt contains an example, you should recognize that it implies certain response expectations, and you are expected to infer the corresponding constraints accordingly.
• Do not rewrite or generalize the constraint. Extract the exact wording from the prompt wherever possible.
• Return your answer as a list of dictionaries, where each dictionary contains:
  – desc: the extracted constraint (verbatim from the prompt, or inferred if conditional)
  – dimension: one of "unconditional", "conditional", or "example"
• If the system prompt contains no constraints on the response, return an empty list: []

Example Output:
[
  { "desc": "Always use \"you\" and \"your\" when addressing the user.", "dimension": "vanilla" },
  { "desc": "If no symptoms are reported, explain why further screening is still necessary.", "dimension": "conditional" }
]

Input System Prompt: {prompt}

Table 4: Prompt for extracting atom-level constraints from block instructions in constraint annotation.

You are given a single constraint. Your task is to classify this constraint into one or
more of the following categories. Prefer to choose only one category unless you believe the constraint clearly fits multiple types. After classification, provide a brief explanation supporting your decision.

Constraint Categories:
1. formatting — Controls the structure or presentation format of the output. Examples include:
  • Syntax format (e.g., JSON, XML, Markdown)
  • Layout and structure (e.g., bullet points, tables, paragraph length)
  • Symbol and notation norms (e.g., date format "YYYY-MM-DD", currency symbol "¥")
  • Interaction steps (e.g., "explain the principle in three steps")
  Example: "Present the result using LaTeX math notation"
2. semantic — Ensures the output content is meaningful, accurate, and complete. Examples include:
  • Factual accuracy (e.g., no fabricated data)
  • Logical consistency
  • Information completeness (must include specified elements)
  • Keyword requirements (must contain specified terms)
  • Style or tone (e.g., "explain in child-friendly language")
  • Neutrality of position (e.g., "avoid emotionally charged language")
  • Terminology standards (e.g., "use ISO names for chemicals")
  Example: "The answer must include the event's time, location, and key figures"
3. tool — Limits the usage of computational resources or external dependencies. Examples include:
  • Data source limitations (e.g., "only use data from after 2020")
  • Computational restrictions (e.g., "no GPU acceleration")
  • Tool/library restrictions (e.g., "use only built-in Python functions")
  Example: "Do not access any online resources during analysis"

Output Format:
{
  "type_list": ["Your chosen category or categories"],
  "explanation": "Your reasoning for choosing this classification"
}

Input Constraint: {constraint}

Table 5: Prompt for classifying constraints into formatting, semantic, or tool categories in constraint annotation.

You are a helpful assistant that specializes in constraint verification. Your task is to process a conditional constraint and produce two outputs:
1.
A yes/no question that can be used to verify whether the condition is satisfied.
2. The main constraint that should be enforced if the condition is true. This should exclude the conditional part and be expressed as a standalone, unconditional constraint.

Please follow these instructions:
• If the condition refers to the input query, the question should focus on analyzing the input.
• If the condition refers to the response, the question should focus on analyzing the response.
• Keep the extracted main constraint faithful to the original meaning, but remove the conditional clause (e.g., remove "If..." or "When...").

Return your output as a JSON dictionary with the following keys:
{
  "condition_check_question": "{Your yes/no question}? Please answer YES/NO directly and do not enter anything else.",
  "main_constraint": "{The unconditional constraint to verify if the condition is true.}"
}

Note: The constraint itself is the primary basis for generation. The instruction paragraph is provided only as auxiliary context, and should be used only when the constraint alone is ambiguous or underspecified.

Input
Here is the full instruction paragraph where the constraint appears: {instruction}
The constraint to verify: {constraint}

Table 6: Prompt for decomposing a conditional constraint into a condition-checking question and a standalone constraint in evaluation generation.

You are an expert at analyzing natural language constraints and determining how they can be verified. Your task is to classify a given constraint based on whether it can be validated:
1. code —
https://arxiv.org/abs/2505.16944v1
Directly by code 2.llm_assisted_code — By code after extracting needed content via LLM 3.llm — Only by using LLM to semantically assess it Please follow these guidelines: •If the constraint can be validated by simple logic (e.g., length, presence, format) and the content is directly accessible from the response, classify it as code . •If the constraint requires extracting a specific section from the response (e.g., “intro”, “conclusion”, “step 1”) before performing validation (e.g., counting words), classify it as llm_assisted_code . •If the constraint requires open-ended, semantic, or subjective understanding (e.g., logical correctness, relevance, tone, or fact-checking), classify it as llm. Return your answer only as a JSON dictionary in the following format: { "constraint_type": "code" | "llm_assisted_code" | "llm", "explanation": "Your brief reasoning here" } Note: The constraint itself is the primary basis for classification. The instruction paragraph is provided only as auxiliary context, and should be used only when the constraint alone is ambiguous or underspecified. Input Here is the full instruction paragraph where the constraint appears: {instruction} The constraint to classify: {constraint} Table 7: Prompt for classifying constraints by their verifiability type: directly by code, LLM-assisted code, or purely by LLM. 18 You are a helpful assistant that specializes in verifying whether model responses comply with specific constraints. Your task is to generate a yes/no question that can be used to determine whether a model response satisfies a given constraint. This question should be phrased so that an LLM (or a human evaluator) could answer it just by reading the model’s response. Please follow these rules: 1. The question should be clear, specific, and binary — it should be answerable with “yes” or “no”. 2. It must refer explicitly to what the constraint is checking (e.g., structure, length, tone, factuality). 
3. If the constraint refers to a specific section (e.g., “intro”, “step 1”, “conclusion”), include that in the question.
Return your answer in the following format:
{
  "validation_question": "{Your full yes/no question here}? Please answer YES/NO directly and do not enter anything else."
}
Note: The constraint itself is the primary basis for generation. The instruction paragraph is provided only as auxiliary context, and should be used only when the constraint alone is ambiguous or underspecified.
Instruction Paragraph (Context): This is the full instruction paragraph that provides context for the constraint: {instruction}
Constraint to Verify: {constraint}

Table 8: Prompt for generating a yes/no validation question from a constraint.

You are a helpful assistant that specializes in generating extraction instructions to support constraint verification. Your task is to generate a concise and precise instruction that tells an LLM what specific part of the response needs to be extracted, so that the extracted content can later be verified by code.
Please follow these guidelines:
1. The instruction should clearly specify what to extract from the response (e.g., “Extract the introduction part”, “Extract the function used”, “Extract the final answer sentence”).
2. Base your output on the constraint provided below.
3. You may refer to the instruction paragraph only when the constraint is ambiguous and requires context.
4. Return your output as a JSON
dictionary in the following format:
{
  "extraction_instruction": "{your generated extraction instruction}. Return the extracted content verbatim from the response. If multiple segments are found, return them as a Python-style list of strings. If nothing is found, return None."
}
Input:
Instruction Paragraph: Here is the full instruction paragraph where the constraint appears: {instruction}
Constraint to support: {constraint}
Helper analysis: {helper_analysis}

Table 9: Prompt for generating extraction instructions from a constraint to support code-based verification in hybrid evaluation.

You are tasked with implementing a Python function named check_following that determines whether a given response satisfies a specified constraint. The function must return True if the constraint is satisfied and False otherwise.
Requirements:
• The function accepts only one parameter: response, which is a Python string.
• The function must return a boolean value (True or False) based on whether the response adheres to the constraint.
• The function must not include any I/O operations, such as input() or ArgumentParser.
• The Python code for constraint verification should be designed to be generalizable, e.g., using regular expressions or other suitable techniques.
• Only return the exact Python code, with no additional explanations.
Note: The constraint itself is the primary basis for classification. The instruction paragraph is provided only as auxiliary context, and should be used only when the constraint alone is ambiguous or underspecified.
Instruction Paragraph: Here is the full instruction paragraph where the constraint appears: {instruction}
Specific Constraint to Verify: {constraint}

Table 10: Prompt for generating a Python function that verifies whether a response satisfies a specified constraint.

NeurIPS Paper Checklist
1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?
Answer: [Yes]
Justification: In Section 1 and Abstract.
Guidelines:
• The answer NA means that the abstract and introduction do not include the claims made in the paper.
• The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
• The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
• It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [Yes]
Justification: In Appendix A.
Guidelines:
• The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
• The authors are encouraged to create a separate "Limitations" section in their paper.
• The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
• The authors should reflect
on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
• The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
• The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
• If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
• While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren’t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
3. Theory assumptions and proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [NA]
Justification: This paper does not involve theoretical assumptions and proofs.
Guidelines:
• The answer NA means that the paper does not include theoretical results.
• All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
• All assumptions should be clearly stated or referenced in the statement of any theorems.
• The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
• Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
• Theorems and Lemmas that the proof relies upon should be properly referenced.
4. Experimental result reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: In Appendix E.
Guidelines:
• The answer NA means that the paper does not include experiments.
• If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
• If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
• Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is
a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
• While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [Yes]
Justification: In Abstract.
Guidelines:
• The answer NA means that the paper does not include experiments requiring code.
• Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• While we encourage the release of code and data, we understand that this might not be possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
• The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
• The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
• At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
• Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to
data and code is permitted.
6. Experimental setting/details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: In Appendix E.
Guidelines:
• The answer NA means that the paper does not include experiments.
• The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
• The full details can be provided either with the code, in appendix, or as supplemental material.
7. Experiment statistical significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [Yes]
Justification: In Appendix E, we explain that all sampling temperatures are set to 0, eliminating randomness.
Guidelines:
• The answer NA means that the paper does not include experiments.
• The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
• The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
• The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
• The assumptions made should be given (e.g., Normally distributed errors).
• It should be clear whether the error bar is the standard deviation or the standard error of the mean.
• It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
• For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
• If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
8. Experiments compute resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [Yes]
Justification: In Appendix E.
Guidelines:
• The answer NA means that the paper does not include experiments.
• The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
• The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
• The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn’t make it into the paper).
9. Code of ethics
Question: Does the research conducted in the paper conform, in
every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
Answer: [Yes]
Justification: In Appendix B.
Guidelines:
• The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
• If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
• The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
10. Broader impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [Yes]
Justification: In Appendix B.
Guidelines:
• The answer NA means that there is no societal impact of the work performed.
• If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
• Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
• The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
• The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
• If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [Yes]
Justification: In Appendix B.
Guidelines:
• The answer NA means that the paper poses no such risks.
• Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
• Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
• We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account
and make a best faith effort.
12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [Yes]
Justification: In Appendix B.
Guidelines:
• The answer NA means that the paper does not use existing assets.
• The authors should cite the original paper that produced the code package or dataset.
• The authors should state which version of the asset is used and, if possible, include a URL.
• The name of the license (e.g., CC-BY 4.0) should be included for each asset.
• For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
• If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
• For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
• If this information is not available online, the authors are encouraged to reach out to the asset’s creators.
13. New assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [Yes]
Justification: In the provided links in Abstract.
Guidelines:
• The answer NA means that the paper does not release new assets.
• Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
• The paper should discuss whether and how consent was obtained from people whose asset is used.
• At submission time, remember to anonymize your assets (if applicable).
You can either create an anonymized URL or include an anonymized zip file.
14. Crowdsourcing and research with human subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [Yes]
Justification: In Appendix B.
Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
• According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
15. Institutional review board (IRB) approvals or equivalent for research with human subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [Yes]
Justification: In Appendix B.
Guidelines:
• The answer NA means that the paper
does not involve crowdsourcing nor research with human subjects.
• Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
• We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
• For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
16. Declaration of LLM usage
Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
Answer: [Yes]
Justification: In Appendix B.
Guidelines:
• The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
• Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
On Multilingual Encoder Language Model Compression for Low-Resource Languages

Daniil Gurgurov1,2 Michal Gregor3 Josef van Genabith1,2 Simon Ostermann1,2,4
1Saarland University 2German Research Center for Artificial Intelligence (DFKI) 3Kempelen Institute of Intelligent Technologies (KInIT) 4Centre for European Research in Trusted AI (CERTAIN)
{daniil.gurgurov, josef.van_genabith, simon.ostermann}@dfki.de, michal.gregor@kinit.sk

Abstract
In this paper, we combine two-step knowledge distillation, structured pruning, truncation, and vocabulary trimming for extremely compressing multilingual encoder-only language models for low-resource languages. Our novel approach systematically combines existing techniques and takes them to the extreme, reducing layer depth, feed-forward hidden size, and intermediate layer embedding size to create significantly smaller monolingual models while retaining essential language-specific knowledge. We achieve compression rates of up to 92% with only a marginal performance drop of 2–10% in four downstream tasks, including sentiment analysis, topic classification, named entity recognition, and part-of-speech tagging, across three low-resource languages. Notably, the performance degradation correlates with the amount of language-specific data in the teacher model, with larger datasets resulting in smaller performance losses. Additionally, we conduct extensive ablation studies to identify best practices for multilingual model compression using these techniques.

1 Introduction
Small multilingual encoder language models (LMs), such as mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020), and Glot-500m (Imani et al., 2023), have demonstrated strong performance across a diverse range of low-resource languages (Hu et al., 2020; Asai et al., 2024), often outperforming large-scale proprietary models on various sequential tasks (Adelani et al., 2024; Gurgurov et al., 2025).
However, even these relatively compact multilingual models may still be excessively large for use in individual languages due to redundant capacity and expensive inference (Singh and Lefever, 2022; Cruz, 2025). To address this, we propose a novel combination of model compression approaches for transforming multilingual encoder-only models into maximally small, efficient, language-specific alternatives while retaining competitive performance. Our methodology integrates knowledge distillation (Hinton et al., 2015), structured pruning (Kim and Hassan, 2020; Hou et al., 2020), weight truncation, and vocabulary trimming (Abdaoui et al., 2020; Ushio et al., 2023) to systematically reduce model size by compressing the depth (number of layers), feed-forward intermediate width, hidden size, and the tokenizer vocabulary. Our experiments demonstrate that this pipeline achieves compression rates of up to 92%, with only a minor performance drop of 2–10% on downstream tasks such as sentiment analysis, topic classification, named entity recognition, and part-of-speech tagging. Notably, the extent of degradation depends more on the strength of the teacher model than on the compression itself.

Figure 1: Overview of our multilingual model compression methodology. We use (1) knowledge distillation to reduce layers, (2) structured pruning to eliminate redundant feed-forward network width, and (3) hidden size reduction and another round of knowledge distillation from the previous student model. Finally, (4) vocabulary trimming is applied to retain language-specific tokens.

Beyond compression, we investigate the impact of using multilingual versus monolingual teacher models, evaluate different initialization strategies for knowledge distillation, and analyze additional compression variables. Our findings contribute to the development of highly efficient, environmentally friendly models (Strubell et al., 2020) for low-resource languages and explore how strongly models can be compressed.

arXiv:2505.16956v1 [cs.CL] 22 May 2025
https://arxiv.org/abs/2505.16956v1

2 Methodology
In this section, we present our multilingual model compression strategy, illustrated in Figure 1. Our approach combines several existing compression techniques in a novel way that, to the best of our knowledge, has not been explored in this combination within the multilingual context.

2.1 Layer Reduction via Knowledge Distillation
We reduce the number of transformer layers in the teacher model by half to obtain an initial compact student model (Sanh et al., 2020). The student is initialized with the layers of the teacher and trained using a combination of Masked Language Modeling (MLM) (Devlin et al., 2019) and Mean Squared Error (MSE) loss for knowledge distillation (Hinton et al., 2015). Both losses are weighted equally (0.5). The teacher is a multilingual encoder fine-tuned on the target language (see Section 4).

2.2 Width Reduction via Structured Pruning
We apply structured pruning (Kim and Hassan, 2020) to reduce the intermediate size of the feed-forward layers from 3072 to 2048. Neuron importance is estimated using first-order gradient information accumulated from forward and backward passes over MLM validation data. At each layer, neurons are ranked by their absolute gradient values, and the least important ones are removed based on a target pruning ratio.

2.3 Hidden Size Compression with Secondary Knowledge Distillation
We compress the hidden embedding dimension from 768 to either 312, 456, or 564 via truncation, retaining the first k dimensions. A second round of knowledge distillation is then performed, using the width-reduced model from the previous step as the new teacher (Wang et al., 2023).

2.4 Vocabulary Reduction
We reduce the vocabulary size by selecting the top 40,000 most frequent tokens from a target-language corpus, along with their corresponding embeddings (Ushio et al., 2023).
This ensures that the resulting model retains only language-specific tokens.

3 Experiments
Below, we describe the datasets, languages, tasks, and baseline systems used in our evaluation.

3.1 Knowledge Distillation Data
We use GlotCC (Kargaran et al., 2025), a large-scale multilingual corpus derived mainly from CommonCrawl (Wenzek et al., 2020), as the primary dataset for both stages of knowledge distillation. Data distributions for the selected languages are reported in Appendix F. We use GlotCC for training, and the FLORES-200 development set (Team et al., 2022) for validation during training.

3.2 Languages and Tasks
We evaluate our models on four tasks: Topic Classification (TC), Sentiment Analysis (SA), Named Entity Recognition (NER), and Part-of-Speech Tagging (POS), covering three low-resource languages—Maltese, Slovak, and Swahili (Joshi et al., 2020). For TC, we use the 7-class SIB-200 dataset (Adelani et al., 2024), and for SA, we compile binary sentiment datasets from multiple sources (Dingli and Sant, 2016; Cortis and Davis, 2019; Pecar et al., 2019; Muhammad et al., 2023a,b). For NER, we use WikiANN (Pan et al., 2017), and for POS, we use Universal Dependencies v2.15 (de Marneffe et al., 2021) and MasakhaPOS (Dione et al., 2023). For all tasks, we train Sequential
https://arxiv.org/abs/2505.16956v1
Bottleneck task adapters (Pfeiffer et al., 2020) with fixed hyperparameters (see Appendix H). Per- formance is measured using macro-averaged F1 (Sokolova et al., 2006) for TC and SA, and "seqe- val" F1 (Nakayama, 2018) for NER and POS. 3.3 Models and Baselines We compress two encoder-only multilingual models–mBERT (Devlin et al., 2019) and XLM- R-base (Conneau et al., 2020)–adapted to target languages through fine-tuning on language-specific data, and compare the reduced models to two base- lines: (1) the original, non-adapted models, and (2) language-adapted versions. In both cases, we train an identical task adapter using the same task- specific datasets as for the compressed models. 4 Findings Our key findings are outlined below. 4.1 Distillation Distilling knowledge from a multilingual teacher into a monolingual student model is less effective than using a target-language adapted teacher, as ev- idenced by the differences in validation accuracies shown in Figure 2. This discrepancy possibly stems from the multilingual teacher’s broad cross-lingual representations, which are not directly aligned with the requirements of a monolingual student. In con- trast, monolingual teachers provide more targeted, language-specific representations, resulting in bet- ter student performance. (a) Maltese (mBERT) (b) Maltese (XLM-R) (c) Slovak (mBERT) (d) Slovak (XLM-R) (e) Swahili (mBERT) (f) Swahili (XLM-R) Figure 2: First-step KD validation accuracies for mBERT and XLM-R with models initialized using the lastklayers. mBERT- and XLM-R-mt, sk, sw refer to models adapted to the target language; distilled de- notes models trained with distillation loss, while student refers to identically trained models without distillation loss. The best accuracy is in all cases achieved when distilling from a target-language adapted model. 
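The two student-initialization schemes compared in these experiments (copying the last k teacher layers vs. taking every other layer) amount to simple index selections over the teacher's layer stack. The sketch below is our own illustration; the exact stride convention (which parity is kept) is an assumption, shown here as keeping the final layer:

```python
def last_k_layers(num_teacher_layers, k):
    """Indices of the last k teacher layers, used to initialize the student."""
    return list(range(num_teacher_layers - k, num_teacher_layers))

def stride_layers(num_teacher_layers, k):
    """Evenly strided teacher layers (every other layer when halving),
    chosen so that the final teacher layer is always kept."""
    step = num_teacher_layers // k
    return list(range(step - 1, num_teacher_layers, step))

# For a 12-layer teacher and a 6-layer student:
# last_k_layers(12, 6) -> [6, 7, 8, 9, 10, 11]
# stride_layers(12, 6) -> [1, 3, 5, 7, 9, 11]
```

In practice, the weights of the selected teacher layers (together with the embedding matrix) would be copied into the student before distillation begins.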
Distillation loss: We compare KL divergence and MSE as distillation loss functions, and observe that MSE leads to better and faster convergence (Appendix A), in line with prior work (Kim et al., 2021; Nityasya et al., 2022).

4.2 Weight Initialization

Weight initialization plays a crucial role in training the student model, with knowledge distillation providing only a marginal additional performance improvement (Figure 2). This partly aligns with the findings of Wibowo et al. (2024), who explored distilling multilingual abilities for multilingual tasks, whereas our focus is on monolingual distillation. Training a student-sized model initialized with teacher weights, but without knowledge distillation, results in only a slight performance drop compared to a fully distilled model.

Initialization Strategies: Among various initialization strategies, initializing the student with the last k layers for mBERT and every other layer (stride) for XLM-R consistently outperforms alternatives such as using the first k layers and combining first and last layers (Appendix B). Random initialization performs significantly worse, emphasizing the importance of weight reuse (Sun et al., 2019; Singh and Lefever, 2022).

4.3 Pruning and Truncation

Distilled models can be compressed further using structured pruning, hidden size reduction, and vocabulary trimming, while maintaining competitive performance.

Intermediate size reduction: Reducing the intermediate size of feed-forward layers from 3072 to 2048 via structured pruning results in negligible performance loss (Table 1). However, more aggressive reductions degrade quality significantly, making 2048 a practical lower bound. We do
not prune attention heads, as removing even a minimal number (e.g., three) causes severe degradation.

Hidden size reduction: We reduce the hidden embedding size to 564, 456, and 312,¹ truncating it to the first k dimensions. Training is performed under the supervision of the student from the previous stage. We find that using the original teacher leads to worse results, possibly due to the bigger knowledge gap (Wang et al., 2023). We also tested SVD-based dimensionality reduction but found truncation to be more effective (see Appendix C).

Vocabulary trimming: Restricting the vocabulary to the top 40K most frequent tokens for each target language introduces no measurable performance loss compared to the previous step, while further improving efficiency. Reducing below 40K works for some languages but does not generalize well across all cases (Appendix E), consistent with Ushio et al. (2023).

¹The hidden size must be divisible by the number of attention heads.

Model Configuration | Red. | Size | mt TC | mt SA | mt NER | mt POS | sk TC | sk SA | sk NER | sk POS | sw TC | sw SA | sw NER | sw POS | Avg
Base | 0% | 279M | 68.12 | 56.04 | 54.29 | 89.94 | 88.11 | 95.55 | 91.09 | 97.31 | 78.44 | 81.46 | 84.64 | 89.38 | 81.20
Base-[mt, sk, sw] | 0% | 279M | 84.96 | 76.20 | 69.18 | 95.41 | 86.23 | 94.79 | 91.01 | 97.09 | 87.52 | 84.08 | 82.67 | 89.22 | 86.53
KD layer red. ×2 | -15% | 236M | 84.05 | 77.20 | 63.47 | 94.25 | 86.33 | 92.88 | 90.12 | 96.26 | 82.89 | 81.29 | 82.85 | 89.22 | 85.07
inter. layer red. →2048 | -20% | 226M | 84.74 | 78.63 | 60.12 | 94.16 | 86.14 | 93.35 | 89.98 | 96.05 | 82.39 | 82.70 | 83.62 | 89.54 | 85.12
* KD hid. size red. →564 | -40% | 163M | 83.43 | 74.86 | 52.97 | 93.73 | 84.89 | 92.74 | 89.09 | 96.78 | 85.84 | 80.96 | 80.80 | 89.42 | 83.79
vocab. red. →40k | -85% | 45M | 84.11 | 72.39 | 60.92 | 93.00 | 85.29 | 92.92 | 89.25 | 96.42 | 85.71 | 80.94 | 81.96 | 89.12 | 84.34
* KD hid. size red. →456 | -53% | 131M | 78.48 | 69.91 | 62.54 | 92.72 | 85.99 | 92.99 | 88.30 | 96.28 | 83.06 | 79.25 | 80.69 | 88.94 | 83.26
vocab. red. →40k | -87% | 35M | 78.47 | 70.68 | 63.25 | 92.51 | 86.10 | 92.91 | 88.35 | 96.29 | 82.49 | 79.02 | 80.19 | 89.00 | 83.27
* KD hid. size red. →312 | -68% | 89M | 66.94 | 70.05 | 35.73 | 87.58 | 83.98 | 90.93 | 88.02 | 95.51 | 76.37 | 80.11 | 80.66 | 88.27 | 78.68
vocab. red. →40k | -92% | 23M | 67.22 | 71.35 | 37.06 | 87.49 | 84.00 | 90.52 | 88.20 | 95.56 | 77.95 | 80.47 | 79.18 | 88.01 | 78.92

Table 1: Performance comparison before and after model reduction for XLM-R-base across Maltese (mt), Slovak (sk), and Swahili (sw) for TC, SA, NER, and POS. Red. shows the parameter reduction compared to the base model. All scores are averaged over three runs with different random seeds. The results for mBERT are in Appendix J.

4.4 Downstream Performance

Our results show that model compression through knowledge distillation, structured pruning, and vocabulary reduction leads to only a modest performance drop (Table 1). We show that the extent of degradation varies by language and is influenced not just by the amount of distillation data, but also by the strength of the teacher model, which depends on the fine-tuning data size (Appendix F). Maltese, with the smallest fine-tuning dataset, exhibits the largest performance drops, suggesting weaker knowledge transfer from less-trained teachers and potential reliance on multilingual knowledge. In contrast, Slovak—benefiting from the
largest fine-tuning dataset—remains robust even under the maximum compression, indicating that a stronger teacher model facilitates better compression outcomes. Swahili, with a moderate dataset size, follows an intermediate pattern. The results support a relationship between fine-tuning data volume and the model's resilience to compression.

Adapter capacity: We additionally experiment with varying the reduction factor r to adjust task adapter capacity (see Appendix I and Figure 10). We find that r = 16 performs well for larger models, while smaller models (e.g., with hidden sizes 564, 456 and 312) benefit from reduced r values (e.g., r = 2), resulting in modest performance gains. The results for smaller models in Tables 1 and 6 are reported using this lower reduction factor.

5 Related Work

In knowledge distillation, a smaller student model is trained to replicate the behavior of a larger teacher model (Hinton et al., 2015), often combining MLM loss with teacher supervision (Sun et al., 2019; Sanh et al., 2020). DistilBERT (Sanh et al., 2020) reduces model size by selecting every other layer from BERT (Devlin et al., 2019) and distills on large corpora using dynamic masking. Patient distillation further improves results by matching intermediate representations (Sun et al., 2019).

Recent work has explored distilling multilingual models into compact monolingual models. Singh and Lefever (2022) train student models for languages such as Swahili and Slovenian using a composite loss (distillation, cosine, MLM), and show that distilled models often outperform mBERT while using a reduced vocabulary (Abdaoui et al., 2020). Ansell et al. (2023) introduce a two-phase bilingual distillation pipeline, combining general-purpose and task-specific guidance with sparse fine-tuning, outperforming multilingual baselines.

Other studies emphasize the role of initialization. Wibowo et al. (2024) show that copying teacher weights is more effective than random initialization in the context of multilingual distillation, and that MSE outperforms KL divergence for distillation. Cruz (2025) similarly distills mBERT for Tagalog and highlights the nuanced impact of embedding initialization.

6 Conclusion

We present an effective compression pipeline for multilingual encoder models designed for low-resource languages. By integrating staged knowledge distillation, structured pruning, hidden size truncation, and vocabulary reduction, we compress models by up to 92% while maintaining competitive performance, typically within 2–10% of the original, on four downstream tasks.

Limitations and Future Work

Future research should explore optimal strategies for transferring intermediate hidden size knowledge, possibly by applying structured pruning before distillation and identifying the most important components to keep instead of simply truncating them. Additionally, identifying language-specific layers and neurons and leveraging them for more targeted pruning could enhance compression efficiency while maintaining performance. Lastly, more advanced knowledge distillation methods should be experimented with.

Acknowledgements

This research was supported by DisAI - Improving scientific excellence and creativity in combating disinformation with artificial intelligence and language technologies, a Horizon Europe-funded project under GA No. 101079164, and by the German Ministry of Education and Research (BMBF) as part of the project TRAILS (01IW24005).

References

Amine Abdaoui, Camille Pradel, and Grégoire Sigel. 2020. Load what you need: Smaller versions
of multilingual BERT. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 119–123, Online. Association for Computational Linguistics.

David Ifeoluwa Adelani, Hannah Liu, Xiaoyu Shen, Nikita Vassilyev, Jesujoba O. Alabi, Yanke Mao, Haonan Gao, and En-Shiun Annie Lee. 2024. SIB-200: A simple, inclusive, and big evaluation dataset for topic classification in 200+ languages and dialects. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 226–245, St. Julian's, Malta. Association for Computational Linguistics.

Alan Ansell, Edoardo Maria Ponti, Anna Korhonen, and Ivan Vulić. 2023. Distilling efficient language-specific models for cross-lingual transfer.

Akari Asai, Sneha Kudugunta, Xinyan Yu, Terra Blevins, Hila Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, and Hannaneh Hajishirzi. 2024. BUFFET: Benchmarking large language models for few-shot cross-lingual transfer. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1771–1800, Mexico City, Mexico. Association for Computational Linguistics.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, Online. Association for Computational Linguistics.

Keith Cortis and Brian Davis. 2019. A social opinion gold standard for the Malta government budget 2018. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 364–369, Hong Kong, China. Association for Computational Linguistics.

Jan Christian Blaise Cruz. 2025. Extracting general-use transformers for low-resource languages via knowledge distillation. In Proceedings of the First Workshop on Language Models for Low-Resource Languages, pages 219–224, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. Computational Linguistics, 47(2):255–308.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Alexiei Dingli and Nicole Sant. 2016. Sentiment analysis on maltese using machine learning. In Proceedings of The Tenth International Conference on Advances in Semantic Processing (SEMAPRO 2016), pages 21–25.

Cheikh M. Bamba Dione, David Ifeoluwa Adelani, Peter Nabende, Jesujoba Alabi, Thapelo Sindane, Happy Buzaaba, Shamsuddeen Hassan Muhammad, Chris Chinenye Emezue, Perez Ogayo, Anuoluwapo Aremu, Catherine Gitau, Derguene Mbaye, Jonathan Mukiibi, Blessing Sibanda, Bonaventure F. P. Dossou, Andiswa Bukula, Rooweither Mabuya, Allahsera Auguste Tapo, Edwin Munkoh-Buabeng, Victoire Memdjokam Koagne, Fatoumata Ouoba Kabore, Amelia Taylor, Godson Kalipe, Tebogo Macucwa, Vukosi Marivate, Tajuddeen Gwadabe, Mboning Tchiaze Elvis, Ikechukwu Onyenwe, Gratien Atindogbe, Tolulope Adelani, Idris Akinade, Olanrewaju Samuel, Marien Nahimana, Théogène Musabeyezu, Emile Niyomutabazi, Ester Chimhenga, Kudzai Gotosa, Patrick Mizha, Apelete
Agbolo, Seydou Traore, Chinedu Uchechukwu, Aliyu Yusuf, Muhammad Abdullahi, and Dietrich Klakow. 2023. MasakhaPOS: Part-of-speech tagging for typologically diverse African languages. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10883–10900, Toronto, Canada. Association for Computational Linguistics.

Daniil Gurgurov, Ivan Vykopal, Josef van Genabith, and Simon Ostermann. 2025. Small models, big impact: Efficient corpus and graph-based adaptation of small multilingual language models for low-resource languages.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network.

Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. 2020. Dynabert: Dynamic bert with adaptive width and depth. Advances in Neural Information Processing Systems, 33:9782–9793.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411–4421. PMLR.

Ayyoob Imani, Peiqin Lin, Amir Hossein Kargaran, Silvia Severini, Masoud Jalili Sabet, Nora Kassner, Chunlan Ma, Helmut Schmid, André Martins, François Yvon, and Hinrich Schütze. 2023. Glot500: Scaling multilingual corpora and language models to 500 languages. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1082–1117, Toronto, Canada. Association for Computational Linguistics.

Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics.

Amir Hossein Kargaran, François Yvon, and Hinrich Schütze. 2025. Glotcc: An open broad-coverage commoncrawl corpus and pipeline for minority languages.

Taehyeon Kim, Jaehoon Oh, NakYil Kim, Sangwook Cho, and Se-Young Yun. 2021. Comparing kullback-leibler divergence and mean squared error loss in knowledge distillation.

Young Jin Kim and Hany Hassan. 2020. FastFormers: Highly efficient transformer models for natural language understanding. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 149–158, Online. Association for Computational Linguistics.

Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Abinew Ali Ayele, Nedjma Ousidhoum, David Ifeoluwa Adelani, Seid Muhie Yimam, Ibrahim Sa'id Ahmad, Meriem Beloucif, Saif Mohammad, Sebastian Ruder, Oumaima Hourrane, Pavel Brazdil, Felermino Dário Mário António Ali, Davis Davis, Salomey Osei, Bello Shehu Bello, Falalu Ibrahim, Tajuddeen Gwadabe, Samuel Rutunda, Tadesse Belay, Wendimu Baye Messelle, Hailu Beshada Balcha, Sisay Adugna Chala, Hagos Tesfahun Gebremichael, Bernard Opoku, and Steven Arthur. 2023a. Afrisenti: A twitter sentiment analysis benchmark for african languages.

Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Seid Muhie Yimam, David Ifeoluwa Adelani, Ibrahim Sa'id Ahmad, Nedjma Ousidhoum, Abinew Ayele, Saif M Mohammad, and Meriem Beloucif. 2023b. Semeval-2023 task 12: Sentiment analysis for african languages (afrisenti-semeval). arXiv preprint arXiv:2304.06845.

Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.

Made Nindyatama Nityasya, Haryo Akbarianto Wibowo, Rendi Chevi, Radityo Eko Prasojo, and
Alham Fikri Aji. 2022. Which student is best? a comprehensive knowledge distillation exam for task-specific bert models.

Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics.

Samuel Pecar, Marian Simko, and Maria Bielikova. 2019. Improving sentiment classification in Slovak language. In Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing, pages 114–119, Florence, Italy. Association for Computational Linguistics.

Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654–7673, Online. Association for Computational Linguistics.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.

Pranaydeep Singh and Els Lefever. 2022. When the student becomes the master: Learning better and smaller monolingual models from mBERT. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4434–4441, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.

Marina Sokolova, Nathalie Japkowicz, and Stan Szpakowicz. 2006. Beyond accuracy, f-score and roc: a family of discriminant measures for performance evaluation. In Australasian joint conference on artificial intelligence, pages 1015–1021. Springer.

Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2020. Energy and policy considerations for modern deep learning research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09):13693–13696.

Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model compression.

NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation.

Asahi Ushio, Yi Zhou, and Jose Camacho-Collados. 2023. Efficient multilingual language model compression through vocabulary trimming. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14725–14739, Singapore. Association for Computational Linguistics.

Maorong Wang, Hao Yu, Ling Xiao, and Toshihiko Yamasaki. 2023. Bridging the capacity gap for online knowledge distillation. In 2023 IEEE 6th International Conference on Multimedia Information Processing and Retrieval (MIPR), pages 1–4.

Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4003–4012, Marseille, France. European Language Resources Association.

Haryo
Akbarianto Wibowo, Thamar Solorio, and Alham Fikri Aji. 2024. The privileged students: On the value of initialization in multilingual knowledge distillation.

A KL Divergence vs MSE for Knowledge Distillation

Figure 3: MSE vs. KD validation accuracy for mBERT with the models initialized using the last k layers (panels: (a) Maltese, (b) Slovak, (c) Swahili).

Figure 4: MSE vs. KD validation accuracy for XLM-R with the models initialized using the last k layers (panels: (a) Maltese, (b) Slovak, (c) Swahili).

B Initialization Strategies for Knowledge Distillation

Figure 5: Validation accuracy for various initialization strategies for mBERT (panels: (a) Maltese, (b) Slovak, (c) Swahili).

Figure 6: Validation accuracy for various initialization strategies for XLM-R (panels: (a) Maltese, (b) Slovak, (c) Swahili).

C SVD vs. Truncation for Hidden Size Reduction

Figure 7: Validation accuracy comparing SVD and first-k truncation initializations for mBERT models with reduced hidden size (k=312). Results show performance with two distillation approaches: using a layer-compressed model (Small teacher) or the original language-adapted model (Big teacher). Student models were trained with identical initialization methods but without knowledge distillation.

D Alpha Parameter in Knowledge Distillation

Figure 8: Validation accuracy curves showing the impact of the alpha parameter on knowledge distillation performance for mBERT (panel a) and XLM-R (panel b) on Maltese with the last k and stride initialization strategies for the two models respectively.

We find that the α parameter does not have a significant impact on mBERT during pre-training, with α = 0.5 yielding consistently good results. For XLM-R, higher values of α (i.e., 0.6 and 0.8), which reduce the strength of the distillation effect, show slightly improved validation accuracy trends compared to lower values.
In our experiments, we adopt the default setting of α = 0.5, leaving a more comprehensive exploration of optimal values across different languages, dataset sizes, and model architectures to future work.

E Vocabulary Reduction Analysis

Figure 9: Impact of vocabulary reduction on TC performance for mBERT models reduced to a hidden size of 312 (panels: (a) Maltese, (b) Slovak, (c) Swahili).

F Knowledge Distillation Data Sizes

Language | KD Data Size (MB) | FT Data Size (MB)
Maltese (mt) | 238 | 188
Slovak (sk) | 535 | 1032
Swahili (sw) | 402 | 332

Table 2: Dataset sizes for knowledge distillation (KD) and monolingual fine-tuning (FT) for each language. The language-adapted models are sourced from Gurgurov et al. (2025), and the FT data sizes are as reported by them.

G Downstream Task Data Sizes

Language | Train | Validation | Test
Text Classification (TC)
Maltese (mt) | 701 | 99 | 204
Slovak (sk) | 701 | 99 | 204
Swahili (sw) | 701 | 99 | 204
Sentiment Analysis (SA)
Maltese (mt) | 595 | 85 | 171
Slovak (sk) | 3560 | 522 | 1042
Swahili (sw) | 738 | 185 | 304
Named Entity Recognition (NER)
Maltese (mt) | 100 | 100 | 100
Slovak (sk) | 20000 | 10000 | 10000
Swahili (sw) | 1000 | 1000 | 1000
Part of Speech Tagging (POS)
Maltese (mt) | 1123 | 433 | 518
Slovak (sk) | 8483 | 1060 | 1061
Swahili (sw) | 675 | 134 | 539

Table 3: Fine-tuning data sizes for each task (Text Classification, Sentiment Analysis, Named Entity Recognition, Part of Speech Tagging) showing train, validation, and test splits across Maltese, Slovak, and Swahili.

H Downstream Task Hyperparameters

Hyperparameter | TC | SA | NER | POS
Learning rate | 1e-4 | 1e-4 | 3e-4 | 3e-4
Batch size | 16 | 16 | 64 | 64
Epochs | 20 | 20 | 100 | 100
Maximum length | 256 | 256 | 512 | 512

Table 4: Hyperparameters for task adapter fine-tuning across Text Classification (TC), Sentiment Analysis (SA), Named Entity Recognition (NER), and Part-of-Speech Tagging (POS) tasks.

I Adapter Trainable Parameter Counts

To examine whether the constrained task adapter capacity, as shown in Table 5, impacts downstream performance in compressed models, we vary the reduction factor r, thereby increasing adapter size (see Figure 10). We train task adapters on top of both full adapted models and hidden-size reduced models (564, 456, and 312). For the smallest models (456 and 312), we observe that increasing adapter capacity (r=2) leads to improved performance. However, this increase is unnecessary for larger mBERT variants (full and 564), while still beneficial for all small XLM-R models. These results suggest that for smaller models, increasing adapter capacity can yield modest performance gains. Tables 1 and 6 report results using the default reduction rate of 16.

Model Configuration | Task Adapter Size (mBERT) | Task Adapter Size (XLM-R)
Base | 894,528 | 894,528
Base-[mt, sk, sw] | 894,528 | 894,528
KD layer red. ×2 | 447,264 | 447,264
inter. layer red. →2048 | 447,264 | 447,264
* KD hid. size red. →564 | 240,474 | 240,474
vocab. red. →40k | 240,474 | 240,474
* KD hid. size red. →456 | 156,120 | 156,120
vocab. red. →40k | 156,120 | 156,120
* KD hid. size red. →312 | 73,122 | 73,122
vocab. red. →40k | 73,122 | 73,122

Table 5: Task adapter parameter sizes across different model compression configurations for mBERT and XLM-R with the default reduction factor of 16. When the hidden size is reduced, adapter input/output dimensions decrease proportionally. When the layer count is reduced, fewer adapters are added to the model. All other parameters use the default settings for the Sequential Bottleneck adapter as implemented in AdapterHub.

Figure 10: Performance of models on TC for Maltese with varying adapter capacity (panels: (a) mBERT, (b) XLM-R).
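The adapter sizes in Table 5 can be reproduced from the bottleneck-adapter shape: per layer, a down-projection (hidden × bottleneck plus bias) and an up-projection (bottleneck × hidden plus bias), summed over the remaining layers. The helper below is our own reconstruction (with the bottleneck dimension floored when hidden/r is not an integer), not the AdapterHub implementation itself:

```python
def adapter_params(hidden_size, num_layers=6, r=16):
    """Trainable parameters of Sequential Bottleneck task adapters:
    per layer, down-projection (h*b + b) plus up-projection (b*h + h),
    where b = floor(h / r) is the bottleneck dimension."""
    b = hidden_size // r
    per_layer = hidden_size * b + b + b * hidden_size + hidden_size
    return num_layers * per_layer

# Matches the counts in Table 5:
# adapter_params(768, num_layers=12) -> 894528  (12-layer base models)
# adapter_params(768)                -> 447264  (after layer reduction)
# adapter_params(564)                -> 240474
# adapter_params(456)                -> 156120
# adapter_params(312)                ->  73122
```

This also makes explicit why hidden-size reduction shrinks the adapters roughly quadratically (h·b ≈ h²/r per projection), while halving the layer count only halves them.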
J Downstream Results for mBERT

Model Configuration | Red. | Size | mt TC | mt SA | mt NER | mt POS | sk TC | sk SA | sk NER | sk POS | sw TC | sw SA | sw NER | sw POS | Avg.
Base | 0% | 179M | 68.73 | 65.82 | 59.96 | 89.04 | 85.31 | 91.98 | 91.44 | 97.01 | 69.60 | 64.62 | 83.81 | 87.61 | 79.58
Base-[mt, sk, sw] | 0% | 179M | 84.91 | 73.55 | 65.00 | 93.96 | 86.26 | 91.87 | 90.42 | 96.87 | 86.68 | 81.28 | 82.49 | 88.69 | 85.17
KD layer red. ×2 | -25% | 135M | 80.14 | 73.91 | 58.96 | 93.17 | 85.42 | 90.38 | 87.42 | 96.87 | 82.78 | 77.27 | 80.66 | 88.42 | 82.95
inter. layer red. →2048 | -30% | 126M | 79.03 | 74.74 | 58.13 | 92.74 | 85.25 | 90.23 | 88.48 | 96.67 | 83.21 | 75.90 | 79.76 | 88.54 | 82.72
* KD hid. size red. →564 | -50% | 90M | 79.47 | 70.22 | 61.12 | 92.64 | 83.36 | 90.48 | 88.07 | 96.30 | 83.50 | 76.10 | 79.74 | 88.42 | 82.45
vocab. red. →40k | -75% | 45M | 80.17 | 70.76 | 61.07 | 92.46 | 83.50 | 90.72 | 87.73 | 96.26 | 84.34 | 76.01 | 80.26 | 88.63 | 82.66
* KD hid. size red. →456 | -60% | 71M | 80.23 | 70.05 | 57.18 | 92.13 | 83.94 | 90.40 | 87.49 | 95.86 | 85.08 | 78.57 | 80.28 | 88.25 | 82.45
vocab. red. →40k | -80% | 35M | 80.95 | 69.72 | 55.87 | 92.00 | 84.22 | 90.39 | 87.38 | 95.95 | 83.03 | 78.50 | 79.79 | 88.37 | 82.18
* KD hid. size red. →312 | -73% | 48M | 73.14 | 71.98 | 39.50 | 90.33 | 80.91 | 90.40 | 86.48 | 95.48 | 81.75 | 76.48 | 79.56 | 87.72 | 79.48
vocab. red. →40k | -87%
MedFrameQA: A Multi-Image Medical VQA Benchmark for Clinical Reasoning

Suhao Yu1∗ Haojin Wang2∗ Juncheng Wu3∗ Cihang Xie3 Yuyin Zhou3
1University of Pennsylvania 2University of Illinois Urbana-Champaign 3UC Santa Cruz
Project Page: https://ucsc-vlaa.github.io/MedFrameQA/
Benchmark: https://huggingface.co/datasets/SuhaoYu1020/MedFrameQA

Abstract

Existing medical VQA benchmarks mostly focus on single-image analysis, yet clinicians almost always compare a series of images before reaching a diagnosis. To better approximate this workflow, we introduce MedFrameQA—the first benchmark that explicitly evaluates multi-image reasoning in medical VQA. To build MedFrameQA both at scale and at high quality, we develop 1) an automated pipeline that extracts temporally coherent frames from medical videos and constructs VQA items whose content evolves logically across images, and 2) a multiple-stage filtering strategy, including model-based and manual review, to preserve data clarity, difficulty, and medical relevance. The resulting dataset comprises 2,851 VQA pairs (gathered from 9,237 high-quality frames in 3,420 videos), covering nine human body systems and 43 organs; every question is accompanied by two to five images. We comprehensively benchmark ten advanced Multimodal LLMs—both proprietary and open source, with and without explicit reasoning modules—on MedFrameQA. The evaluation reveals that all models perform poorly, with most accuracies below 50%, and that accuracy fluctuates as the number of images per question increases. Error analysis further shows that models frequently ignore salient findings, mis-aggregate evidence across images, and propagate early mistakes through their reasoning chains; results also vary substantially across body systems, organs, and modalities. We hope this work can catalyze research on clinically grounded, multi-image reasoning and accelerate progress toward more capable diagnostic AI systems.
1 Introduction

Multimodal Large Language Models (MLLMs) have quickly emerged as a powerful paradigm for enabling advanced AI systems in clinical and medical domains [1–7]. Yet the benchmarks used to measure their progress capture only a fraction of real clinical practice. Nearly every established medical visual question answering (VQA) dataset—e.g., [8–15]—asks questions about a single image presented in isolation. The left panel of Figure 1 shows a typical SLAKE [12] example whose answer requires nothing more than basic object recognition in one frame. In everyday care, however, clinicians rarely rely on a lone snapshot; they routinely compare multiple images taken from different views, modalities, or time points before making a diagnosis.

Only recently has the vision community begun to tackle multi-image VQA. A handful of new benchmarks—such as [16, 17, 18]—include questions that reference more than one picture. Yet their tasks still fall short of the integrative reasoning medicine demands, as the images in these benchmarks are typically treated as separate clues rather than as innately complementary pieces of a single, coherent scenario.

∗Equal contribution. Preprint. arXiv:2505.16964v1 [cs.CV] 22 May 2025

[Figure 1: Comparison of medical VQA benchmarks. MedFrameQA introduces multi-image, clinically grounded questions that require comprehensive reasoning across all images. Unlike prior benchmarks such as SLAKE [12] and MedXpertQA [18], it emphasizes diagnostic complexity, expert-level knowledge, and explicit reasoning chains. The figure contrasts a single-image SLAKE item, a loosely paired MedXpertQA item (multi-frame rate 1.43), and a MedFrameQA item (multi-frame rate 3.24) whose answer comes with a cross-image rationale.]

The MedXpertQA example in the middle panel of Figure 1 illustrates this gap: the two images share no obvious physiological connection or causal chain, so a model can answer correctly without genuinely synthesizing information from both. Consequently, success on such datasets says little about a system's ability to perform the integrative, cross-image reasoning required for real diagnostic practice.

To bridge this gap, we introduce MedFrameQA, the first benchmark explicitly designed to test multi-image reasoning in medical VQA. To construct logically coherent and clinically grounded questions, we leverage YouTube—a widely
accessible platform for medical education [19, 20, 21]—which provides a rich repository of medical and clinical videos with temporally and semantically connected visual content. Building on this insight and drawing inspiration from prior work [22], we propose a VQA generation pipeline that automatically constructs multi-image VQA questions from keyframes extracted from 3,420 medical videos, spanning 9 human body systems and 43 organs across diverse anatomical regions. We first curated videos ranging from 5 minutes to 2 hours using 114 combinatorial search queries across imaging modalities and clinical findings. Keyframes were then extracted and rigorously filtered by GPT-4o for image quality, medical relevance, informativeness, and privacy. Audio narrations were transcribed, temporally aligned to frames within a defined margin, and refined by GPT-4o for clinical accuracy. Consecutive frame-caption pairs with a shared clinical focus were merged into multi-frame clips to preserve narrative continuity. GPT-4o then generated multiple-choice VQA items for each clip, requiring advanced clinical reasoning and multi-image synthesis. A final two-stage filtering process—automated difficulty filtering via state-of-the-art MLLMs and manual quality control—ensured a challenging, high-quality VQA benchmark tailored for medical imaging content.

This data curation pipeline yields MedFrameQA, which consists of 2,851 challenging multi-image VQA questions requiring complex and comprehensive reasoning across multiple images. As illustrated in the right panel of Figure 1, each item in MedFrameQA bundles a natural-language query with multiple frames, eliminating single-image shortcuts. Moreover, to enable transparent evaluation, we also provide gold-standard rationales derived from the source video transcripts, explicitly linking each image to the answer.

Table 1: Comparison of MedFrameQA with Existing Benchmarks.
MedFrameQA supports multi-image reasoning within real-world clinical video scenarios and paired reasoning across frames. The paired reasoning in MedFrameQA is derived from the transcripts of the original video clips.

Benchmark             # Images   # Questions   # Rate   Multi-Image   Real-World Scenarios   Paired Reasoning Across Multi Images
VQA-RAD [8]               315        3,515      0.09         ✗                ✗                        ✗
VQA-Med-2019 [9]          500          500      1.00         ✗                ✗                        ✗
VQA-Med-2021 [10]         500          500      1.00         ✗                ✗                        ✗
PathVQA [11]              858        6,719      0.13         ✗                ✗                        ✗
SLAKE-En [12]              96        1,061      0.09         ✗                ✗                        ✗
PMC-VQA [13]           29,021       33,430      0.87         ✗                ✗                        ✗
OmniMedVQA [14]       118,010      127,995      0.92         ✗                ✗                        ✗
GMAI-MMBench [15]      21,180       21,281      1.00         ✗                ✗                        ✗
MMMU (H&M) [16]         1,994        1,752      1.14         ✓                ✗                        ✓
MMMU-Pro (H&M) [17]       431          346      1.25         ✓                ✗                        ✓
MedXpertQA MM [18]      2,852        2,000      1.43         ✓                ✓                        ✗
MedFrameQA              9,237        2,851      3.24         ✓                ✓                        ✓

We benchmark ten state-of-the-art MLLMs, including both open-source and proprietary models as well as the latest reasoning-oriented MLLMs, and find that their performance on MedFrameQA remains mostly below 50% accuracy. To summarize, our key contributions are as follows:

• We introduce MedFrameQA, a challenging benchmark dedicated to multi-image medical VQA, explicitly designed to evaluate the reasoning capabilities of MLLMs across multiple clinically relevant images, similar to real-world diagnostic scenarios.
• We develop a scalable pipeline that automatically constructs multi-image, clinically grounded VQA questions from medical education videos, using GPT-4o for image filtering, caption alignment, question generation, and quality control.
• We benchmark ten state-of-the-art MLLMs on MedFrameQA and find that their accuracies mostly fall below 50%, with substantial performance variation across different body systems, organs, and modalities, highlighting the gap between current model capabilities and real-world diagnostic requirements.

2 Related Work

Reasoning Multimodal Large Language Models. With advances in models and benchmarks, interest in the reasoning capabilities of MLLMs has grown [23–26]. Recent MLLMs now support medical reasoning tasks like clinical decision-making, chain-of-thought generation, and diagnostic inference [27]. LLaVA-Med [3] and GPT-4V [28] show generalist abilities in radiology and biomedical VQA but often lack interpretable reasoning. MedCoT [29] addresses this with a multi-expert prompting framework that improves rationale quality and accuracy. MedVLM-R1 [30] applies reinforcement learning to encourage plausible rationales without ground truth, improving radiology QA. Med-Gemini [5] combines domain-adapted prompting with long-context modeling for complex cross-modal inference. These advancements in applying MLLMs to medical reasoning tasks underscore the critical need for rigorous benchmarks that effectively evaluate their reasoning capabilities.

Multimodal Medical Benchmarks. Existing benchmarks for evaluating MLLMs in the medical domain remain limited in scope. Most notably, the majority focus on single-image question-answering tasks. For example, VQA-RAD [8], VQA-Med-2019 [9], VQA-Med-2021 [10], and SLAKE [12] primarily target single-image VQA tasks within the radiology domain, while PathVQA [11] is dedicated exclusively to pathology. With the rapid advancement of MLLMs, more generalized benchmarks such as PMC-VQA [13], OmniMedVQA [14], and GMAI-MMBench [15] have been introduced to assess broader model capabilities across diverse medical fields.
However, these benchmarks remain limited, as they primarily focus on single-image VQA tasks—falling short of reflecting the demands of real-world medical applications. Recent efforts such as MMMU (H&M) [16], MMMU-Pro (H&M) [17], and MedXpertQA MM [18] have incorporated multi-image VQA tasks. Nonetheless, their construction overlooks the critical need for clinical reasoning across multiple images—a core requirement in real-world diagnostic settings. We provide a comprehensive comparison of MedFrameQA with existing benchmarks in Table 1.

[Figure 2: Our data generation pipeline. (a) Medical Video Collection: collecting 3,420 medical videos via clinical search queries (Section 3.1). (b) Frame-Caption Pairing: extracting keyframes and aligning them with transcribed captions (Section 3.2). (c) Multi-Frame Merging: merging clinically related frame-caption pairs into multi-frame clips (Section 3.3). (d) Question-Answer Generation: generating multi-image VQA from the multi-frame clips (Section 3.4).]

Video Data for Medical Benchmarking. Recent studies have advanced the use of video data for medical dataset construction and benchmarking. Speech recognition models like Whisper [31] have made it easier to extract data from videos [32, 33]. Quilt-1M [22] collected one million paired image-text samples from histopathology YouTube videos. MedVidQA [34] and NurViD [35] target
instructional and nursing procedures. AVOS [36] includes 1,997 annotated open-surgery videos across 23 procedures and 50 countries. MIMIC-IV-ECHO [37] provides echocardiography videos for cardiac imaging research. Cataract-1K [38] consists of 1,000 videos of cataract surgeries conducted in the eye clinic from 2021 to 2023. Despite advancements in video dataset construction, limited attention has been paid to leveraging video data for benchmarking MLLMs in the medical domain. YouTube's rich medical content [19, 20, 21] offers natural reasoning chains for multi-frame VQA evaluation. To this end, we utilize YouTube videos and design a VQA generation pipeline that automatically constructs multi-image VQA questions, aiming to assess the reasoning capabilities of MLLMs across complex multi-image scenarios.

3 MedFrameQA Benchmark

3.1 Medical Video Collection

As the first step in building MedFrameQA, we assemble a large pool of clinically relevant videos from YouTube (illustrated in Figure 2(a)). Specifically, we curate 114 carefully designed search queries, each formed by pairing a common imaging modality (e.g., MRI, X-ray, CT, and radiograph) with a frequently encountered disease or finding (e.g., brain tumor, pneumonia, chest, and bone fracture). This combinatorial list gives broad coverage of routine diagnostic scenarios; the full set of keywords is provided in Appendix C. Then, for every query, we retrieve the top results and discard clips shorter than 5 minutes or longer than 2 hours. The remaining corpus comprises 1,971 high-resolution, narration-rich medical videos that serve as the raw material for MedFrameQA.

3.2 Frame-Caption Pairing

Medical Frame Extraction. To process the raw video collected, the first task is to identify the corresponding medical frames.
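A keyframe-extraction step of this kind is commonly built on FFmpeg's scene-change filter, as described next. A minimal command-construction sketch, where the 0.4 scene threshold, the file names, and the output pattern are illustrative assumptions rather than the paper's settings:

```python
def keyframe_cmd(video_path: str, out_dir: str, scene_threshold: float = 0.4) -> list[str]:
    """Build an FFmpeg command that keeps only frames starting a new scene.
    The select filter's `scene` score marks visual transitions; `showinfo`
    logs each kept frame's timestamp, from which segment spans can be read.
    The 0.4 threshold is an illustrative choice, not the paper's setting."""
    vf = f"select='gt(scene,{scene_threshold})',showinfo"
    return ["ffmpeg", "-i", video_path,
            "-vf", vf, "-vsync", "vfr",
            f"{out_dir}/%05d.png"]

cmd = keyframe_cmd("lecture.mp4", "frames")
print(cmd)
```

Running the returned command writes one PNG per detected scene change; the per-frame timestamps needed for alignment come from the `showinfo` log on stderr.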
Following [22], we run FFmpeg (https://ffmpeg.org/) to extract keyframes—those delineating scene boundaries and often indicating significant visual transitions—and record the corresponding temporal span of each segment $(f_{\mathrm{start}}, f_{\mathrm{end}})$. Each candidate frame is then evaluated by GPT-4o [39] under four criteria: (1) image quality, evaluating the clarity and medical relevance of the frame; (2) prominence of medical content, determining if the frame predominantly consists of medical imagery; (3) informative content, checking if the frame is understandable and holds significant information; and (4) privacy, ensuring the frame excludes unrelated human faces, such as those of presenters in video conferences. Note that only frames satisfying all four requirements are retained. More details about the frame filtering criteria can be found in Appendix E.1. This filtering step leaves us with a sequence of qualified keyframes and their temporal spans:

$$S_F = [F_1, \cdots, F_m], \quad D_F = [(f^1_{\mathrm{start}}, f^1_{\mathrm{end}}), \cdots, (f^m_{\mathrm{start}}, f^m_{\mathrm{end}})], \qquad (1)$$

where $m$ is the number of extracted medical frames, and $S_F$ and $D_F$ are the sequences of frames and times, respectively.

Text Recognition. We next transcribe the audio track with Whisper [31]. The model returns a sequence of $n$ text snippets and their time stamps:

$$S_T = [T_1, \cdots, T_n], \quad D_T = [(t^1_{\mathrm{start}}, t^1_{\mathrm{end}}), \cdots, (t^n_{\mathrm{start}}, t^n_{\mathrm{end}})]. \qquad (2)$$

Pair Generation. Our third task is to pair each medical frame with its corresponding caption. Intuitively, each frame can simply be paired with the text snippets that emerge concurrently with it during
the same time interval. However, narration in medical videos can lag behind or precede the exact moment a frame is shown. To associate each frame $F_i$ with all relevant speech, we define a symmetric margin of $\Delta$ seconds around the frame's interval and gather every transcript snippet whose span intersects the window $[f^i_{\mathrm{start}} - \Delta,\; f^i_{\mathrm{end}} + \Delta]$. All snippets within this window are concatenated to form a coarse caption $\tilde{C}_i = [T_j, T_{j+1}, \dots, T_k]$. We then leverage GPT-4o to enhance the quality of $\tilde{C}_i$. Specifically, GPT-4o is instructed to (i) remove statements unrelated to the displayed frame and (ii) refine the description to ensure the correct usage of clinical terminology. Formally,

$$C_i = \text{GPT-4o}(\tilde{C}_i, F_i \mid I_{\mathrm{rephrase}}), \qquad (3)$$

where $C_i$ denotes the refined caption, and $I_{\mathrm{rephrase}}$ is the prompt (see Appendix E.1 for more details). The final frame-caption pair is $P_i = \{F_i, C_i\}$, and the sequence of frame-caption pairs of the entire video is $S_P = [P_1, \cdots, P_n]$.

3.3 Multi-Frame Merging

The paired frames described above usually belong to longer narrative units—for example, a radiologist may spend several consecutive slides discussing the same lesion. To capture such continuity, we merge adjacent frame-caption pairs into multi-frame "clips" whenever their captions describe the same clinical concept. The paired caption of each frame already provides a description of its visual content; hence, we rely entirely on the textual correlation between captions to determine whether two frames are connected. Specifically, as illustrated in Figure 2(c), for every consecutive pair $P_i = \{F_i, C_i\}$ and $P_{i+1} = \{F_{i+1}, C_{i+1}\}$, we ask GPT-4o (prompt in Appendix E.2) whether the two captions are correlated. If yes, we combine the two pairs: $P_{[i,i+1]} = ([F_i, F_{i+1}], [C_i \oplus C_{i+1}])$, where $\oplus$ denotes text concatenation. We then compare the merged caption $[C_i \oplus C_{i+1}]$ with the next caption $C_{i+2}$; if the relation persists, we append $P_{i+2}$ to the group.
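This greedy merge can be sketched as follows, with `correlated` standing in for the GPT-4o judgment of Appendix E.2 and the toy captions purely illustrative; the five-frame cap is the limit the paper adopts:

```python
MAX_FRAMES = 5  # group size limit adopted in the paper

def merge_clips(pairs, correlated):
    """Greedily merge consecutive (frame, caption) pairs into clips while
    correlated(merged_caption, next_caption) holds, capping each clip at
    MAX_FRAMES frames. `correlated` stands in for the GPT-4o check."""
    clips = []
    frames, caption = [pairs[0][0]], pairs[0][1]
    for frame, cap in pairs[1:]:
        if len(frames) < MAX_FRAMES and correlated(caption, cap):
            frames.append(frame)
            caption = caption + " " + cap   # the (+) caption concatenation
        else:
            clips.append((frames, caption))  # close the current clip
            frames, caption = [frame], cap   # start a new one
    clips.append((frames, caption))
    return clips

def same_topic(a, b):
    """Toy correlation oracle: captions about the same structure match."""
    return ("tibia" in a) == ("tibia" in b)

pairs = [("F1", "tibia lesion"), ("F2", "tibia marrow signal"), ("F3", "normal liver")]
clips = merge_clips(pairs, same_topic)
print([f for f, _ in clips])  # [['F1', 'F2'], ['F3']]
```

Note that, as in the paper, each new candidate is compared against the already merged caption of the growing clip, not just its immediate predecessor.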
This sliding process continues until (i) the next caption is judged unrelated or (ii) the group reaches a maximum of five frames, the limit we adopt in this work. Applying the above procedure to all videos yields 7,998 multi-frame clips, each containing 2-5 medically coherent frame-caption pairs. These clips constitute the basic building blocks for the subsequent VQA-item generation stage.

3.4 Question-Answer Generation

As shown in Figure 2(d), for each merged group $P_{[i,i+1,\cdots]} = \{[F_i, F_{i+1}, \cdots], [C_i \oplus C_{i+1} \oplus \cdots]\}$, we instruct GPT-4o to generate challenging multiple-choice questions. Formally,

$$Q, A, R = \text{GPT-4o}([C_i \oplus C_{i+1} \oplus \cdots] \mid I_{\mathrm{gen}}), \qquad (4)$$

where $Q$, $A$, and $R$ are the generated question, the correct answer, and the reasoning, respectively. $I_{\mathrm{gen}}$ is the generation prompt, enforcing four requirements: (1) Information Grounding: all questions must rely solely on visual evidence explicitly described in the caption; (2) Advanced Clinical Reasoning: each question should probe skills such as anatomical localization and differential diagnosis; (3) Contextual Interaction: the wording must reference the images in order (e.g., "in the first image ..., whereas in the third image ...") and require synthesizing information across them; (4) Distraction Options: every item includes plausible but incorrect answer choices that differ from the ground truth in clinically meaningful details. The complete $I_{\mathrm{gen}}$ is provided in
Appendix E.3. Lastly, each clip is packaged as $\{Q, A, R, [F_i, F_{i+1}, \cdots]\}$, forming a single entry.

3.5 Data Filtering

Difficulty Filtering. To ensure that MedFrameQA is sufficiently challenging, we use three advanced MLLMs—GPT-4-Turbo-V [28], o1 [40], and GPT-4o [39]—for further filtering. If any of the models selects the correct option, the question is deemed too easy and discarded. This step trims the pool from 4,457 to 3,654 items.

Human Evaluation. Additionally, we conduct a manual evaluation to eliminate entries featuring low-quality frames. In detail, we exclude entries with frames that (i) are blurred or display overlapping visuals due to faulty video extraction; (ii) show recognizable human faces, infringing upon the privacy guidelines described in Section 3.2; or (iii) are devoid of significant visual medical content. As a result, 803 entries were excluded, yielding a final benchmark set of 2,851 high-quality entries.

4 Experiments

4.1 Data Statistics

In this section we summarize the data distribution of MedFrameQA. Starting from the 3,420 instructional videos collected in Section 3.1, we extract 111,942 keyframes and retain 9,237 high-quality, medically relevant frames. These frames are used to construct 2,851 multi-image, closed-ended, single-choice VQA pairs, which span 9 human body systems and 43 organs, featuring 114 unique keyword combinations derived from the most common diseases and their associated diagnostic imaging modalities for each organ. Each generated VQA pair consists of 2-5 frames, accompanied by a challenging question that requires integrating information across all provided frames to answer correctly. The composition of body systems, organs, and modalities in MedFrameQA is provided in Appendix A and shown in Figure 5 (a), (b), and (c), respectively. We stress that the defining feature of MedFrameQA is that every question is tethered to multiple images, deliberately pushing models to reason across frames—a core requirement in real-world diagnosis.
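These counts also reproduce the "# Rate" statistic of Table 1, which is simply images divided by questions:

```python
def image_rate(n_images: int, n_questions: int) -> float:
    """# Rate = images per question, as used in Table 1."""
    return round(n_images / n_questions, 2)

print(image_rate(9_237, 2_851))   # 3.24 (MedFrameQA: 9,237 frames / 2,851 items)
print(image_rate(2_852, 2_000))   # 1.43 (MedXpertQA MM)
print(image_rate(315, 3_515))     # 0.09 (VQA-RAD)
```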
Concretely, among the 2,851 VQA items, 1,186 pairs contain 2 frames, 602 pairs contain 3 frames, 256 pairs contain 4 frames, and 807 pairs contain 5 frames. We also present the distribution of frames per question in Figure 5(e).

4.2 Models

We evaluate both proprietary and open-source MLLMs on MedFrameQA, encompassing reasoning and non-reasoning models, with a particular focus on recent advancements in medical reasoning. For evaluation, we use the prompt template from MMMU-Pro [17] (see details in Appendix E.4).

Reasoning Models: We evaluate recent reasoning models on MedFrameQA, including the proprietary models o4-mini [41], o3 [41], o1 [40], Claude-3.7-Sonnet [42], and Gemini-2.5-Flash [43]. We also include the open-source reasoning model QvQ-72B-Preview [44].

Non-Reasoning Models: We also evaluate non-reasoning models, including the proprietary models GPT-4o [39], GPT-4o-mini [39], and GPT-4-Turbo-V [28], as well as the open-source model Qwen2.5-VL-72B-Instruct [45].

4.3 Main Results

Advanced MLLMs struggle to holistically understand multiple images. Table 2 presents the evaluation of 10 advanced MLLMs on MedFrameQA. In general, all assessed models demonstrate persistently low accuracy, with the peak accuracy remaining below 55.00%.

Table 2: Accuracy of Models on MedFrameQA. We report the system-wise accuracy of models on MedFrameQA. The results are averaged over all the tasks in MedFrameQA. The best results on each system and average accuracy
are highlighted in bold. In general, all assessed models demonstrate persistently low accuracy, and the system-wise results reveal substantial variability in task difficulty.

Model                      CNS    RES    CIR    DIG    URI    REP    END    MSK    AUX  |  Avg
Proprietary Reasoning Models
o1                       46.91  48.88  49.49  47.45  49.03  42.26  47.68  51.59  48.75 | 47.91
o3                       47.81  52.00  50.00  48.48  50.71  45.02  51.84  54.90  50.41 | 50.18
o4-mini                  46.03  49.78  48.74  48.63  51.85  43.62  52.44  53.38  50.82 | 49.40
Gemini-2.5-Flash         48.82  58.26  57.21  50.25  48.61  55.81  55.38  60.21  52.85 | 54.75
Claude-3.7-Sonnet        49.21  46.09  53.23  50.25  49.07  47.57  47.81  52.42  49.59 | 49.67
Open-Source Reasoning Models
QvQ-72B-Preview          45.24  47.83  50.75  42.00  47.91  49.26  45.82  46.86  44.72 | 46.59
Proprietary Non-Reasoning Models
GPT-4o                   48.82  49.13  37.31  50.00  43.98  45.88  46.22  43.60  44.31 | 45.67
GPT-4o-mini              41.73  36.52  39.30  28.36  35.65  33.83  30.68  34.95  34.96 | 34.55
GPT-4-Turbo-V            45.28  46.09  42.79  49.75  43.06  48.63  49.80  45.16  46.75 | 46.69
Open-Source Non-Reasoning Models
Qwen2.5-VL-72B-Instruct  40.08  45.22  37.81  37.00  36.74  40.80  42.63  43.55  42.28 | 40.89

The proprietary model GPT-4o reaches an average accuracy of 45.67%, significantly lower than its performance on single-image medical VQA (69.91% on VQA-RAD [46], as reported by [8]). Meanwhile, the leading open-source model, Qwen2.5-VL-72B-Instruct, achieves merely 40.89% accuracy. Together, these findings suggest that current advanced MLLMs lack the capability to thoroughly analyze multiple medical images.

Reasoning enhances multi-image understanding. As shown in Table 2, we find that reasoning MLLMs consistently outperform non-reasoning ones. Gemini-2.5-Flash attains the highest accuracy among all models, notably outperforming the top non-reasoning model GPT-4o by 9.08% (54.75% vs. 45.67%). Among the open-source models, QvQ-72B-Preview achieves an accuracy of 46.59%, a 5.70% improvement over its non-reasoning counterpart, Qwen2.5-VL-72B-Instruct.
This indicates that reasoning is particularly beneficial in clinical scenarios, which frequently involve multiple images.

Overlooking or misinterpreting frames hinders reasoning across an image sequence. Despite the relatively enhanced performance of reasoning models, their accuracy is still limited. Our investigation reveals that this arises from neglecting or misinterpreting intermediary images during continuous reasoning over an image sequence. Here, we present case studies highlighting instances where o1 fails to provide correct reasoning steps for questions in MedFrameQA.

• Case 1: Negligence of important information within multiple frames. In Figure 3, we present a case where o1 fails to integrate important information across multiple frames, leading to flawed overall reasoning. While o1 correctly identifies the "polar vessel sign" in the Doppler frame as suggestive of a parathyroid adenoma, it neglects anatomical cues from the transverse and sagittal views. These clearly demonstrate that the lesion is located posterior and inferior to the thyroid gland with a cranial-caudal orientation—hallmark features of parathyroid adenomas. By focusing solely on vascular features in a single frame and ignoring spatial localization in the others, o1 misses the most definitive diagnostic evidence and produces an incorrect conclusion.

• Case 2: A mistake drawn from a single image results in significant errors in subsequent reasoning. In Figure 4, we present a case where o1 fails to extract the correct spatial cue from a critical frame, leading to a cascade of reasoning errors. Specifically, o1 misinterprets the axial image by identifying medial
displacement of the nerve root, despite clear imaging evidence of lateral displacement caused by a foraminal disc herniation. This initial misjudgment propagates through its reasoning chain, ultimately resulting in the selection of an anatomically incorrect answer that contradicts the information integrated across both frames.

[Figure 3: Failure case study of o1 on MedFrameQA. Negligence of important information across multiple frames. The question asks, from MRI findings across the images, for the progression and current state of the pathology in the tibia and fibula. o1 overlooked critical features in the second and third frames—the bright marrow signal and periosteal elevation in the fibula indicating acute inflammation—and, while correctly identifying the tibia's chronic changes, answered C instead of the correct E. Source video: https://youtu.be/CoTD37V8neQ]

Table 3: Accuracy (%) of Models by Frame Count and Modality on MedFrameQA. We
report the accuracy of models on questions in MedFrameQA grouped by frame count, with the standard deviation (SD) across frame counts, and by modality. We empirically observe that accuracy fluctuates with increasing frame count and varies significantly across common imaging modalities.

                           by Frame Count                 by Modality
Model                      2      3      4      5     SD |   CT    MRI   Ultrasound  X-ray  Other
o1                       48.16  45.64  51.43  48.15  2.37 | 48.98  45.40    49.05    49.16  51.64
o3                       50.00  47.46  53.60  51.38  2.57 | 50.09  48.57    51.45    53.06  52.38
o4-mini                  50.21  46.23  50.00  50.37  1.99 | 48.08  48.85    52.34    50.33  53.49
Gemini-2.5-Flash         53.54  55.48  55.47  55.76  1.02 | 54.57  53.60    57.36    58.14  49.24
QvQ-72B-Preview          46.88  45.91  46.48  46.69  0.42 | 45.45  45.24    50.65    44.85  57.58
GPT-4-Turbo-V            47.47  45.51  46.88  46.34  0.83 | 46.83  43.48    50.65    49.17  51.52
GPT-4o                   47.30  45.18  40.23  45.35  3.01 | 45.52  43.27    48.58    47.51  51.52
GPT-4o-mini              35.16  36.21  32.42  33.09  1.77 | 35.26  34.31    34.88    34.55  29.55
Claude-3.7-Sonnet        49.41  48.01  51.56  50.68  1.55 | 50.75  49.11    49.10    49.83  46.21
Qwen2.5-VL-72B-Instruct  41.99  40.40  38.67  40.32  1.36 | 38.99  40.73    42.38    42.52  49.24

4.4 Evaluation across Anatomical Structures and Frame Numbers

Comparisons between anatomical structures and modalities. We report results for nine systems: Central Nervous System (CNS), Respiratory System (RES), Circulatory System (CIR), Digestive System (DIG), Urinary System (URI), Reproductive System (REP), Endocrine System (END), Musculoskeletal System (MSK), and Auxiliary (AUX). The system-wise performance reported in Table 2 reveals substantial variability in task difficulty. For instance, Gemini-2.5-Flash achieves an accuracy of 60.21% on questions related to the musculoskeletal system, but only 48.61% on the urinary system, an accuracy gap of 11.60 percentage points. In Appendix D, we present a detailed analysis of performance variation across four representative organs in MedFrameQA. We also report the performance of MLLMs across different imaging modalities in Table 3.
Notably, accuracy varies significantly across common modalities such as CT, MRI, Ultrasound, and X-ray. QvQ-72B-Preview exhibits a 5.80% performance gap between Ultrasound and X-ray, whereas Gemini-2.5-Flash shows a 4.54% gap between MRI and X-ray. These discrepancies across anatomical structures and modalities highlight the modality sensitivity of current MLLMs, suggesting that training should include more diverse and balanced modality-organ combinations to improve generalization.

[Figure 4: Failure case study of o1 on MedFrameQA. A mistake originating from a single image can result in significant errors in subsequent reasoning. The question asks for the most likely anatomical and pathological relationship between the nerve root and a right-sided disc herniation across two MRI views. o1 made a directional error when interpreting the first frame—claiming medial displacement of the nerve root where the imaging shows lateral displacement by a foraminal disc herniation—which propagated through its reasoning chain and led to answer A instead of the correct B. Source video: https://youtu.be/xplhH8SaO7E]

Comparisons between VQAs with different numbers of frames. In Table 3, we report the accuracy of models on questions in MedFrameQA, grouped by the number of frames each question contains. Empirically, we observe that accuracy fluctuates as the number of images per question increases, with performance improving at certain frame counts and declining at others.
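The SD column of Table 3 is consistent with the sample standard deviation of each model's four per-frame-count accuracies, which can be checked directly from the reported numbers:

```python
from statistics import stdev

# Per-frame-count accuracies (2, 3, 4, 5 frames) from Table 3.
o1_acc = [48.16, 45.64, 51.43, 48.15]
gpt4o_acc = [47.30, 45.18, 40.23, 45.35]

# statistics.stdev computes the sample standard deviation (n - 1 denominator).
print(round(stdev(o1_acc), 2))      # 2.37, matching the table
print(round(stdev(gpt4o_acc), 2))   # 3.01, matching the table
```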
Among the MLLMs, GPT-4o exhibits substantial fluctuation, with a standard deviation of 3.01, whereas QvQ-72B-Preview shows minimal variation, with a standard deviation of just 0.42. These fluctuations suggest that model performance is not strictly determined by the number of frames, but may instead be influenced by the complexity or redundancy of visual information across frames.

5 Conclusion and Limitations

This paper introduces MedFrameQA, a multi-image medical visual question answering benchmark comprising 2,851 multi-image multiple-choice questions, sourced from 3,420 medical videos spanning 114 keywords and covering over 43 organs. We also propose an automated pipeline that generates high-quality multi-image VQA data from YouTube while ensuring semantic progression and contextual consistency across frames. Unlike existing datasets that rely on single-image inputs or lack detailed reasoning behind the answers, MedFrameQA pairs multi-image questions with detailed reasoning processes; each question contains 2-5 images (3.24 on average). We comprehensively benchmark ten state-of-the-art models, whose accuracies fall predominantly below 50%. While MedFrameQA reveals clear evidence of current MLLMs' inability to handle multi-image clinical reasoning questions, effective strategies to enhance their multi-image reasoning capabilities remain underexplored. Future work will focus on developing and evaluating methods to improve such capabilities. We believe MedFrameQA will serve as a valuable resource for advancing research in multimodal medical AI and fostering the development of more capable diagnostic reasoning systems.

Acknowledgments and Disclosure of Funding

We thank the Microsoft Accelerate Foundation Models Research Program for supporting our computing needs.

References

[1] Yunfei Xie, Ce Zhou, Lang Gao, Juncheng Wu, Xianhang Li, Hong-Yu Zhou, Sheng Liu, Lei Xing, James Zou, Cihang Xie, and Yuyin Zhou. Medtrinity-25m: A large-scale multimodal dataset with multigranular annotations for medicine. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=IwgmgidYPS.

[2] OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. doi: 10.48550/ARXIV.2303.08774. URL https://doi.org/10.48550/arXiv.2303.08774.

[3] Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/5abcdf8ecdcacba028c6662789194572-Abstract-Datasets_and_Benchmarks.html.

[4] Tao Tu, Shekoofeh Azizi, Danny Driess, Mike Schaekermann, Mohamed Amin, Pi-Chuan Chang, Andrew Carroll, Chuck Lau, Ryutaro Tanno, Ira Ktena, Basil Mustafa, Aakanksha Chowdhery, Yun Liu, Simon Kornblith, David J. Fleet, Philip Andrew Mansfield, Sushant Prakash, Renee Wong, Sunny Virmani, Christopher Semturs, S. Sara Mahdavi, Bradley Green, Ewa Dominowska, Blaise Agüera y Arcas, Joelle K. Barral, Dale R. Webster, Gregory S. Corrado, Yossi Matias, Karan Singhal, Pete Florence, Alan Karthikesalingam, and Vivek Natarajan. Towards generalist biomedical AI. CoRR, abs/2307.14334, 2023. doi: 10.48550/ARXIV.2307.14334. URL https://doi.org/10.48550/arXiv.2307.14334.
[5] Khaled Saab, Tao Tu, Wei-Hung Weng, Ryutaro Tanno, David Stutz, Ellery Wulczyn, Fan Zhang, Tim Strother, Chunjong Park, Elahe Vedadi, Juanma Zambrano Chaves, Szu-Yeu Hu, Mike Schaekermann, Aishwarya Kamath, Yong Cheng, David G. T. Barrett, Cathy Cheung, Basil Mustafa, Anil Palepu, Daniel McDuff, Le Hou, Tomer Golany, Luyang Liu, Jean-Baptiste Alayrac, Neil Houlsby, Nenad Tomasev, Jan Freyberg, Charles Lau, Jonas Kemp, Jeremy Lai, Shekoofeh Azizi, Kimberly Kanada, SiWai Man, Kavita Kulkarni, Ruoxi Sun, Siamak Shakeri, Luheng He, Benjamin Caine, Albert Webson, Natasha Latysheva, Melvin Johnson, Philip Andrew Mansfield, Jian Lu, Ehud Rivlin, Jesper Anderson, Bradley Green, Renee Wong, Jonathan Krause, Jonathon Shlens, Ewa Dominowska, S. M. Ali Eslami, Katherine Chou, Claire Cui, Oriol Vinyals, Koray Kavukcuoglu, James Manyika, Jeff Dean, Demis Hassabis, Yossi Matias, Dale R. Webster, Joelle K. Barral, Greg Corrado, Christopher Semturs, S. Sara Mahdavi, Juraj Gottweis, Alan Karthikesalingam, and Vivek Natarajan. Capabilities of gemini models in medicine. CoRR, abs/2404.18416, 2024. doi: 10.48550/ARXIV.2404.18416. URL https://doi.org/10.48550/arXiv.2404.18416.

[6] Xiaoke Huang, Juncheng Wu, Hui Liu, Xianfeng Tang, and Yuyin Zhou. m1: Unleash the potential of test-time scaling for medical reasoning with large language models. arXiv preprint arXiv:2504.00869, 2025.

[7] Juncheng Wu, Wenlong Deng, Xingxuan Li, Sheng Liu, Taomian Mi, Yifan Peng, Ziyang Xu, Yi Liu, Hyunjin Cho, Chang-In Choi, et al. Medreason: Eliciting factual medical reasoning steps in llms via knowledge graphs. arXiv preprint arXiv:2504.00993, 2025.

[8] Joseph Lau, Swagata Gayen, Asma Ben Abacha, and Dina Demner-Fushman. A dataset of clinically generated visual questions and answers about radiology images. Scientific Data, 5:180251, 2018. doi: 10.1038/sdata.2018.251. URL https://doi.org/10.1038/sdata.2018.251.

[9] Asma Ben Abacha, Sadid
A. Hasan, Vivek V. Datla, Joey Liu, Dina Demner-Fushman, and Henning Müller. Vqa-med: Overview of the medical visual question answering task at imageclef 2019. In Working Notes of CLEF 2019, volume 2380 of CEUR Workshop Proceedings, Lugano, Switzerland, September 9-12 2019. CEUR-WS.org. URL https://ceur-ws.org/Vol-2380/paper_272.pdf.

[10] Asma Ben Abacha, Mourad Sarrouti, Dina Demner-Fushman, Sadid A. Hasan, and Henning Müller. Overview of the vqa-med task at imageclef 2021: Visual question answering and generation in the medical domain. In CLEF 2021 Working Notes, CEUR Workshop Proceedings, Bucharest, Romania, September 21-24 2021. CEUR-WS.org.

[11] Xuehai He, Yichen Zhang, Luntian Mou, Eric P. Xing, and Pengtao Xie. Pathvqa: 30000+ questions for medical visual question answering. CoRR, abs/2003.10286, 2020. URL https://arxiv.org/abs/2003.10286.

[12] Bo Liu, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, and Xiao-Ming Wu. Slake: A semantically-labeled knowledge-enhanced dataset for medical visual question answering. In 18th IEEE International Symposium on Biomedical Imaging, ISBI 2021, Nice, France, April 13-16, 2021, pages 1650–1654. IEEE, 2021. doi: 10.1109/ISBI48211.2021.9434010. URL https://doi.org/10.1109/ISBI48211.2021.9434010.

[13] Xiaoman Zhang, Chaoyi Wu, Ziheng Zhao, Weixiong Lin, Ya Zhang, Yanfeng Wang, and Weidi Xie. PMC-VQA: visual instruction tuning for medical visual question answering. CoRR, abs/2305.10415, 2023. doi: 10.48550/ARXIV.2305.10415. URL https://doi.org/10.48550/arXiv.2305.10415.

[14] Yutao Hu, Tianbin Li, Quanfeng Lu, Wenqi Shao, Junjun He, Yu Qiao, and Ping Luo. Omnimed-vqa: A new large-scale comprehensive evaluation benchmark for medical LVLM. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 22170–22183. IEEE, 2024. doi: 10.1109/CVPR52733.2024.02093. URL https://doi.org/10.1109/CVPR52733.2024.02093.
[15] Pengcheng Chen, Jin Ye, Guoan Wang, Yanjun Li, Zhongying Deng, Wei Li, Tianbin Li, Haodong Duan, Ziyan Huang, Yanzhou Su, Benyou Wang, Shaoting Zhang, Bin Fu, Jianfei Cai, Bohan Zhuang, Eric J. Seibel, Junjun He, and Yu Qiao. Gmai-mmbench: A comprehensive multimodal evaluation benchmark towards general medical AI. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024, 2024. URL http://papers.nips.cc/paper_files/paper/2024/hash/ab7e02fd60e47e2a379d567f6b54f04e-Abstract-Datasets_and_Benchmarks_Track.html.

[16] Xiang Yue, Yuansheng Ni, Tianyu Zheng, Kai Zhang, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MMMU: A massive multi-discipline multimodal understanding and reasoning benchmark for expert AGI. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 9556–9567. IEEE, 2024. doi: 10.1109/CVPR52733.2024.00913. URL https://doi.org/10.1109/CVPR52733.2024.00913.

[17] Xiang Yue, Tianyu Zheng, Yuansheng Ni, Yubo Wang, Kai Zhang, Shengbang Tong, Yuxuan Sun, Botao Yu, Ge Zhang, Huan Sun, Yu Su, Wenhu Chen, and Graham Neubig. Mmmu-pro: A more robust multi-discipline multimodal understanding benchmark. CoRR, abs/2409.02813, 2024. doi: 10.48550/ARXIV.2409.02813. URL https://doi.org/10.48550/arXiv.2409.02813.

[18] Yuxin Zuo, Shang Qu, Yifei Li, Zhangren Chen, Xuekai Zhu, Ermo Hua,
Kaiyan Zhang, Ning Ding, and Bowen Zhou. Medxpertqa: Benchmarking expert-level medical reasoning and understanding. CoRR, abs/2501.18362, 2025. doi: 10.48550/ARXIV.2501.18362. URL https://doi.org/10.48550/arXiv.2501.18362.

[19] Wafa Osman, Faiza Mohamed, Mohamed Elhassan, and Abdelilah Shoufan. Is youtube a reliable source of health-related information? a systematic review. BMC Medical Education, 22(1):382, 2022.

[20] Martin Gameli Akakpo and Patrick Kafui Akakpo. Recognizing the role of youtube in medical education. Discover Education, 3(1):73, 2024. ISSN 2731-5525. doi: 10.1007/s44217-024-00131-9. URL https://doi.org/10.1007/s44217-024-00131-9.

[21] Adeeb Derakhshan, Linda Lee, Prabhat Bhama, Eric Barbarite, and David Shaye. Assessing the educational quality of 'youtube' videos for facelifts. American Journal of Otolaryngology, 40(2):156–159, Mar-Apr 2019. ISSN 1532-818X. doi: 10.1016/j.amjoto.2019.01.001. URL https://doi.org/10.1016/j.amjoto.2019.01.001. Epub 2019 Jan 4.

[22] Wisdom Oluchi Ikezogwo, Mehmet Saygin Seyfioglu, Fatemeh Ghezloo, Dylan Stefan Chan Geva, Fatwir Sheikh Mohammed, Pavan Kumar Anand, Ranjay Krishna, and Linda G. Shapiro. Quilt-1m: One million image-text pairs for histopathology. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/775ec578876fa6812c062644964b9870-Abstract-Datasets_and_Benchmarks.html.

[23] Yiqi Wang, Wentao Chen, Xiaotian Han, Xudong Lin, Haiteng Zhao, Yongfei Liu, Bohan Zhai, Jianbo Yuan, Quanzeng You, and Hongxia Yang. Exploring the reasoning abilities of multimodal large language models (mllms): A comprehensive survey on emerging trends in multimodal reasoning. CoRR, abs/2401.06805, 2024.
doi: 10.48550/ARXIV.2401.06805. URL https://doi.org/10.48550/arXiv.2401.06805.

[24] Yunfei Xie, Juncheng Wu, Haoqin Tu, Siwei Yang, Bingchen Zhao, Yongshuo Zong, Qiao Jin, Cihang Xie, and Yuyin Zhou. A preliminary study of o1 in medicine: Are we closer to an ai doctor? arXiv preprint arXiv:2409.15277, 2024.

[25] Hardy Chen, Haoqin Tu, Fali Wang, Hui Liu, Xianfeng Tang, Xinya Du, Yuyin Zhou, and Cihang Xie. Sft or rl? an early investigation into training r1-like reasoning large vision-language models. arXiv preprint arXiv:2504.11468, 2025.

[26] Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. Openvlthinker: An early exploration to complex vision-language reasoning via iterative self-improvement. arXiv preprint arXiv:2503.17352, 2025.

[27] Rawan AlSaad, Alaa Abd-Alrazaq, Sabri Boughorbel, Arfan Ahmed, Max-Antoine Renault, Rafat Damseh, and Javaid Sheikh. Multimodal large language models in health care: Applications, challenges, and future outlook. Journal of Medical Internet Research, 26:e59505, 2024. doi: 10.2196/59505. URL https://www.jmir.org/2024/1/e59505. Epub ahead of print.

[28] OpenAI. GPT-4V(ision) system card. OpenAI, 2023. URL https://cdn.openai.com/papers/GPTV_System_Card.pdf.

[29] Yaoting Wang, Shengqiong Wu, Yuecheng Zhang, Shuicheng Yan, Ziwei Liu, Jiebo Luo, and Hao Fei. Multimodal chain-of-thought reasoning: A comprehensive survey. CoRR, abs/2503.12605, 2025. doi: 10.48550/ARXIV.2503.12605. URL https://doi.org/10.48550/arXiv.2503.12605.

[30] Jiazhen Pan, Che Liu, Junde Wu, Fenglin Liu, Jiayuan Zhu, Hongwei Bran Li, Chen Chen, Cheng Ouyang, and Daniel Rueckert. Medvlm-r1: Incentivizing medical reasoning capability of vision-language models (vlms) via reinforcement learning. CoRR, abs/2502.19634, 2025. doi: 10.48550/ARXIV.2502.19634. URL https://doi.org/10.48550/arXiv.2502.19634.

[31] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition
via large-scale weak supervision. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 28492–28518. PMLR, 2023. URL https://proceedings.mlr.press/v202/radford23a.html.

[32] Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, and Yejin Choi. MERLOT: multimodal neural script knowledge models. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 23634–23651, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/c6d4eb15f1e84a36eff58eca3627c82e-Abstract.html.

[33] Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, Cliff Wong, Andrea Tupini, Yu Wang, Matt Mazzola, Swadheen Shukla, Lars Liden, Jianfeng Gao, Angela Crabtree, Brian Piening, Carlo Bifulco, Matthew P. Lungren, Tristan Naumann, Sheng Wang, and Hoifung Poon. Biomedclip: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs, 2025. URL https://arxiv.org/abs/2303.00915.

[34] Deepak Gupta, Kush Attal, and Dina Demner-Fushman. A dataset for medical instructional video classification and question answering. Scientific Data, 10(1):158, 2023. ISSN 2052-4463. doi: 10.1038/s41597-023-02036-y. URL https://doi.org/10.1038/s41597-023-02036-y.

[35] Ming Hu, Lin Wang, Siyuan Yan, Don Ma, Qingli Ren, Peng Xia, Wei Feng, Peibo Duan, Lie Ju, and Zongyuan Ge. Nurvid: A large expert-level video database for nursing procedure activity understanding.
In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/3a48b0eaba26ba862220a307a9edb0bb-Abstract-Datasets_and_Benchmarks.html.

[36] Emmett D Goodman, Krishna K Patel, Yilun Zhang, William Locke, Chris J Kennedy, Rohan Mehrotra, Stephen Ren, Melody Guan, Orr Zohar, Maren Downing, Hao Wei Chen, Jevin Z Clark, Margaret T Berrigan, Gabriel A Brat, and Serena Yeung-Levy. Analyzing surgical technique in diverse open surgical videos with multitask machine learning. JAMA Surgery, 159(2):185–192, Feb 2024. ISSN 2168-6262. doi: 10.1001/jamasurg.2023.6262. The AVOS dataset includes 1,997 annotated open-surgery videos from 23 procedure types across 50 countries.

[37] Tanner Carbonati, Parastou Eslami, Ashish Chaudhari, Elizabeth Herbst, Laurent Fiorina, Pauline Porquet, Christine Henry, Pollard Tom, Brian Gow, Roger G. Mark, Steven Horng, and Nathaniel Greenbaum. Mimic-iv-ecg & mimic-iv-echo: Detection of regional wall motion abnormalities from electrocardiogram using deep learning. JACC, 83(13_Supplement):1172–1172, 2024. doi: 10.1016/S0735-1097(24)03162-0. URL https://www.jacc.org/doi/abs/10.1016/S0735-1097%2824%2903162-0.

[38] Negin Ghamsarian, Yosuf El-Shabrawi, Sahar Nasirihaghighi, Doris Putzgruber-Adamitsch, Martin Zinkernagel, Sebastian Wolf, Klaus Schoeffmann, and Raphael Sznitman. Cataract-1k dataset for deep-learning-assisted analysis of cataract surgery videos. Scientific Data, 11(1):373, 2024. ISSN 2052-4463. doi: 10.1038/s41597-024-03193-4. URL https://doi.org/10.1038/s41597-024-03193-4.

[39] Aaron Hurst, Adam Lerer, Adam P.
Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Madry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex
Paino, Alex Renzin, Alex Tachard Passos, Alexander Kirillov, Alexi Christakis, Alexis Conneau, Ali Kamali, Allan Jabri, Allison Moyer, Allison Tam, Amadou Crookes, Amin Tootoonchian, Ananya Kumar, Andrea Vallone, Andrej Karpathy, Andrew Braunstein, Andrew Cann, Andrew Codispoti, Andrew Galu, Andrew Kondrich, Andrew Tulloch, Andrey Mishchenko, Angela Baek, Angela Jiang, Antoine Pelisse, Antonia Woodford, Anuj Gosalia, Arka Dhar, Ashley Pantuliano, Avi Nayak, Avital Oliver, Barret Zoph, Behrooz Ghorbani, Ben Leimberger, Ben Rossen, Ben Sokolowsky, Ben Wang, Benjamin Zweig, Beth Hoover, Blake Samic, Bob McGrew, Bobby Spero, Bogo Giertler, Bowen Cheng, Brad Lightcap, Brandon Walkin, Brendan Quinn, Brian Guarraci, Brian Hsu, Bright Kellogg, Brydon Eastman, Camillo Lugaresi, Carroll L. Wainwright, Cary Bassin, Cary Hudson, Casey Chu, Chad Nelson, Chak Li, Chan Jun Shern, Channing Conger, Charlotte Barette, Chelsea Voss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Claudia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, and Dane Sherburn. Gpt-4o system card. CoRR, abs/2410.21276, 2024. doi: 10.48550/ARXIV.2410.21276. URL https://doi.org/10.48550/arXiv.2410.21276.
[40] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexander Prokofiev, Alexander Wei, Allison Tam, Ally Bennett, Ananya Kumar, Andre Saraiva, Andrea Vallone, Andrew Duberstein, Andrew Kondrich, Andrey Mishchenko, Andy Applebaum, Angela Jiang, Ashvin Nair, Barret Zoph, Behrooz Ghorbani, Ben Rossen, Benjamin Sokolowsky, Boaz Barak, Bob McGrew, Borys Minaiev, Botao Hao, Bowen Baker, Brandon Houghton, Brandon McKinzie, Brydon Eastman, Camillo Lugaresi, Cary Bassin, Cary Hudson, Chak Ming Li, Charles de Bourcy, Chelsea Voss, Chen Shen, Chong Zhang, Chris Koch, Chris Orsinger, Christopher Hesse, Claudia Fischer, Clive Chan, Dan Roberts, Daniel Kappler, Daniel Levy, Daniel Selsam, David Dohan, David Farhi, David Mely, David Robinson, Dimitris Tsipras, Doug Li, Dragos Oprica, Eben Freeman, Eddie Zhang, Edmund Wong, Elizabeth Proehl, Enoch Cheung, Eric Mitchell, Eric Wallace, Erik Ritter, Evan Mays, Fan Wang, Felipe Petroski Such, Filippo Raso, Florencia Leoni, Foivos Tsimpourlas, Francis Song, Fred von Lohmann, Freddie Sulit, Geoff Salmon, Giambattista Parascandolo, Gildas Chabot, Grace Zhao, Greg Brockman, Guillaume Leclerc, Hadi Salman, Haiming Bao, Hao Sheng, Hart Andrin, Hessam Bagherinezhad, Hongyu Ren, Hunter Lightman, Hyung Won Chung, Ian Kivlichan, Ian O'Connell, Ian Osband, Ignasi Clavera Gilaberte, and Ilge Akkaya. Openai o1 system card. CoRR, abs/2412.16720, 2024. doi: 10.48550/ARXIV.2412.16720. URL https://doi.org/10.48550/arXiv.2412.16720.

[41] OpenAI. Introducing o3 and o4 mini. https://openai.com/index/introducing-o3-and-o4-mini/, April 2025.

[42] Anthropic. Claude 3.7 sonnet and claude code. https://www.anthropic.com/news/claude-3-7-sonnet, February 2025.

[43] Google. Start building with gemini 2.5 flash.
https://developers.googleblog.com/en/start-building-with-gemini-25-flash/, April 2025.

[44] Q. Team. Qvq: To see the world with wisdom. https://qwenlm.github.io/blog/qvq-72b-preview/, December 2024.

[45] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Ming-Hsuan Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl
technical report. CoRR, abs/2502.13923, 2025. doi: 10.48550/ARXIV.2502.13923. URL https://doi.org/10.48550/arXiv.2502.13923.

[46] Qianqi Yan, Xuehai He, Xiang Yue, and Xin Eric Wang. Worse than random? an embarrassingly simple probing evaluation of large multimodal models in medical vqa. arXiv preprint arXiv:2405.20421, 2024.

A Data Distribution

We present detailed data distributions across body systems, organs, and imaging modalities in Figure 5(a), (b), and (c), respectively. A word cloud of keywords in MedFrameQA is shown in Figure 5(d), and the distribution of frame counts per question is provided in Figure 5(e).

Figure 5: Data distribution of MedFrameQA. (a) shows the distribution across body systems (Musculoskeletal 20.3%, Reproductive 16.6%, Digestive 14.1%, Central Nervous 8.9%, Endocrine 8.8%, Respiratory 8.1%, Auxiliary 8.1%, Urinary 7.6%, Circulatory 7.1%); (b) presents the distribution across organs; (c) shows the distribution across imaging modalities (CT 37.6%, MRI 33.6%, ultrasound 13.6%, X-ray 10.6%, others 4.6%); (d) provides a word cloud of keywords in MedFrameQA; and (e) reports the distribution of frame counts per question.

B API Cost

Generating each data entry requires five GPT-4o API calls on average, depending on the number of frames involved in the entry. Constructing the 2,851 data entries therefore required 14,255 API calls in total.
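The reported totals follow from simple per-entry arithmetic; the sketch below verifies them (the function names are our own illustrative choices, not part of the paper's codebase):

```python
def generation_api_calls(n_entries, avg_calls_per_entry=5):
    """GPT-4o calls needed to build the dataset: ~5 per entry on average."""
    return n_entries * avg_calls_per_entry

def evaluation_api_calls(n_entries, n_models):
    """One request per benchmark example for each evaluated model."""
    return n_entries * n_models

print(generation_api_calls(2851))      # 14255, matching the reported total
print(evaluation_api_calls(2851, 10))  # 28510 requests to benchmark ten models
```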
For proprietary models (e.g., GPT-4o, Gemini-2.5-Flash, Claude-3.7-Sonnet), we use their official APIs and perform 2,851 requests per model, corresponding to the number of examples in MedFrameQA. For open-source models (e.g., QvQ-72B-Preview, Qwen2.5-VL-72B-Instruct), we also perform 2,851 requests per model via the APIs provided by Aliyun.

C Keyword List

The search keywords are organized by body system and organ; each keyword pairs a condition with an imaging modality.

central nervous system
  brain: stroke CT; brain tumor MRI; cerebral hemorrhage CT; epilepsy EEG imaging; traumatic brain injury CT
  spinal cord: spinal cord injury MRI; disc herniation MRI; spinal stenosis CT; myelitis MRI
respiratory system
  lung: pneumonia chest radiograph; lung cancer CT; pulmonary embolism CT angiography; chronic obstructive pulmonary disease CT
  trachea bronchi: bronchial asthma bronchography
  pleura: pleural effusion ultrasound
circulatory system
  heart: coronary artery disease angiography; heart failure echocardiography; myocardial infarction CT; cardiomyopathy MRI
  pulmonary arteries: pulmonary embolism CT angiography; pulmonary hypertension CT
  aorta: aortic aneurysm CT; aortic dissection MR angiography
digestive system
  esophagus: esophageal cancer CT; gastroesophageal reflux esophagram; esophageal stricture endoscopic ultrasound
  stomach: gastric cancer CT; peptic ulcer gastroscopy; gastritis upper gastrointestinal series
  liver: liver cirrhosis CT; hepatocellular carcinoma MRI; hepatitis ultrasound
  pancreas: pancreatic cancer CT; acute pancreatitis CT; chronic pancreatitis MRCP
  gallbladder: gallstones ultrasound; cholecystitis HIDA scan; gallbladder cancer CT
  small intestine: Crohn's disease MRI enterography; small bowel obstruction CT; intestinal bleeding capsule endoscopy
  large intestine: colorectal cancer colonoscopy; diverticulitis CT; ulcerative colitis barium enema
urinary system
  kidneys: kidney stones CT; renal cell carcinoma MRI; pyelonephritis ultrasound
  ureters: ureteral stones CT urography; ureteral stricture MR urography
  bladder: bladder cancer cystoscopy; urinary tract infection ultrasound; bladder stones CT
  urethra: urethral stricture urethrography; urethral injury CT urethrography
reproductive system
  testes: testicular cancer ultrasound; testicular torsion Doppler ultrasound; epididymitis ultrasound
  prostate: prostate cancer MRI; benign prostatic hyperplasia TRUS; prostatitis pelvic CT
  penis: erectile dysfunction Doppler ultrasound; Peyronie's disease MRI
  ovaries: ovarian cyst ultrasound; ovarian cancer MRI; polycystic ovary syndrome ultrasound
  uterus: endometrial cancer MRI; uterine fibroids ultrasound; adenomyosis pelvic MRI
  cervix: cervical cancer MRI; cervical dysplasia colposcopy
  vagina: vaginal cancer MRI; vaginal prolapse transvaginal ultrasound
endocrine system
  thyroid: thyroid nodule ultrasound; thyroid cancer scintigraphy; hyperthyroidism neck CT
  parathyroid: parathyroid adenoma scintigraphy; hyperparathyroidism ultrasound
  adrenal glands: adrenal adenoma CT; pheochromocytoma MRI; Cushing's syndrome adrenal scintigraphy
  pancreas (endocrine): insulinoma CT; pancreatic neuroendocrine tumor MRI
  pituitary gland: pituitary adenoma MRI; acromegaly dynamic MRI
  hypothalamus: hypothalamic tumor MRI; hypopituitarism functional MRI
musculoskeletal system
  bones: osteoporosis DEXA; bone fracture radiograph; osteomyelitis MRI
  joints: osteoarthritis radiograph; rheumatoid arthritis MRI; joint effusion ultrasound
  skeletal muscles: muscle tear MRI; myositis ultrasound; muscular dystrophy EMG imaging
  spine: disc herniation MRI; spinal stenosis CT; spondylolisthesis radiograph
auxiliary systems and tissues
  eyes: glaucoma OCT; retinal detachment ultrasound; macular degeneration fundus photography
  ears: otitis media temporal bone CT; hearing loss brain MRI
  skin: melanoma confocal microscopy; skin cancer dermatologic ultrasound
  lymph nodes: lymphoma CT; lymphadenitis ultrasound
  soft tissues: soft tissue sarcoma MRI; lipoma ultrasound
  salivary glands: salivary gland tumor ultrasound; sialadenitis sialography
  breast: breast cancer mammography; fibroadenoma ultrasound; breast cyst MRI

D Comparison of
Organs

We present a detailed organ-wise accuracy comparison of ten state-of-the-art MLLMs on MedFrameQA. Our results reveal substantial performance variation across different organs. While Gemini-2.5-Flash outperforms the other models on average in Table 2, open-source models such as QvQ-72B-Preview demonstrate competitive performance on specific organs, such as the ureters and pulmonary arteries. This variability underscores the sensitivity of MLLM performance to organ-specific anatomical features and highlights the need for future research on improving anatomical generalization across a wide range of clinical scenarios.

Table 4: Accuracy of models by organ on MedFrameQA. We report the organ-wise accuracy of the models on MedFrameQA. The best accuracy is highlighted in bold. Columns, in order: Gemini-2.5-Flash, Claude-3.7-Sonnet, o4-mini, o3, o1, GPT-4o, GPT-4o-mini, GPT-4-Turbo-V, QvQ-72B, Qwen2.5-VL-72B-Instruct.

auxiliary systems and tissues
  soft tissues 48.65 37.84 45.95 39.19 35.14 36.49 32.43 35.14 32.43 33.78
  salivary glands 55.00 50.00 45.00 52.63 47.37 40.00 40.00 45.00 70.00 75.00
  skin 33.33 66.67 50.00 70.00 54.55 75.00 41.67 75.00 41.67 58.33
  breast 52.63 55.26 55.26 57.89 58.33 42.11 39.47 39.47 50.00 36.84
  lymph nodes 61.11 77.78 72.22 72.22 61.11 55.56 27.78 61.11 44.44 38.89
  ears 58.33 47.22 44.44 52.78 57.14 50.00 30.56 55.56 44.44 41.67
  eyes 56.25 50.00 54.17 46.81 51.06 43.75 37.50 52.08 50.00 43.75
central nervous system
  brain 50.00 49.38 42.41 45.86 46.05 51.25 44.38 46.88 42.41 44.30
  spinal cord 46.81 48.94 52.13 51.06 48.35 44.68 37.23 42.55 50.00 32.98
circulatory system
  pulmonary arteries 54.84 56.99 50.54 49.46 51.09 43.01 44.09 47.31 58.06 40.86
  aorta 60.81 48.65 45.21 50.00 45.83 35.14 35.14 41.89 43.24 32.43
  heart 55.88 52.94 51.52 51.52 53.12 26.47 35.29 32.35 47.06 41.18
digestive system
  large intestine 47.29 47.29 42.64 38.28 41.73 48.06 23.26 46.51 33.59 30.47
  esophagus 59.26 51.85 70.37 62.96 59.26 62.96 22.22 62.96 59.26 48.15
  small intestine 61.11 55.56 72.22 58.82 62.50 44.44 16.67 55.56 50.00 33.33
  gallbladder 37.70 44.26 34.43 38.33 41.38 40.98 39.34 47.54 41.67 41.67
  stomach 59.09 59.09 55.17 60.00 54.12 57.95 32.95 56.82 45.45 42.05
  liver 54.90 54.90 52.94 60.78 52.94 50.98 29.41 43.14 49.02 41.18
  pancreas 39.29 35.71 42.86 39.29 35.71 42.86 25.00 42.86 35.71 25.00
endocrine system
  pancreas (endocrine) 41.18 35.29 52.94 35.29 35.29 41.18 17.65 41.18 29.41 29.41
  hypothalamus 56.67 43.33 53.85 50.00 42.31 46.67 43.33 46.67 43.33 53.33
  parathyroid 56.41 38.46 47.37 50.00 57.14 41.03 35.90 46.15 53.85 38.46
  pituitary gland 56.34 56.34 59.15 57.75 56.52 45.07 21.13 47.89 50.70 42.25
  adrenal glands 53.12 43.75 53.12 43.75 25.00 53.12 40.62 43.75 34.38 28.12
  thyroid 58.06 51.61 46.77 55.74 50.00 48.39 30.65 61.29 46.77 51.61
musculoskeletal system
  spine 57.14 49.11 48.21 58.04 48.65 47.32 35.71 50.00 47.62 42.86
  bones 62.68 50.70 51.77 56.83 54.07 43.66 37.32 38.03 46.48 38.73
  skeletal muscles 63.55 61.68 62.62 54.29 50.94 45.79 38.32 51.40 56.31 61.17
  joints 58.53 50.69 52.53 52.31 51.87 40.55 31.34 44.24 42.40 38.71
reproductive system
  vagina 56.88 50.46 44.44 47.17 38.24 49.54 35.78 54.13 48.62 47.71
  penis 42.86 28.57 28.57 14.29 14.29 42.86 28.57 50.00 57.14 42.86
  ovaries 50.79 47.62 44.44 46.77 52.54 42.86 22.22 38.10 65.08 53.97
  prostate 50.63 49.37 40.51 42.86 30.26 46.84 43.04 48.10 45.57 32.91
  cervix 61.29 53.23 41.67 38.98 47.37 48.39 32.26 48.39 40.32 35.48
  testes 64.20 46.91 46.91 51.25 52.50 44.44 34.57 45.68 56.79 34.57
  uterus 52.31 40.00 46.15 46.88 42.19 41.54 32.31 53.85 36.92 38.46
respiratory system
  trachea bronchi 50.00 60.00 55.56 62.50 55.56 70.00 30.00 50.00
70.00 60.00
  lung 59.11 47.29 50.25 53.00 50.51 48.28 35.96 45.32 49.75 45.32
  pleura 52.94 23.53 41.18 35.29 25.00 47.06 47.06 52.94 11.76 35.29
urinary system
  ureters 44.59 44.59 40.54 46.48 42.65 40.54 25.68 45.95 46.58 35.62
  kidneys 50.00 51.19 58.33 50.00 54.32 50.00 38.10 46.43 48.81 38.10
  urethra 52.17 43.48 60.87 43.48 40.91 21.74 47.83 26.09 43.48 43.48
  bladder 51.43 57.14 54.29 65.71 54.29 51.43 42.86 40.00 51.43 31.43

E Prompt Details

E.1 Filter and Rephrase Captions

Prompts for Filtering Non-Medical Images and Rephrasing

prompt_template: You are given the following:
- A **keyframe image** extracted from a YouTube video retrieved using the keyword "{keyword}", which relates to the "{organ}" in the "{system}".
- The keyframe corresponds to the time interval [{frame_start_time}, {frame_end_time}] in the video. You may assume that visual content remains stable during this period.
- A **list of caption segments**, spanning from {start_time} to {end_time} seconds, provided as a JSON array in the `{caption_json_list}` variable. These segments represent the spoken content near the frame's timestamp and may contain information that helps describe or interpret the keyframe image. Each caption object contains:
  - `"startTime"`: start time in seconds
  - `"endTime"`: end
https://arxiv.org/abs/2505.16964v1
time in seconds
  - `"sentence"`: caption content

### Your Task

1. **Determine Benchmark Eligibility**: Answer these questions to guide your reasoning:
   1. Does the image prominently depict clear, authentic medical imaging relevant to "{keyword}" (e.g., sharp radiographs or scans, including multiple images if they are all visible and relevant)?
   2. Is the image primarily composed of medical imaging, even if there are text overlays or minor visual obstructions?
   3. Is the image suitable for inclusion in a medical benchmark dataset (e.g., sharp, intelligible, and relevant to medical imaging, with at least 85% of the image area consisting of meaningful medical imaging, excluding blank regions, borders, or irrelevant content)?
   4. Is the image free of any unrelated human faces, including but not limited to presenters in video conference screenshots (e.g., Zoom speaker windows) or other non-medical human portraits?
2. **Faithful Rephrasing**:
   - Rephrase the caption into a coherent, fluent, and high-quality medical description of the visual content of the current frame, as conveyed solely by the dialogue in the provided captions.
   - The description must use precise medical terminology and reflect a medical imaging context (e.g., radiology or anatomy).
   - Include only information explicitly stated in the captions that directly relates to the current frame's visual content, such as descriptions, identifications, observations, questions, answers, corrections, and transitional statements.
   - Strictly avoid any details not present in the captions, including information from the image itself, external context, or unrelated dialogue (e.g., discussions about other frames or topics).
### Output Format

Return your answer as a valid JSON object, you **should not include markdown in your output**:

{{
  "result": "yes" | "no",
  "reason": "A concise explanation (max 50 words) for why the image is or is not suitable for the benchmark.",
  "captions": all the captions combined together,
  "rephrased_description": "A faithful and fluent rephrasing of the caption content, without hallucination."
}}

If the image is **not** suitable for the benchmark (i.e., `"result": "no"`), then only return the following fields in your output, you **should not include markdown in your output**:

{{
  "result": "no",
  "reason": "A concise explanation (max 50 words) for why the image is not suitable for the benchmark.",
}}

E.2 Transcripts Relation Check

Prompts for Pairing Related Captions

prompt_template: You are given one or more caption segments corresponding to one or more continuous medical keyframes from a video. You do not have access to the actual images. These caption segments come from a medical video retrieved using the keyword "{keyword}", and are related to the body part "{body_part}". Each caption describes the anatomical structures or procedural content visible in its corresponding keyframe. Your task is to analyze the content of all caption segments and determine which segments are discussing the same or closely related medical topic or structure (e.g., same procedure, same organ, or same pathology). Group together all captions that appear to describe the same medical subject. Each group should represent a coherent topic or issue that could be visually identifiable in the corresponding keyframes. Below are all the caption
segments:
```
{caption}
```
Requirements:
- Focus only on medically or visually coherent topics.
- Do not group captions based only on linguistic similarity; there must be a medically meaningful connection.
- Each group must contain at least one caption.
- If a caption clearly describes a different topic from others, place it in its own group.
- For each group, provide a brief explanation in the reason field describing why these captions are grouped together.

Output Format: The output must strictly follow the JSON format below (no markdown, no explanations):

{{
  "frames": [all the caption numbers],
  "pairs_of_related_frames": [
    {{
      "selected_captions": [1, 2],
      "related_reason": "Both captions describe the insertion of a catheter into the same artery."
    }},
    {{
      "selected_captions": [3],
      "related_reason": "This caption describes a different procedure involving the venous system."
    }}
  ]
}}

E.3 Multi-Frame VQA Pair Generation

Prompts for Generating VQA pairs

prompt_template: Your task is to generate an expert-level, medically valuable question that:
- Uses every piece of visual information contained in the captions (treat the captions only as your private description of each image).
- Demands advanced competencies such as anatomical reasoning, differential diagnosis, pathology identification, or procedural planning.
- Is grounded solely in what can be seen on the images. Do not add outside facts unless the finding is directly evident from the described appearance.
- Refers to each picture as "first image", "second image", etc. in the order implied by the captions.
- Never hints at, quotes, or mentions the captions, videos, or any textual description. All wording must make it seem as though the questioner has the images in front of them.
- Add as many plausible but misleading distractors as possible (commonly 4–6 or more).
Craft the incorrect answer choices so they are commonly confused with the correct diagnosis/procedure given the depicted findings, thereby maximizing the likelihood of error for anyone who has not carefully interpreted every visual detail.
- Important: Do not generate questions that test theoretical definitions, textbook knowledge, or general medical concepts alone. Only generate questions whose answers depend on observing specific visual features explicitly described in the captions. Do not ask about general patterns like 'penumbra parameters'; instead, ask how those parameters appear in the actual image described.

Below are all the caption segments:
```
{caption}
```

Output Format (strict JSON structure, no markdown allowed):

{{
  "related_captions": ["caption_1", "caption_2", ...],
  "mcq_questions": [
    {{
      "question": "A medically grounded visual question requiring comparison across the provided images.",
      "options": ["Option A", "Option B", "Option C", "Option D", ...],
      "correct_answer": "Please select the best answer from the given options.",
      "reasoning_chain": "A clear explanation of how the correct answer is visually derived by integrating details from all related images.",
      "supporting_segments": {{
        "caption_1": "Supporting phrase from caption_1.",
        "caption_2": "Supporting phrase from caption_2.",
        "...": "Add additional quotes as needed."
      }}
    }}
  ]
}}

E.4 Benchmark Evaluation

Prompts for Evaluation

GPT Series & Claude & Qwen Series Prompt Template: Answer the following multiple-choice question. Images are provided. The last line of your response should be strictly of the following format:
'Answer: $LETTER' (without quotes) where LETTER is one of the options. For example, if the correct answer is A, your response should be: 'Answer: A'. Think step by step before answering.
Question: {question}
Options: {options}

Gemini Prompt Template: Answer the following multiple-choice question. Images are provided. The last line of your response should be strictly of the following format: 'The final answer is $\\boxed{{LETTER}}$' (without quotes) where LETTER is one of the options. For example, if the correct answer is A, your response should be: 'The final answer is $\\boxed{{A}}$'. Think step by step before answering.
Question: {question}
Options: {options}

QVQ Prompt Template: Answer the following multiple-choice question. Images are provided. The last line of your response should be strictly of the following format: '**Final Answer**\n\n\\[ \\boxed{{LETTER}} \\]' (without quotes) where LETTER is one of the options. For example, if the correct answer is A, your response should be: '**Final Answer**\n\n\\[ \\boxed{{A}} \\]'. Think step by step before answering.
Question: {question}
Options: {options}

F Representative Examples

F.1 Two Frames Example

Example - #1
Input Images:
System: respiratory system
Organ: lung
Modality: X-ray
YouTube Link: https://youtu.be/J1n2mJ00xKs
Question: Based on the radiographic findings in the first and second images, which of the following best describes the side and severity of the pneumothorax, as well as the associated mediastinal shift?
Options:
A. A large left-sided pneumothorax with mediastinal shift to the right, as seen in the first image, and a large right-sided pneumothorax with mediastinal shift to the left, as seen in the second image.
B. A small left-sided pneumothorax with no mediastinal shift in the first image, and a large right-sided pneumothorax with mediastinal shift to the left in the second image.
C.
A large right-sided pneumothorax with mediastinal shift to the left in both the first and second images.
D. A large left-sided pneumothorax with no mediastinal shift in the first image, and a large right-sided pneumothorax with mediastinal shift to the right in the second image.
E. A large left-sided pneumothorax with mediastinal shift to the left in the first image, and a large right-sided pneumothorax with mediastinal shift to the right in the second image.
F. A small right-sided pneumothorax with no mediastinal shift in the first image, and a large left-sided pneumothorax with mediastinal shift to the right in the second image.
Answer: A
Reason: The first image shows a large left-sided pneumothorax, evidenced by the additional line between the third and fourth ribs and the collapse of the left lung, with mediastinal structures displaced to the right. The second image depicts a large right-sided pneumothorax, as indicated by the significant collapse of the right lung and mediastinal shift to the left. These findings are consistent with the descriptions provided in both images.

F.2 Three Frames Example

Example - #2
Question: Based on the imaging findings across the first, second, and third images, which of the following best describes the most likely etiology and current stage of the observed pathology?
Input Images:
System: central
nervous system
Organ: brain
Modality: CT
YouTube Link: https://youtu.be/eoxKSAoGW2s
Options:
A. Acute ischemic stroke involving the right anterior cerebral artery and middle cerebral artery territories with significant mass effect and early signs of herniation.
B. Chronic ischemic changes in the right cerebral hemisphere with resolved mass effect and no evidence of acute intervention.
C. Acute hemorrhagic stroke in the right cerebral hemisphere with associated midline shift and ventricular compression.
D. Acute ischemic stroke involving the left anterior cerebral artery and middle cerebral artery territories with no significant mass effect.
E. Subacute ischemic stroke in the right cerebral hemisphere with complete resolution of midline shift following surgical intervention.
F. Acute ischemic stroke involving the right anterior cerebral artery and middle cerebral artery territories with persistent mass effect despite surgical decompression.
Answer: A
Reason: The correct answer is derived by integrating the following visual findings: (1) The first image shows diffuse parenchymal hypodensity, sulcal effacement, and poor gray-white differentiation in the right hemisphere, consistent with acute ischemic stroke. (2) The second image highlights a well-defined area of low attenuation involving the anterior and middle cerebral artery territories, with significant mass effect, ventricular compression, and midline shift, further supporting acute ischemic stroke with complications. (3) The third image confirms the persistence of poor gray-white differentiation and mass effect, despite surgical decompression, indicating an acute stage of ischemic stroke with ongoing complications. The other options are incorrect because they either misidentify the hemisphere, mischaracterize the pathology as hemorrhagic, or incorrectly describe the stage or resolution of the findings.
F.3 Four Frames Example

Example - #3
Question: Based on the radiographic findings across the described images, which combination of features most strongly supports the diagnosis of advanced intervertebral disc degeneration at L4-5 compared to L5-S1?
Input Images:
System: musculoskeletal system
Organ: spine
Modality: X-ray
YouTube Link: https://youtu.be/U57_gD8Ftkw
Options:
A. Significant disc narrowing at L4-5 to 3-4 mm, sclerosis of the end plates, and absence of a vacuum phenomenon.
B. Disc narrowing at L4-5 to 3-4 mm, intact end plates without sclerosis, and whitening of the end plates due to prolonged stress.
C. Disc narrowing at L4-5 to 3-4 mm, sclerosis of the end plates, whitening of the end plates, and absence of fluid leading to a vacuum phenomenon.
D. Thick disc at L5-S1 measuring 12-13 mm, intact end plates without sclerosis, and absence of a vacuum phenomenon.
E. Disc narrowing at L4-5 to 3-4 mm, intact end plates without sclerosis, and thick disc at L5-S1 measuring 12-13 mm.
F. Disc narrowing at L4-5 to 3-4 mm, sclerosis of the end plates, whitening of the end plates, and thick disc at L5-S1 measuring 12-13 mm.
Answer: C
Reason: The correct answer integrates multiple visual findings described across the images. The significant narrowing of the L4-5 disc space to 3-4 mm is a hallmark of advanced degeneration, as noted in all images. Sclerosis of the end plates at L4-5 is explicitly described in image_2 and image_3, indicating chronic stress.
Whitening of the end plates due to prolonged stress is mentioned in image_3 and image_4, further supporting degeneration. The vacuum phenomenon, described in image_3 and image_4, is a key feature of advanced degeneration, where gas accumulates in the disc space due to fluid loss. These findings collectively point to advanced intervertebral disc degeneration at L4-5 compared to L5-S1, which remains thicker and less degenerated.

F.4 Five Frames Example

Example - #4
Question: In the provided CT angiography images of the chest, which anatomical structure is most likely to serve as the primary landmark for orienting the scan and differentiating between the mediastinal and pulmonary vasculature regions?
Input Images:
System: respiratory system
Organ: lung
Modality: CT
YouTube Link: https://youtu.be/rmX9AeGYscU
Options:
A. Aortic arch
B. Left atrium
C. Right pulmonary artery
D. Superior vena cava
E. Descending thoracic aorta
F. Main pulmonary artery
Answer: A
Reason: The aortic arch is explicitly described across all images as the key landmark for orientation in the CT angiography scans. It is a readily identifiable structure that helps in distinguishing the mediastinal anatomy from the pulmonary vasculature. Other options, such as the left atrium or right pulmonary artery, are part of the chest anatomy but are not emphasized as primary orientation landmarks in the described images.
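The evaluation prompts in Appendix E.4 pin each model's final line to a fixed answer format. A minimal sketch of how the answer letter could be extracted for scoring; this is not part of the benchmark's released code, and the function name, regexes, and sample responses are illustrative assumptions:

```python
import re
from typing import Optional

def extract_answer(response: str) -> Optional[str]:
    """Pull the final answer letter from a model response, trying the
    answer formats required by the E.4 prompt templates in turn."""
    patterns = [
        r"Answer:\s*\$?([A-F])\b",      # GPT/Claude/Qwen style: 'Answer: A'
        r"\\boxed\{+\s*([A-F])\s*\}+",  # Gemini/QVQ style: '\boxed{A}'
    ]
    for pat in patterns:
        matches = re.findall(pat, response)
        if matches:
            return matches[-1]  # last occurrence corresponds to the final line
    return None

# Illustrative (hypothetical) responses:
print(extract_answer("Step 1... Step 2...\nAnswer: A"))    # → A
print(extract_answer("The final answer is $\\boxed{C}$"))  # → C
```

Taking the last match rather than the first makes the parser robust to answer letters mentioned during the step-by-step reasoning that the prompts request.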
BP-Seg: A graphical model approach to unsupervised and non-contiguous text segmentation using belief propagation

Fengyi Li, Kayhan Behdin, Natesh Pillai, Xiaofeng Wang, Zhipeng Wang, Ercan Yildiz
LinkedIn Corporation
{fenli, kbehdin, napillai, xiaofwan, zhipwang, eyildiz}@linkedin.com

Abstract

Text segmentation based on the semantic meaning of sentences is a fundamental task with broad utility in many downstream applications. In this paper, we propose a graphical model-based unsupervised learning approach, named BP-Seg, for efficient text segmentation. Our method not only considers local coherence, capturing the intuition that adjacent sentences are often more related, but also effectively groups sentences that are distant in the text yet semantically similar. This is achieved through belief propagation on the carefully constructed graphical models. Experimental results on both an illustrative example and a dataset with long-form documents demonstrate that our method performs favorably compared to competing approaches.

1 Introduction

Segmenting text into semantically coherent segments has been a long-studied problem in the field of natural language processing (Pak and Teh, 2018; Badjatiya et al., 2018). The applications of text segmentation range from information retrieval (Yu et al., 2023; Huang et al., 2003) and document summarization (Cho et al., 2022; Miculicich and Han, 2023) to disclosure analysis (Wang et al., 2018; Song, 2021) and optimizing prompts for large language models (LLMs) by extracting the most relevant parts (Lewis and Brown, 2023). Traditional methods, whether supervised (Koshorek et al., 2018; Badjatiya et al., 2018; Glavas and Somasundaran, 2020) or unsupervised (Hearst, 1997; Glavaš et al., 2016; Barakat et al., 2020), mainly focus on contiguous or sequential text segmentation.
The goal is to cluster consecutive sentences in a way that ensures those within the same group are semantically more similar to each other than to sentences in different groups. For example, if a text consists of five sentences labeled {1, 2, 3, 4, 5}, traditional methods might segment them into groups such as {1, 2} and {3, 4, 5}. However, in practice, it is sometimes the case that {1, 5} are more semantically similar and should form one group, while {2, 3, 4} form another. On the other hand, some frameworks for text segmentation disregard the adjacency relationships between sentences and the overall structure of the text. For example, methods such as k-means (Lloyd, 1982) treat each sentence (or its embedding) as an isolated data instance, without considering that a sentence is often more likely to be semantically connected to its adjacent sentences than to those that are farther apart. To the best of our knowledge, however, there is a lack of literature on semantic classification methods that take into account both adjacent and distant (non-adjacent) sentences. One example of such an application is prompt pruning for LLMs. When users write prompts, the sentences typically follow a logical flow, but some may be redundant. Splitting a prompt into groups, potentially non-sequential, while still accounting for the semantic coherence of contiguous sentences can facilitate downstream tasks such as prompt pruning, ultimately improving both efficiency and relevance in LLM interactions (Gao et al., 2024). In this work, we propose a new framework for text segmentation that accounts for the fact that adjacent sentences are
https://arxiv.org/abs/2505.16965v1
typically more related, while also enabling the grouping of non-contiguous sentences that are semantically similar. To achieve this, we first embed sentences into vector representations using sentence embeddings (Reimers and Gurevych, 2019), so that semantically similar sentences are mapped closer together in the embedding space. This allows us to form a graph from the text, where the nodes represent the embedded sentences and edges encode the strength of their semantic relationships. We then apply Belief Propagation (BP, Pearl (1982)), an inference algorithm used in graphical models, to generate clusters. To the best of our knowledge, this work presents the first successful application of BP to text segmentation that accounts for the semantic meaning of sentences in both contiguous and non-contiguous settings.

2 Method

Our algorithm, named BP-Seg, consists of three main steps: sentence embedding, constructing the graphical model, and running BP. We discuss each of these steps in detail below.

2.1 Sentence embeddings

Given a text, represented as an ordered collection of sentences {S_i}_{i=1}^n, we can obtain their numerical vector representations using sentence embeddings. This can be efficiently achieved with libraries such as transformers, sentence-transformers, or tensorflow_hub. Once encoded, semantically similar sentences are expected to have higher cosine similarity scores, indicating their closeness in the embedding space. We use R_i to denote the sentence embedding of S_i.

2.2 Constructing the graphical model

The text segmentation process begins with the initialization of a set of cluster representatives, denoted as C_j, which serve as the reference representatives for segment assignments. In practice, these representatives are randomly selected from the set of input sentence embeddings.
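As a concrete illustration of the embedding step (Section 2.1), a minimal sketch follows. A real BP-Seg pipeline would call a pretrained sentence encoder (e.g., from sentence-transformers); here a toy bag-of-words count vector stands in so the snippet is self-contained, and the helper names are illustrative assumptions:

```python
import math

def build_vocab(sentences):
    """Toy vocabulary: all lowercased tokens, punctuation stripped."""
    tokens = {t for s in sentences
              for t in s.lower().replace("?", "").replace(".", "").split()}
    return sorted(tokens)

def embed(sentence, vocab):
    """Toy stand-in for a sentence embedding: token-count vector over vocab."""
    toks = sentence.lower().replace("?", "").replace(".", "").split()
    return [float(toks.count(t)) for t in vocab]

def cosine(u, v):
    """Cosine similarity, the sim(·,·) used throughout the paper."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

sentences = ["US Open is a tennis tournament.",
             "Who is going to win the US Open?",
             "The rain poured down heavily."]
vocab = build_vocab(sentences)
R = [embed(s, vocab) for s in sentences]
# Sentences sharing vocabulary score higher:
print(cosine(R[0], R[1]) > cosine(R[0], R[2]))  # → True
```

With a real encoder, the same `cosine(R[i], R[j])` comparison would capture semantic rather than purely lexical similarity.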
Given a text with n sentences, we define k clusters and randomly choose k sentence embeddings as the initial representatives, C = {C_1, C_2, . . . , C_k}, C_j ∈ {R_1, R_2, . . . , R_n}, with C_i ≠ C_j. Let x = {x_1, x_2, . . . , x_n} be the segment assignments, where each x_i represents the segment label assigned to sentence S_i. Therefore, each x_i takes a discrete value from the set {1, 2, . . . , k}, where k is the total number of segments.

Let p_i(x_i) be the probability that the i-th segment is assigned the label x_i. If we assume the joint distribution factorizes, we can write

p(x_1, . . . , x_n) = (1/Z) ∏_f ψ_f(x_f) ∏_g ψ_g(x_g),

where ψ_f represents unary factors, ψ_g represents pairwise factors, and Z is the normalizing constant. To be precise, ψ_f(x_f) can be written in the form ψ_i(x_i), encoding how strongly the i-th segment prefers the cluster C_{x_i} ∈ C. Similarly, the pairwise factors ψ_g(x_g), written in the form ψ_{i,j}(x_i, x_j), encode how compatible it is for the i-th segment to be assigned the label x_i and the j-th segment the label x_j, where x_i, x_j ∈ {1, 2, . . . , k}. If ψ_{i,j}(x_i, x_j) is large, it means that assigning the i-th segment the label x_i and the j-th segment the label x_j fits together well. In practice, one has the freedom to choose ψ_f and ψ_g. In this work, we set the node and edge factors as follows:

ψ_i(x_i) = exp(sim(R_i, C_{x_i})),    (1)

and
ψ_{i,j}(x_i, x_j) = { 1 if x_i = x_j; exp(−ϕ_{i,j}) otherwise },    (2)

where

ϕ_{i,j} = (λ/2) (−sim(R_i, R_j) + 1) exp(−|i − j|^2 / σ).    (3)

Here, sim(·, ·) denotes the cosine similarity between two embeddings. A higher value of ψ_i(x_i) indicates a greater likelihood that sentence S_i belongs to segment x_i. Note also that ψ_{i,j}(x_i, x_j) ≤ 1, with equality holding if and only if x_i = x_j, i.e., when the two sentences are assigned the same label. A larger ϕ_{i,j} leads to a smaller ψ_{i,j}, indicating that semantically dissimilar sentences possess weaker connections. In this work, we adopt these specific forms of ψ_i and ψ_{i,j} to encode the semantic relationships between sentences and their assignments; in practice, however, one may choose alternative or domain-specific formulations, provided they are compatible with the desired inference algorithm.

2.3 BP (Sum-Product) for text segmentation

After assigning the nodes and edges proper weights, we can start implementing the BP algorithm. The goal of BP is to maximize the marginal probability of segment assignments by iteratively exchanging messages between sentences and updating their segment beliefs. A message from i to j represents node i's belief about the possible values that node j takes, considering all evidence except what comes from node j itself. At each iteration, every node (sentence embedding) R_i sends a message to its neighboring node R_j, conveying how strongly R_j is associated with a given segment. These messages incorporate both the unary factor, which measures the semantic similarity of a sentence to its assigned segment representative, and the pairwise factor, which enforces consistency between related sentences. Before the process begins, the messages m_{i→j}(x_j) must be initialized. The simplest approach is to set all messages to be uniform, i.e., m_{i→j}(x_j) = 1/k for all i, j. This assumes no prior preference for any segment, allowing BP to refine the segmentation purely based on updates.
The message from R_i to R_j at iteration t is updated as:

m^{(t)}_{i→j}(x_j) = Σ_{x_i} [ ψ_i(x_i) ψ_{i,j}(x_i, x_j) ∏_{k ∈ {1,...,n}\{j}} m^{(t−1)}_{k→i}(x_i) ],    (4)

where ψ_i is the unary potential and ψ_{i,j} is the pairwise potential. Each node updates its belief about its segment assignment by accumulating incoming messages from all neighboring sentences:

b_i(x_i) ∝ ψ_i(x_i) ∏_{j ∈ {1,...,n}\{i}} m_{j→i}(x_i).    (5)

This iterative process continues until convergence, where the segment labels stabilize. The final segmentation is determined by selecting the segment with the highest belief for each node,

x*_i = argmax_{x_i ∈ {1,...,k}} b_i(x_i).    (6)

We summarize our proposed algorithm in Algorithm 1. More analysis of BP can be found in Murphy et al. (2013); Yedidia et al. (2003).

Algorithm 1 BP-Seg
1:  Input: Sentence embeddings {R_1, . . . , R_n}
2:  Output: Segment assignments {x*_1, . . . , x*_n}
3:  Initialization: Initialize k segment representatives {C_1, . . . , C_k}. Initialize node and edge factors following (1) and (2), and initialize all messages m^{(0)}_{i→j}(x_j) = 1/k.
4:  for t = 1 to T do
5:      for each embedding R_i do
6:          for each R_j, j ∈ {1, . . . , n} \ {i} do
7:              Update messages using (4).
8:          end for
9:      end for
10: end for
11: for each embedding R_i do
12:     Update belief using (5).
13: end for
14: Final Segmentation:
15: for each embedding R_i do
16:     Assign segment with highest belief using (6).
17: end for
18: return Segment assignments {x*_1, . . . , x*_n}

3 Related work

A prior work has explored a variation of using graph-based models for text segmentation. GraphSeg, proposed by Glavaš et al. (2016), for example, also employs unsupervised learning for text segmentation within a graph-based framework. However, their primary objective is to produce contiguous segmentations, whereas our method allows for a non-contiguous segmentation that accounts for both neighboring and distant sentences. Moreover, their algorithm requires additional information, such as each word's information content based on its relative frequency, whereas BP-Seg relies solely on embeddings and no external data. Moreover, after encoding sentences into embeddings and computing cosine similarities, their approach discovers segmentations by finding maximal cliques, which is fundamentally different from our probabilistic strategy, in which we seek an assignment that maximizes the marginal distribution. One could in principle apply k-means to group sentence embeddings; however, such a method is entirely context-agnostic and considers only pairwise embedding similarities.

4 Experiments

4.1 Illustrative Example

To demonstrate the effectiveness of our proposed segmentation method, we compare its performance against GraphSeg (implemented using the code available here¹), k-means, and a large language model (LLM). The input text used for segmentation, generated by GPT-4o with additional human-written content, is as follows:

The sun was shining brightly. It was a beautiful morning. I decided to go for a walk. Suddenly, dark clouds appeared. I'll play tennis tomorrow. What are you doing? Thunder rumbled in the distance. The rain poured down heavily. People ran for shelter. US Open is a tennis tournament. I am here working on my project. The sun came out again. Who is going to win the US Open?

Each sentence is treated as an individual text unit for segmentation.
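The message-passing loop of Algorithm 1, i.e., equations (1)–(6), can be sketched end to end. This is an illustrative reimplementation under assumed details (synchronous updates, a fixed number of iterations T, per-message normalization for numerical stability, pure-Python cosine similarity), not the authors' released code:

```python
import math
import random

def sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def bp_seg(R, k, T=10, lam=0.3, sigma=10.0, seed=0):
    """BP-Seg sketch: returns one segment label in {0, ..., k-1} per sentence."""
    n = len(R)
    rng = random.Random(seed)
    C = [R[i] for i in rng.sample(range(n), k)]  # random representatives
    # Eq. (1): unary factors psi[i][x]
    psi = [[math.exp(sim(R[i], C[x])) for x in range(k)] for i in range(n)]

    def edge(i, j, xi, xj):  # eqs. (2)-(3)
        if xi == xj:
            return 1.0
        phi = (lam / 2) * (1 - sim(R[i], R[j])) * math.exp(-abs(i - j) ** 2 / sigma)
        return math.exp(-phi)

    # Messages m[i][j][xj], initialized uniformly to 1/k
    m = [[[1.0 / k] * k for _ in range(n)] for _ in range(n)]
    for _ in range(T):
        new = [[[0.0] * k for _ in range(n)] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                for xj in range(k):
                    total = 0.0
                    for xi in range(k):
                        prod = 1.0
                        for kk in range(n):  # product term of eq. (4)
                            if kk != j and kk != i:
                                prod *= m[kk][i][xi]
                        total += psi[i][xi] * edge(i, j, xi, xj) * prod
                    new[i][j][xj] = total
                z = sum(new[i][j]) or 1.0  # normalize for numerical stability
                new[i][j] = [v / z for v in new[i][j]]
        m = new  # synchronous update

    labels = []
    for i in range(n):  # eqs. (5)-(6): beliefs, then argmax
        b = [psi[i][x] for x in range(k)]
        for j in range(n):
            if j != i:
                b = [b[x] * m[j][i][x] for x in range(k)]
        labels.append(max(range(k), key=lambda x: b[x]))
    return labels
```

Because the graph is fully connected, each sweep costs O(n² k²) message updates; for long documents one would prune weak edges or use the faster message-passing variant mentioned in Section 4.2.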
We analyze the segmentation results obtained using our proposed approach BP-Seg, GraphSeg, k-means, and an LLM (GPT-4o). To evaluate the results, we verify (1) whether the algorithm produces more than one cluster, and (2) whether the three tennis-related sentences colored in red appear together in the same cluster. The detailed experimental setup and results with various parameters are presented in Appendix A.1.

For BP-Seg and k-means, we set the number of segments to k = 2, 3, ..., 7. For the LLM, we explicitly prompt it to generate k = 2, 3, ..., 7 segments. The prompt can be read in A.1.4. As observed, all methods generate more than one cluster. Furthermore, k-means successfully groups sentences based on thematic coherence across all values of k. Both BP-Seg and the LLM successfully group the tennis-related sentences in the same cluster in 5 out of the 6 tested values of k. For GraphSeg, we set the minimum number of sentences per segment to 1 to allow maximum flexibility. However, despite trying various thresholds τ, GraphSeg fails to cluster thematically related sentences effectively, as it is designed for contiguous segmentation.

4.2 Choi dataset

In this example, we implement our approach, BP-Seg, along with GraphSeg and k-means on the Choi dataset (Choi, 2000). The performance of the LLM is not included in this case, as its output does not include every sentence from the original text. Traditionally, for contiguous text segmentation, two evaluation metrics are commonly reported: P_k (Beeferman et al., 1999) and WindowDiff (WD) (Pevzner and Hearst, 2002). The P_k metric checks whether the boundary status (i.e., whether two sentences within a fixed-size window belong to the same segment) matches between the ground truth and the prediction. WD, on the other hand, measures whether the number of boundaries within the window is consistent with the ground truth. However, both metrics assume contiguity and are not suitable for evaluating non-contiguous text segmentation. Therefore, we report Adjusted Rand Index (ARI) (Hubert and Arabie, 1985; Wagner and Wagner, 2007) and Normalized Mutual Information (NMI) (Kvålseth, 2017), which are appropriate for clustering-based evaluations. This adjustment accounts for the fact that the outputs of BP-Seg and k-means may result in non-contiguous segmentations, even though the ground truth segmentation is contiguous. Additionally, for efficiency, we use a variant of BP-Seg that is also based on message passing but offers faster computation. Please refer to Algorithm 2 for more details.

¹ https://github.com/Dobatymo/graphseg-python

           3–5        6–8        9–11       3–11       3–15       12–15
           ARI  NMI   ARI  NMI   ARI  NMI   ARI  NMI   ARI  NMI   ARI  NMI
BP-Seg     0.57 0.83  0.75 0.89  0.73 0.87  0.72 0.87  0.64 0.84  0.62 0.83
GraphSeg   0.65 0.87  0.58 0.83  0.52 0.81  0.55 0.83  0.46 0.79  0.40 0.77
k-means    0.53 0.84  0.52 0.79  0.52 0.76  0.50 0.79  0.45 0.74  0.45 0.70

Table 1: Average performance on the Choi dataset measured using ARI and NMI across different subsets. Higher values indicate better performance. Note that a random segmentation method achieves a 0 in ARI.
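ARI, the primary clustering metric reported here, can be computed directly from pair counts over the contingency table of the two labelings. A self-contained sketch, assuming Python; in real experiments one would typically call sklearn.metrics.adjusted_rand_score instead:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """ARI = (Index - Expected) / (Max - Expected), from the contingency
    table of the two labelings. Invariant to label permutations."""
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    a = Counter(labels_true)   # row sums: true-cluster sizes
    b = Counter(labels_pred)   # column sums: predicted-cluster sizes
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate case, e.g. one cluster on both sides
        return 1.0
    return (sum_ij - expected) / (max_index - expected)

# Relabeling the same partition does not change the score:
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # → 1.0
```

This permutation invariance is exactly why ARI (unlike P_k or WD) remains meaningful when a method such as BP-Seg or k-means emits non-contiguous segments.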
The Adjusted Rand Index (ARI) ranges from −1 to 1, where 1 indicates a perfect match between predicted and true clusters, 0 corresponds to random labeling, and negative values indicate performance worse than random. The Normalized Mutual Information (NMI) ranges from 0 to 1, with 1 representing perfect alignment and 0 indicating statistical independence. Table 1 presents the average segmentation performance of the three methods on the Choi dataset. As shown, BP-Seg outperforms the other two methods across all subsets, except in cases where each segment contains very few sentences (e.g., 3–5). The standard deviation of the performance can be read in Table 2.

5 Conclusion

We presented BP-Seg, an efficient unsupervised approach for text segmentation using belief propagation. Our method effectively balances local contextual coherence with global semantic similarity, enabling more meaningful and flexible segmentation of text. Although designed for non-contiguous segmentation, experimental results show that BP-Seg outperforms several competitive methods on the standard contiguous segmentation task, achieving strong performance on metrics such as ARI and NMI. In future work, we aim to evaluate the effectiveness of our approach in real-world downstream applications, including LLM prompt pruning, information retrieval, and question answering.

Limitations

In this study, all examples are in English. The example in Section 4.1 was generated by GPT-4o with additional human-written content, and the Choi dataset in Section 4.2 is also synthetic. As a result, these
examples may not accurately reflect real-world scenarios, and our evaluations are limited to these two cases. Nevertheless, we believe the insights from our findings will inspire further research in text segmentation and benefit a wide range of related applications.

Ethics Statement

In this paper, we utilize data from two sources: one generated by ChatGPT with additional human-written sentences, and another from a publicly available dataset. We anticipate that our method can offer benefits such as enhanced information retrieval and reduced token usage in LLM prompts, thereby reaching a broader audience.

References

Pinkesh Badjatiya, Litton J. Kurisinkel, Manish Gupta, and Vasudeva Varma. 2018. Attention-based neural text segmentation. ArXiv, abs/1808.09935.

Berat Kurar Barakat, Ahmad Droby, Reem Alaasam, Boraq Madi, Irina Rabaev, Raed Shammes, and Jihad El-Sana. 2020. Unsupervised deep learning for text line segmentation. 2020 25th International Conference on Pattern Recognition (ICPR), pages 2304–2311.

Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34:177–210.

Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Fei Liu, and Dong Yu. 2022. Toward unifying text segmentation and long document summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 106–118, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Freddy Y. Y. Choi. 2000. Advances in domain independent linear text segmentation. arXiv preprint cs/0003083.

Jun Gao, Ziqiang Cao, and Wenjie Li. 2024. SelfCP: Compressing over-limit prompt via the frozen large language model itself. Information Processing & Management, 61:103873.

Goran Glavaš, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation using semantic relatedness graphs.
In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 125–130, Berlin, Germany. Association for Computational Linguistics.

Goran Glavaš and Swapna Somasundaran. 2020. Two-level transformer and auxiliary coherence modeling for improved text segmentation. ArXiv, abs/2001.00891.

Marti A. Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33–64.

Xiangji Huang, Fuchun Peng, Dale Schuurmans, Nick Cercone, and Stephen Robertson. 2003. Applying machine learning to text segmentation for information retrieval. Information Retrieval, 6:333–362.

Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of Classification, 2:193–218.

Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. ArXiv, abs/1803.09337.

Tarald O. Kvålseth. 2017. On normalized mutual information: Measure derivations and properties. Entropy, 19(11):631.

Mike Lewis and Tom Brown. 2023. Retrieval-augmented generation for large language models. arXiv preprint arXiv:2312.10997.

Stuart P. Lloyd. 1982. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137.

Lesly Miculicich and Benjamin Han. 2023. Document summarization with text segmentation. ArXiv, abs/2301.08817.

Kevin Murphy, Yair Weiss, and Michael I. Jordan. 2013. Loopy belief propagation for approximate inference: An empirical study. arXiv preprint arXiv:1301.6725.

Irina Pak and Phoey Lee Teh. 2018. Text segmentation techniques: A critical review.

Judea Pearl. 1982. Reverend Bayes on inference engines: A distributed hierarchical approach. Probabilistic and Causal Inference.

Lev Pevzner and Marti A. Hearst. 2002. A
critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):19–36.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. pages 3973–3983.

Shiwon Song. 2021. The informational value of segment data disaggregated by underlying industry: Evidence from the textual features of business descriptions. The Accounting Review, 96(6):361–396.

Silke Wagner and Dorothea Wagner. 2007. Comparing clusterings: An overview.

Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018. Toward fast and accurate neural discourse segmentation. In Conference on Empirical Methods in Natural Language Processing.

Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. 2003. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, pages 236–239.

Hai Yu, Chong Deng, Qinglin Zhang, Jiaqing Liu, Qian Chen, and Wen Wang. 2023. Improving long document topic segmentation models with enhanced coherence modeling. ArXiv, abs/2310.11772.

A Appendix

A.1 Illustrative example

A.1.1 Results using BP-Seg

Set λ = 0.3, σ = 10 in (3).

k = 2 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. I decided to go for a walk. Suddenly, dark clouds appeared. I’ll play tennis tomorrow. What are you doing? People ran for shelter. US Open is a tennis tournament. I am here working on my project. The sun came out again. Who is going to win the US Open?
[Segment 2]: Thunder rumbled in the distance. The rain poured down heavily.

k = 3 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. I decided to go for a walk. Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily. The sun came out again.
[Segment 2]: I’ll play tennis tomorrow. What are you doing? US Open is a tennis tournament. I am here working on my project. Who is going to win the US Open?
[Segment 3]: People ran for shelter.

k = 4 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. What are you doing? People ran for shelter. I am here working on my project. The sun came out again.
[Segment 2]: I decided to go for a walk. Suddenly, dark clouds appeared. The rain poured down heavily.
[Segment 3]: Thunder rumbled in the distance.
[Segment 4]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?

k = 5 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. Suddenly, dark clouds appeared. People ran for shelter. US Open is a tennis tournament. The sun came out again.
[Segment 2]: I decided to go for a walk.
[Segment 3]: I’ll play tennis tomorrow. What are you doing? Who is going to win the US Open?
[Segment 4]: Thunder rumbled in the distance. The rain poured down heavily.
[Segment 5]: I am here working on my project.

k = 6 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. The sun came out again.
[Segment 2]: I decided to go for a walk.
[Segment 3]:
I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?
[Segment 4]: Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily.
[Segment 5]: People ran for shelter.
[Segment 6]: What are you doing? I am here working on my project.

k = 7 segments:
[Segment 1]: The sun was shining brightly. People ran for shelter.
[Segment 2]: It was a beautiful morning. I decided to go for a walk.
[Segment 3]: Suddenly, dark clouds appeared. Thunder rumbled in the distance.
[Segment 4]: What are you doing? I am here working on my project.
[Segment 5]: The rain poured down heavily.
[Segment 6]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?
[Segment 7]: The sun came out again.

A.1.2 Results using GraphSeg

Threshold τ = 0.1, 0.3, 0.5, 0.7, 0.9 and minimal segment size n = 1:
[Segment 1]: The sun was shining brightly. It was a beautiful morning.
[Segment 2]: I decided to go for a walk. Suddenly, dark clouds appeared. What are you doing?
[Segment 3]: Thunder rumbled in the distance. The rain poured down heavily.
[Segment 4]: People ran for shelter. US Open is a tennis tournament.
[Segment 5]: I am here working on my project. The sun came out again. Who is going to win the US Open?

A.1.3 Results using k-means

k = 2 segments:
[Segment 1]: I’ll play tennis tomorrow. What are you doing? US Open is a tennis tournament. I am here working on my project. Who is going to win the US Open?
[Segment 2]: The sun was shining brightly. It was a beautiful morning. I decided to go for a walk. Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily. People ran for shelter. The sun came out again.

k = 3 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily. People ran for shelter. The sun came out again.
[Segment 2]: I’ll play tennis tomorrow.
US Open is a tennis tournament. Who is going to win the US Open?
[Segment 3]: I decided to go for a walk. What are you doing? I am here working on my project.

k = 4 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily. The sun came out again.
[Segment 2]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?
[Segment 3]: I decided to go for a walk. What are you doing? I am here working on my project.
[Segment 4]: People ran for shelter.

k = 5 segments:
[Segment 1]: I decided to go for a walk. People ran for shelter.
[Segment 2]: The sun was shining brightly. Suddenly, dark clouds appeared. Thunder rumbled in the distance. The sun came out again.
[Segment 3]: What are you doing? I am here working on my project.
[Segment 4]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?
[Segment 5]: It was a beautiful morning. The rain poured down heavily.

k = 6 segments:
[Segment 1]: Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily.
[Segment 2]: What are you doing? I am here working on my project.
[Segment 3]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?
[Segment 4]: The sun was shining brightly. The sun came out again.
[Segment 5]: People ran for shelter.
[Segment 6]: It was a beautiful morning. I decided to go for a walk.

k = 7 segments:
[Segment 1]: I am here working on my project.
[Segment 2]: The sun was shining brightly. It was a beautiful morning. The sun came out again.
[Segment 3]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?
[Segment 4]: I decided to go for a walk. What are you doing?
[Segment 5]: People ran for shelter.
[Segment 6]: Suddenly, dark clouds appeared. Thunder rumbled in the distance.
[Segment 7]: The rain poured down heavily.

A.1.4 Results using LLM (GPT-4o)

Prompt: Please generate {number_seg} segmentation for the following text according to their semantic meaning. Adjacent sentences do not have to be in the same segment. The sun was shining brightly. It was a beautiful morning. I decided to go for a walk. Suddenly, dark clouds appeared. I’ll play tennis tomorrow. What are you doing? Thunder rumbled in the distance. The rain poured down heavily. People ran for shelter. US Open is a tennis tournament. I am here working on my project. The sun came out again. Who is going to win the US Open? Return the output in the following format: [Segment 1]: The sun was shining brightly. It was a beautiful morning. [Segment 2]: I decided to go for a walk. Suddenly, dark clouds appeared. ...

k = 2 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning.
Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily. People ran for shelter. The sun came out again.
[Segment 2]: I decided to go for a walk. I’ll play tennis tomorrow. What are you doing? US Open is a tennis tournament. I am here working on my project. Who is going to win the US Open?

k = 3 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. The sun came out again.
[Segment 2]: I decided to go for a walk. Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily. People ran for shelter.
[Segment 3]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?

k = 4 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. The sun came out again.
[Segment 2]: I decided to go for a walk. Suddenly, dark clouds appeared. Thunder
rumbled in the distance. The rain poured down heavily. People ran for shelter.
[Segment 3]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?
[Segment 4]: What are you doing? I am here working on my project.

k = 5 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning. The sun came out again.
[Segment 2]: I decided to go for a walk. Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily. People ran for shelter.
[Segment 3]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?
[Segment 4]: What are you doing?
[Segment 5]: I am here working on my project.

k = 6 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning.
[Segment 2]: I decided to go for a walk. Suddenly, dark clouds appeared. Thunder rumbled in the distance. The rain poured down heavily. People ran for shelter. The sun came out again.
[Segment 3]: I’ll play tennis tomorrow.
[Segment 4]: What are you doing?
[Segment 5]: US Open is a tennis tournament. Who is going to win the US Open?
[Segment 6]: I am here working on my project.

k = 7 segments:
[Segment 1]: The sun was shining brightly. It was a beautiful morning.
[Segment 2]: I decided to go for a walk. Suddenly, dark clouds appeared.
[Segment 3]: Thunder rumbled in the distance. The rain poured down heavily. People ran for shelter.
[Segment 4]: The sun came out again.
[Segment 5]: I’ll play tennis tomorrow. US Open is a tennis tournament. Who is going to win the US Open?
[Segment 6]: What are you doing?
[Segment 7]: I am here working on my project.

A.2 Choi dataset

Here m_i(x_i) can be viewed as a running preference or partial belief for node i in label x_i. We also set ψ_i(x_i) = sim(R_i, C_{x_i}) and ψ_{i,j} = (λ/2) (1 − sim(R_i, R_j)) exp(−|i − j|²/σ).
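These two factors can be written down directly; below is a minimal stdlib-only sketch (the function and variable names are illustrative, not the authors' code, with the λ = 0.3, σ = 10 setting from Section 4.1 as defaults):

```python
from math import exp, sqrt

def cos_sim(u, v):
    """Cosine similarity between two embedding vectors."""
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return sum(a * b for a, b in zip(u, v)) / (norm_u * norm_v)

def node_factor(R, C, i, x):
    """psi_i(x_i) = sim(R_i, C_{x_i}): affinity of sentence i to representative x."""
    return cos_sim(R[i], C[x])

def edge_factor(R, i, j, lam=0.3, sigma=10.0):
    """psi_{i,j} = (lam/2) * (1 - sim(R_i, R_j)) * exp(-|i-j|^2 / sigma):
    near zero for semantically similar sentences, damped with positional distance."""
    return (lam / 2.0) * (1.0 - cos_sim(R[i], R[j])) * exp(-abs(i - j) ** 2 / sigma)

R = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]   # toy sentence embeddings (assumed)
C = [[1.0, 0.0], [0.0, 1.0]]               # toy segment representatives (assumed)
```

Identical embeddings yield an edge factor of zero, so similar sentences incur no coupling penalty, while dissimilar but nearby sentences are coupled most strongly.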
For comparison, we set the number of initial clusters in BP-Seg to be equal to the number of sentences, although many clusters may become empty after the algorithm terminates. We fix the number of iterations to T = 5, and choose σ = 10 and λ = 300 to account for the length of the text. For GraphSeg, we set the threshold parameter to τ = 0.2 and the minimum segment size to n = 1 to allow maximum flexibility. For the k-means baseline, we cap the number of clusters k at 20 and rely on the default number of iterations in sklearn.cluster.KMeans.

Algorithm 2 Fast BP-Seg
1: Input: Sentence embeddings {R_1, ..., R_n}
2: Output: Segment assignment {x*_1, ..., x*_n}
3: Initialization: Initialize k segment representatives {C_1, ..., C_k}. Initialize node and edge factors following (1) and (2), and initialize all messages m^(0)_i(x_i) = 1/k.
4: for t = 1 to T do
5:   for each embedding R_i do
6:     for x_i ∈ {1, 2, ..., k} do
7:       m^(t)_i(x_i) = ψ_i(x_i) + Σ_{j=1}^{n} ψ_{i,j} m^(t−1)_j(x_i)
8:     end for
9:   end for
10: end for
11: for each embedding R_i do
12:   Update belief
using b_i(x_i) = ψ_i(x_i) + m^(T)_i(x_i)
13: end for
14: Final Segmentation:
15: for each embedding R_i do
16:   Assign segment using x*_i = arg max_{x_i ∈ {1, ..., k}} b_i(x_i).
17: end for
18: return Segment assignments {x*_1, ..., x*_n}

             3–5         6–8         9–11        3–11        3–15        12–15
          ARI   NMI   ARI   NMI   ARI   NMI   ARI   NMI   ARI   NMI   ARI   NMI
BP-Seg    0.10  0.04  0.08  0.04  0.08  0.04  0.09  0.04  0.08  0.03  0.08  0.03
GraphSeg  0.11  0.04  0.07  0.03  0.06  0.02  0.08  0.03  0.06  0.02  0.07  0.02
k-means   0.09  0.03  0.08  0.04  0.08  0.05  0.08  0.04  0.07  0.04  0.07  0.05

Table 2: Standard deviation on the Choi dataset measured using ARI and NMI across different subsets.
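Algorithm 2 can be sketched end to end in a few lines of stdlib-only Python. This is an illustrative reimplementation, not the authors' code, and the toy embeddings below are assumptions chosen so that the first two sentences and the last two sentences form two clear segments:

```python
from math import exp, sqrt

def cos_sim(u, v):
    """Cosine similarity between two embedding vectors."""
    return sum(a * b for a, b in zip(u, v)) / (
        sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def fast_bp_seg(R, C, lam=300.0, sigma=10.0, T=5):
    """Sketch of Algorithm 2 (Fast BP-Seg). R: sentence embeddings,
    C: initial segment representatives; returns one segment id per sentence."""
    n, k = len(R), len(C)
    # Node factors psi_i(x) and edge factors psi_{i,j} following (1) and (2).
    psi_node = [[cos_sim(R[i], C[x]) for x in range(k)] for i in range(n)]
    psi_edge = [[(lam / 2) * (1 - cos_sim(R[i], R[j])) * exp(-abs(i - j) ** 2 / sigma)
                 for j in range(n)] for i in range(n)]
    m = [[1.0 / k] * k for _ in range(n)]          # m_i^(0)(x) = 1/k
    for _ in range(T):                              # message passing, lines 4-10
        m = [[psi_node[i][x] + sum(psi_edge[i][j] * m[j][x] for j in range(n))
              for x in range(k)] for i in range(n)]
    # Beliefs b_i(x) = psi_i(x) + m_i^(T)(x); assign the argmax label.
    beliefs = [[psi_node[i][x] + m[i][x] for x in range(k)] for i in range(n)]
    return [max(range(k), key=lambda x: b[x]) for b in beliefs]

R = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]   # toy embeddings (assumed)
C = [[1.0, 0.0], [0.0, 1.0]]                            # toy representatives (assumed)
assignment = fast_bp_seg(R, C, lam=0.3)                 # lam=0.3 as in Section 4.1
```

With the small λ = 0.3 used for the illustrative example, the node factors dominate and the toy run assigns the first two embeddings to one segment and the last two to the other.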
arXiv:2505.16967v1 [cs.IR] 22 May 2025

Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval

Nandan Thakur* Crystina Zhang* Xueguang Ma Jimmy Lin
David R. Cheriton School of Computer Science, University of Waterloo, Canada
Code: https://github.com/castorini/rlhn — Dataset: https://huggingface.co/rlhn

Abstract

Training robust retrieval and reranker models typically relies on large-scale retrieval datasets; for example, the BGE collection contains 1.6 million query-passage pairs sourced from various data sources. However, we find that certain datasets can negatively impact model effectiveness — pruning 8 out of 15 datasets from the BGE collection reduces the training set size by 2.35× and increases nDCG@10 on BEIR by 1.0 point. This motivates a deeper examination of training data quality, with a particular focus on "false negatives", where relevant passages are incorrectly labeled as irrelevant. We propose a simple, cost-effective approach using cascading LLM prompts to identify and relabel hard negatives. Experimental results show that relabeling false negatives with true positives improves both E5 (base) and Qwen2.5-7B retrieval models by 0.7–1.4 nDCG@10 on BEIR and by 1.7–1.8 nDCG@10 on zero-shot AIR-BENCH evaluation. Similar gains are observed for rerankers fine-tuned on the relabeled data, such as Qwen2.5-3B on BEIR. The reliability of the cascading design is further supported by human annotation results, where we find judgment by GPT-4o shows much higher agreement with humans than GPT-4o-mini.

1 Introduction

Modern-day retrievers and rerankers are data-hungry, relying on large and high-quality training datasets to accurately retrieve or rerank across challenging domains (Thakur et al., 2021; Muennighoff et al., 2023; Yang et al., 2024b; SU et al., 2025).
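Retrieval quality throughout the paper is reported as nDCG@10. For reference, here is a minimal sketch of the metric under a common binary-relevance DCG formulation; this is illustrative, not the benchmark's exact evaluation code:

```python
from math import log2

def ndcg_at_k(ranked_rels, k=10):
    """nDCG@k: DCG of the system ranking divided by DCG of the ideal reordering.
    ranked_rels: relevance grades in the order the system returned passages."""
    def dcg(rels):
        # Position-discounted gain: relevance at rank i is divided by log2(i + 2).
        return sum(r / log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(ranked_rels, reverse=True))
    return dcg(ranked_rels) / ideal if ideal > 0 else 0.0

# One relevant passage ranked 2nd: DCG = 1/log2(3), ideal DCG = 1/log2(2) = 1.
score = ndcg_at_k([0, 1, 0])
```

A perfect ranking scores 1.0; pushing the relevant passage down the list discounts the gain logarithmically, which is why small label fixes can move nDCG@10 by a point or more.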
A training dataset for information retrieval (IR) typically has multiple instances consisting of a training query, labeled positive passages, and a set of mined hard negative passages. Sampling hard negatives has been consistently used in retrieval models to improve downstream retrieval accuracy (Karpukhin et al., 2020; Xiong et al., 2021; Qu et al., 2021; Moreira et al., 2024, inter alia).

*Both authors contributed equally to this work.

More recently, state-of-the-art (SoTA) retrieval models are observed to fine-tune on enormous training dataset sizes. While the general notion is that more training data is better (Chen et al., 2024a; Li et al., 2024; Muennighoff et al., 2025), we show the contrary: fine-tuning on a select few datasets is rather crucial. For example, removing ELI5 surprisingly improves nDCG@10 on 7 out of 14 of the BEIR datasets (Thakur et al., 2021) and the average nDCG@10 by 0.6 points. A similar observation is also made on other training datasets: by pruning 8 out of the 15 datasets in the BGE training collection (Li et al., 2024),¹ we improve the E5 (base) fine-tuned retrieval model by 1.0 point nDCG@10 on BEIR (as shown in Figure 4).

The above observation reveals that there is a non-negligible amount of "false" or mislabeled data mixed into the current enormous training datasets, which not only adds unnecessary training cost but also hurts the training process. In this work, we propose eliminating the issue from the perspective of false negatives,
which are typically introduced by misclassifying unlabeled candidates as negative training examples. This is especially severe for sparsely-annotated retrieval datasets, such as MS MARCO (Nguyen et al., 2016) or NQ (Kwiatkowski et al., 2019). Figure 1 shows an example of a false negative present in HOTPOTQA (Yang et al., 2018). Previously, it has been observed that fine-tuning on datasets with false negatives affects the model's generalizability (Qu et al., 2021; Moreira et al., 2024; Tao et al., 2024).

¹The pruned dataset contains only 42.5% of the pairs of the original dataset, making it 2.35× smaller.

[Figure 1 example — HOTPOTQA query: "What park contains the Wild Beast and a 20 acre water park?"
Ground-truth positives:
- Wild Beast (roller coaster): Wild Beast is a wooden roller coaster located at Canada's Wonderland, in Vaughan, Ontario, Canada. Originally named "Wilde Beast", it is one of the four roller coasters that debuted with the park in 1981, and is one of two wooden coasters at Canada's Wonderland modelled after a ride at Coney Island amusement park in Cincinnati, Ohio [...] (True Positive)
- Canada's Wonderland: Canada's Wonderland is a 330 acre theme park located in Vaughan, Ontario, a suburb approximately 40 km north of Downtown Toronto. Opened in 1981 by the Taft Broadcasting Company and The Great-West Life Assurance Company as the first major theme park in Canada, it remains the country's largest. [...] The 330 acre park includes a 20 acre water park named Splash Works. [...] (True Positive)
Unlabeled hard negatives:
- Splash Kingdom Waterpark: Splash Kingdom Waterpark (formerly known as Pharaoh's Lost Kingdom) is an Egyptian-beach themed water park, trampoline park, and concert venue located in Redlands, California, United States. Splash Kingdom is the largest water park in the Inland Empire. The park is known for having the world's tallest enclosed body-flume waterslide, and the world's tallest [...] (True Negative)
- Splash Works: Splash Works is a 20 acre water park located within the park boundaries of Canada's Wonderland in Vaughan, Ontario, Canada. Entry is free with park admission. Splash Works is home to "Whitewater Bay", the largest outdoor wave pool in Canada, and is today home to 16 waterslides. Splash Works operates during the summer months of June through September. (False Negative)]

Figure 1: Example of a training instance (query, ground truth positives, and unlabeled hard negatives) with detected false negatives taken from HOTPOTQA. The false negative passage (Splash Works) is mislabeled, as it is relevant in answering the user's query. The relevant parts of the text are highlighted in blue.

To avoid false negatives, existing approaches adopted knowledge distillation or hard negative filtering: Qu et al. (2021) distill knowledge from a cross-encoder. Moreira et al. (2024) filter potential false negatives based on relevance score to the query. However, they don't directly tackle the false negative data samples. The former alleviates the issue based on the assumption that the cross-encoder is more robust to false negatives
than retrieval models, yet does not remove the false negatives themselves. As we will show in the results section, inferior training data, albeit smaller in volume, also negatively affects cross-encoders. The latter alleviates the issue based on the assumption that the relevance scores of false negatives are systematically higher than 95% of the positive scores, which does not consider variance at the level of data samples.

In our work, we propose RLHN (ReLabeling Hard Negatives), a cost-effective framework using LLM cascading (Chen et al., 2023) to identify and accurately relabel false negatives (at a data sample level) in seven pruned training datasets from the BGE training collection (Li et al., 2024). In the first stage, we use GPT-4o-mini, a cost-effective LLM, to identify false negatives in all training instances. Next, the detected false negatives are relabeled with a more reliable judge, GPT-4o. We observe that a maximum of 56% of training pairs in MS MARCO can contain false negatives, down to a minimum of about 3% in SCIDOCSRR. The framework is illustrated in Figure 2.

With false negative documents detected, we modify the training datasets in three different ways: (i) remove: discarding the whole training instance, (ii) HN remove: removing only the false hard negatives, and (iii) RLHN: relabeling the false hard negatives as ground truth. Our results consistently show that the RLHN setting achieves the highest nDCG@10 scores on BEIR (Thakur et al., 2021) and AIR-BENCH (Yang et al., 2024b) among its counterparts, with both retrievers (E5 (base) and Qwen2.5-7B) and a reranker (Qwen2.5-3B).

To better understand the behavior of LLM judgment, we compare LLM judgment with human assessors on 670 randomly sampled query–hard negative pairs. We observe that the Cohen's Kappa (κ) score of GPT-4o is 10 points higher than that of GPT-4o-mini, which echoes their effectiveness in improving training data quality.
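Cohen's κ corrects raw agreement between two raters for the agreement expected by chance. A stdlib sketch with made-up binary relevance judgments (the `human` and `llm` lists below are illustrative, not the paper's 670 annotated pairs):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters independently pick the same label.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical relevance judgments (1 = relevant) on the same 8 pairs.
human = [1, 0, 1, 1, 0, 0, 1, 0]
llm   = [1, 0, 1, 0, 0, 0, 1, 1]
kappa = cohens_kappa(human, llm)
```

Here raw agreement is 6/8 = 0.75, but chance agreement is 0.5, so κ = 0.5; κ = 1 means perfect agreement and κ = 0 means agreement no better than chance.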
Lastly, we provide a qualitative analysis examining different categories of false negatives identified in training datasets.

To the best of our knowledge, we are the first to report that carelessly adopting enormous training data may negatively affect retriever and reranker model training, and we propose a solution by focusing on the data: identifying and relabeling the false hard negatives.

2 Related Work

Sparsely-annotated datasets. Popular IR training datasets, such as MS MARCO (Nguyen et al., 2016), were shallow pooled and sparsely judged by human assessors (Mackenzie et al., 2021; Arabzadeh et al., 2022). The assessor observed a few passages from a baseline retrieval system, picked those relevant to the query, and labeled them as ground truth. On the other hand, non-relevant judged passages (i.e., passages seen but preferred lower than the ground truth) were not provided. Therefore, an assumption is made in fine-tuning that the remaining passages (in a passage corpus) are negatives, and a few mined passages similar to the query are labeled as hard negatives. In this work, we avoid relabeling false positives, as these labels are trustworthy, provided by a human assessor, who can have a different preference than the LLM itself.
Figure 2: Flowchart for RLHN (ReLabeling Hard Negatives): (1) Provide the query, ground-truth or positive passages, and hard negative passages from a training instance as input; (2) prompt a cost-effective LLM judge (e.g., GPT-4o-mini) to evaluate whether any hard negative is misclassified; (3) if yes, repeat the prompt with an accurate LLM judge (e.g., GPT-4o); (4) output the relabeled hard negative passages (which are found relevant) and either remove them or relabel them as ground-truth passages in our experiments.

LLM-based data curation. Hiring human assessors for judgments is expensive and time-consuming, and produces limited training pairs, e.g., 1K pairs in LIMA (Zhou et al., 2023). Alternatively, LLMs as judges have recently been explored for dataset curation in tasks such as instruction fine-tuning (Chen et al., 2024b; Chen and Mueller, 2024), or even code generation (Jain et al., 2024).

False negatives. Qu et al. (2021) first noted the issue of false negatives in retrieval, where certain hard negative passages should have been classified as positives. However, instead of curating the training datasets, RocketQA (Qu et al., 2021) fine-tuned models by distilling knowledge from the cross-encoder label. Similarly, Moreira et al. (2024) examined various filtering methods for negative sampling by avoiding very hard negatives. Lee et al. (2024, 2025b) use an LLM such as Gemini to relabel positive passages and identify better hard negatives, but they do not provide further qualitative analysis of its effectiveness or publicly share their information or training datasets.
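The control flow of the cascade in Figure 2 can be sketched in a few lines. The `cheap_judge` and `strong_judge` callables stand in for GPT-4o-mini and GPT-4o calls; the toy string-matching judges and example passages below are hypothetical stand-ins so the sketch is runnable, not the paper's prompts:

```python
def cascade_relabel(query, positives, hard_negatives, cheap_judge, strong_judge):
    """Two-stage judge cascade: return (confirmed_negatives, relabeled_as_positive)."""
    # Stage 1: the cost-effective judge flags candidate false negatives.
    flagged = [p for p in hard_negatives if cheap_judge(query, positives, p)]
    if not flagged:                      # early exit: nothing suspicious, no Stage 2 cost
        return list(hard_negatives), []
    # Stage 2: the accurate judge re-evaluates only the flagged passages.
    relabeled = [p for p in flagged if strong_judge(query, positives, p)]
    confirmed = [p for p in hard_negatives if p not in relabeled]
    return confirmed, relabeled

# Toy judges: treat a passage as a false negative if it mentions the answer span.
cheap = lambda q, pos, p: "water park" in p
strong = lambda q, pos, p: "20 acre water park" in p

negs = ["Splash Works is a 20 acre water park at Canada's Wonderland.",
        "Splash Kingdom Waterpark is located in Redlands, California."]
confirmed, relabeled = cascade_relabel("What park contains the Wild Beast?",
                                       ["Wild Beast is a wooden roller coaster..."],
                                       negs, cheap, strong)
```

The cost saving comes from the early exit: the expensive judge only sees instances the cheap judge already flagged.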
3 The RLHN Methodology

In this section, we discuss the LLM judge cascading framework, training dataset modifications, and dataset postprocessing and statistics.

3.1 LLM Judge Cascading Framework

We adopt a simple and cost-effective LLM judge cascading framework (shown in Figure 2), inspired by Chen et al. (2023), to identify false hard negatives in training datasets at a large scale. The framework involves two major steps:

1. Cost-effective judge (GPT-4o-mini): We prompt GPT-4o-mini (OpenAI, 2024), a cost-effective LLM, in the first stage to scan and identify potential pairs with false negatives across all training pairs.
2. Accurate judge (GPT-4o): Next, we prompt GPT-4o (OpenAI, 2024), a more reliable but expensive judge,² to re-evaluate the potential pairs containing false negatives identified by GPT-4o-mini.

3.2 Training Dataset Modification

Upon successful completion of identifying the false negatives in our training datasets, we compare three operations on the identified false negatives, modifying the training datasets:

• Remove: Discard the complete training instance due to its low quality if it contains at least one false negative.³
• HN Remove: Discard only the detected false negatives from the hard negative subset, keeping the training instance with the remaining hard negatives.
• RLHN: Relabel only the detected false negatives, adding them to the ground truth subset and keeping the training instance with the remaining hard negatives.

3.3
Dataset Postprocessing & Statistics

In Table 1, we show the training dataset statistics observed in the BGE training collection. MS MARCO contains the highest number of training pairs, followed by HOTPOTQA. All datasets contain training pairs with 1–3 ground-truth passages and 13–25 hard negatives (except NQ, with 98 hard negatives).

Dataset     #Train Pairs  Avg. GT/Q  Avg. HN/Q  RLHN Stage 1  RLHN Stage 2
MS MARCO    485,823       1.1        25.0       391,965       326,301
HOTPOTQA    84,516        2.0        20.0       11,268        4,756
NQ          58,568        1.0        98.5       32,184        19,199
FEVER       29,096        1.3        20.0       7,764         3,577
SCIDOCSRR   12,655        1.6        19.7       2,068         351
FIQA-2018   5,500         2.6        15.0       3,632         1,833
ARGUANA     4,065         1.0        13.6       0             0

Table 1: BGE training dataset statistics (Chen et al., 2024a). Avg. GT/Q denotes the average number of ground-truth passages per query, and Avg. HN/Q denotes the average number of hard negative passages per query. RLHN Stages 1 & 2 show training pairs with at least one false hard negative.

False negatives. From Table 1, we see that the majority of detected false negatives occur in MS MARCO (91.6% of all detected pairs). A maximum of up to 56% of all training pairs in MS MARCO contain false negatives, down to a minimum of about 3% in SCIDOCSRR.⁴ From Figure 3, we observe that in 58% of all detected false negative pairs only a single false negative was detected, in 19% two false negatives, and in less than 1% eight or more false negatives. If a training pair contains more detected false negatives than a threshold k (k = 7 in our experiments), we exclude the pair completely in RLHN, as the query is likely to be ambiguous and might not yield a useful training instance (e.g., what color is amber urine?).

²GPT-4o-mini and GPT-4o pricing (as of May 15th, 2025) is $0.6 and $5.0 per 1M input tokens and $2.4 and $20.0 per 1M output tokens, respectively.
³We lose the training pair in the "remove" technique.

4 Experimental Setting

BGE training data.
We utilize the original BGE training dataset⁵ (Li et al., 2024), a comprehensive collection with training datasets for retrieval (e.g., NQ, MS MARCO), clustering (e.g., TwentyNewsgroups), and classification (e.g., AmazonReviews) tasks. Many of these training datasets are used in fine-tuning of popular retriever models such as E5-Mistral (Wang et al., 2024), GRIT-LM (Muennighoff et al., 2024), Linq (Choi et al., 2024), LLM2Vec (BehnamGhader et al., 2024), CDE (Morris and Rush, 2025), or NV-Embed (Lee et al., 2025a). Our work focuses on the retrieval task; therefore, we remove all training datasets from clustering and classification tasks, resulting in 15 datasets focused on the retrieval task, comprising a total of 1.6M training pairs, originally released with the MIT license.

LLM judges. In our work, we use GPT-4o-mini (version 2024-07-18) and GPT-4o (version 2024-11-20) as the judges, using the Azure OpenAI service in the batch setting. We follow a temperature setting of 0.1 and use a chain-of-thought prompt setting (Wei et al., 2022). The prompt first evaluates the relevance between every hard negative passage and the question, and compares them with the ground truth to identify potential false negatives. We can prompt up to 25 hard negative passages per query, as shown in Figure 6.

Evaluation benchmarks. We evaluate the retrieval and reranker accuracy of the models fine-tuned on datasets with false negatives either removed or relabeled with RLHN on the BEIR benchmark (Thakur et al., 2021) and AIR-BENCH (Yang et al., 2024b). Both benchmarks report retrieval accuracy in nDCG@10. BEIR contains human-constructed datasets, and AIR-BENCH contains datasets automatically generated by LLMs without human intervention. In BEIR, we drop Quora and CQADupstack and evaluate on the remaining 16 datasets. In AIR-BENCH (version 24.05), we evaluate five specific domains in English only: Arxiv, Finance, Healthcare, Law, and News.

Backbone models. We use E5 (base) unsupervised⁶ (Wang et al., 2022b, 2024), a BERT-based encoder, due to its high accuracy on BEIR (preliminary results in Appendix A), the inclusion of a pre-training stage, and lower training complexity. E5 (base) contains 110M parameters, 12 layers, and a 768 embedding dimension with mean pooling.

⁴We avoid relabeling ARGUANA due to its inherently complex task, which does not measure argument similarity directly, but rather counter-arguments given an argument. Therefore, we keep the original dataset in fine-tuning without relabeling.
⁵BGE dataset: huggingface.co/datasets/cfli/bge-full-data

[Figure 3 bar chart data (share of detected training pairs by number of false hard negatives, 1–15): 57.63%, 18.67%, 8.93%, 5.19%, 3.23%, 1.94%, 1.32%, 0.88%, 0.59%, 0.46%, 0.32%, 0.24%, 0.17%, 0.12%, 0.09%]

Figure 3: The distribution of training pairs (with at least one false negative) across false hard negatives detected. 58% of the detected training pairs contain a single false negative, 19% contain two false negatives, and so on.
Also, we use an LLM-based decoder model, Qwen2.5-7B⁷ (Yang et al., 2024a), with 7.61B parameters, 28 layers, and a 3584 embedding dimension with [EOS] token pooling, as a second retrieval model. In addition, we use the Qwen2.5-3B model⁸ (Yang et al., 2024a) for the reranker.

⁶ intfloat/e5-base-unsupervised on HuggingFace.
⁷ Qwen/Qwen2.5-7B on HuggingFace.
⁸ Qwen/Qwen2.5-3B on HuggingFace.

                   E5 (base)                                           Qwen2.5-7B
                   No Filt.  ---- Stage 1 ----     ---- Stage 2 ----     No Filt.  - Stage 2 -
BEIR Dataset       Default   Rem.   HN-R.  RLHN    Rem.   HN-R.  RLHN    Default   HN-R.  RLHN
TREC-COVID†        0.783     0.786  0.793  0.798   0.794  0.785  0.809   0.797     0.771  0.815
NFCorpus†          0.378     0.378  0.380  0.381   0.380  0.382  0.390   0.389     0.389  0.391
NQ                 0.595     0.593  0.592  0.602   0.573  0.598  0.591   0.597     0.602  0.623
HotpotQA           0.737     0.737  0.736  0.739   0.741  0.736  0.735   0.704     0.702  0.729
FiQA-2018          0.439     0.443  0.440  0.444   0.441  0.445  0.448   0.453     0.461  0.465
ArguAna            0.701     0.702  0.706  0.700   0.700  0.700  0.692   0.554     0.550  0.560
Touché-2020†       0.256     0.255  0.271  0.268   0.218  0.265  0.266   0.221     0.211  0.230
DBPedia            0.438     0.439  0.437  0.442   0.433  0.441  0.447   0.443     0.456  0.472
SCIDOCS            0.242     0.243  0.243  0.244   0.245  0.243  0.242   0.245     0.243  0.252
FEVER              0.878     0.875  0.876  0.877   0.881  0.876  0.871   0.863     0.857  0.872
Climate-FEVER      0.391     0.388  0.385  0.391   0.382  0.384  0.367   0.370     0.373  0.360
SciFact            0.735     0.741  0.731  0.733   0.744  0.735  0.740   0.755     0.755  0.767
TREC-NEWS†         0.465     0.470  0.466  0.473   0.464  0.473  0.484   0.494     0.480  0.487
Robust04†          0.442     0.448  0.452  0.471   0.447  0.458  0.497   0.501     0.501  0.540
Signal-1M (RT)†    0.275     0.279  0.275  0.275   0.274  0.270  0.274   0.275     0.268  0.280
BioASQ†            0.378     0.382  0.385  0.392   0.384  0.384  0.394   0.408     0.412  0.438
Avg. 16 (All)      0.508     0.510  0.511  0.514   0.506  0.511  0.515   0.504     0.502  0.518
Avg. 7 (OOD)       0.425     0.428  0.432  0.437   0.423  0.431  0.445   0.441     0.433  0.454

Table 2: Retrieval results measuring nDCG@10 on 16 datasets in the BEIR benchmark by fine-tuning retrieval models on variants of the BGE training dataset after relabeling false negatives. The seven datasets unseen (out-of-domain) during fine-tuning are highlighted with † and their average scores are provided in Avg. 7. Cascading Stage 1: GPT-4o-mini; Cascading Stage 2: GPT-4o-mini + GPT-4o. Rem. = Remove; HN-R. = HN Remove.

Figure 4: Dataset pruning by leaving one dataset out during fine-tuning E5 (base) on the BGE training collection; [ALL] denotes fine-tuning on all datasets with 1.6M training pairs; [7 Pruned] denotes fine-tuning on 680K training pairs (42.5% of pairs) with the seven remaining datasets after dataset pruning.

Fine-tuning details. All our E5 (base) models are fine-tuned using 4×L40S GPUs, with a learning rate of 2e-5 and a sequence length of 350 tokens for both the query and passage, for 4–5 epochs. We append a "query: " and "passage: " prefix when fine-tuning each model. Similarly, our Qwen2.5-7B and Qwen2.5-3B models are fine-tuned using a maximum of two H200 GPUs.
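The column headings in Table 2 correspond to four ways of handling a training pair once some of its hard negatives are judged false. Our reading of the settings, sketched on a toy record (the dict fields are illustrative, not the actual BGE/Tevatron schema): Default keeps the pair untouched, Remove drops the whole pair, HN Remove deletes only the flagged negatives, and RLHN relabels them as positives.

```python
def apply_setting(pairs, false_negs, setting):
    """Apply one of the Table 2 training-data settings.

    pairs: list of dicts {"query_id", "positives", "negatives"}.
    false_negs: dict query_id -> set of passages judged as false negatives.
    """
    out = []
    for p in pairs:
        fns = false_negs.get(p["query_id"], set())
        if setting == "default" or not fns:
            out.append(p)                      # pair kept unchanged
        elif setting == "remove":
            continue                           # drop the whole training pair
        elif setting == "hn_remove":
            out.append({**p, "negatives": [n for n in p["negatives"] if n not in fns]})
        elif setting == "rlhn":
            out.append({**p,
                        "positives": p["positives"] + [n for n in p["negatives"] if n in fns],
                        "negatives": [n for n in p["negatives"] if n not in fns]})
    return out
```

Pairs with no detected false negatives are kept unchanged under every setting.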
All our retrieval training datasets are converted into the Tevatron format, and fine-tuning is conducted via the Tevatron repository⁹ (Gao et al., 2023; Ma et al., 2025).

⁹ Tevatron repository: https://github.com/texttron/tevatron

5 Experimental Results

5.1 Preliminary Results: Dataset Pruning

False datapoints can hurt the training of retriever models. We assess each dataset's individual contribution by evaluating several model variants, leaving one dataset out and fine-tuning on the rest. As this requires fine-tuning many models, i.e., one for each removed dataset, we limit these experiments to E5 (base). Summarized results are shown in Figure 4 (detailed results can be found in Table 10), demonstrating that some training datasets (highlighted in red) can hurt model retrieval accuracy: removing ELI5, for example, improves nDCG@10 on BEIR (0.519 → 0.525). The figure also shows that certain datasets (highlighted in green) are crucial for model accuracy. Based on the findings in Figure 4 and selecting the datasets necessary for individual task-based performance in BEIR, we prune the original 15 retrieval datasets in the BGE collection down to seven (highlighted as **), reducing the training dataset size from 1.6M to 680K training pairs in our experiments. The average nDCG@10 score of E5 (base) improves from 0.519 → 0.529 on 14 datasets on average in BEIR, despite fine-tuning on an almost 2.35× smaller dataset (1.6M → 680K).

5.2 Main Results: Relabeling False Negatives

This section shows the results of the fine-tuned models on the variants of the training dataset described in Sections 3.1 and 3.2, keeping the rest of the model training parameters unchanged.

Backbone     Technique    Arxiv   Finance  Health.  Law     News    Avg. 5
E5 (base)    Default      0.345   0.401    0.521    0.117   0.455   0.368
  Cascading Stage 1: GPT-4o-mini
E5 (base)    Remove       0.346   0.407    0.526    0.118   0.452   0.370
E5 (base)    HN Remove    0.344   0.406    0.522    0.118   0.459   0.370
E5 (base)    RLHN         0.362   0.421    0.522    0.123   0.465   0.379
  Cascading Stage 2: GPT-4o-mini + GPT-4o
E5 (base)    Remove       0.341   0.403    0.514    0.125   0.438   0.364
E5 (base)    HN Remove    0.346   0.411    0.525    0.124   0.464   0.374
E5 (base)    RLHN         0.356   0.440    0.521    0.138   0.476   0.386
Qwen2.5-7B   Default      0.325   0.391    0.479    0.115   0.430   0.348
  Cascading Stage 2: GPT-4o-mini + GPT-4o
Qwen2.5-7B   HN Remove    0.335   0.384    0.487    0.111   0.423   0.348
Qwen2.5-7B   RLHN         0.330   0.418    0.494    0.133   0.450   0.365

Table 3: Retrieval results measuring nDCG@10 on five specialized domains in AIR-BENCH dev (version 24.05) by fine-tuning E5 (base) and Qwen2.5-7B on variants of the BGE training dataset with RLHN.

BEIR benchmark. Results in Table 2 show that for both E5 (base) and Qwen2.5-7B, the RLHN technique achieves the best overall average nDCG@10, 0.515 and 0.518 respectively, on the 16 BEIR datasets, outperforming models trained in the default setting and with the remove techniques. The relabeled data in RLHN improves model generalization, with the improvements most visible on the seven out-of-domain (OOD) datasets in BEIR: Stage 1 (RLHN) outperforms the default setting by 2.0 points and Stage 2 (RLHN) by 3.2 points in nDCG@10. Overall, relabeling false negatives improves data quality, which is clearly visible in model generalization across out-of-domain settings in BEIR.
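All scores above are nDCG@10. For reference, a self-contained implementation using the standard exponential gain and log2 discount, the formulation used by trec_eval-style tooling:

```python
import math

def dcg(rels):
    """Discounted cumulative gain over graded relevances in rank order."""
    return sum((2 ** rel - 1) / math.log2(rank + 2) for rank, rel in enumerate(rels))

def ndcg_at_k(ranked_rels, judged_rels, k=10):
    """nDCG@k: DCG of the ranking divided by the DCG of an ideal ordering.

    ranked_rels: relevance grades of the returned results, in ranked order.
    judged_rels: all relevance grades judged for the query (for the ideal ranking).
    """
    ideal = dcg(sorted(judged_rels, reverse=True)[:k])
    return dcg(ranked_rels[:k]) / ideal if ideal > 0 else 0.0
```

For example, a ranking with grades [1, 0, 1] against judgments {1, 1, 0} scores roughly 0.92: the misplaced relevant passage at rank 3 is discounted by log2 of its rank.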
AIR-BENCH. In addition to BEIR, AIR-BENCH provides a zero-shot setting for evaluation on challenging domains, such as Law. Table 3 shows the average nDCG@10 on five specialized domains. We confirm that the improvements in model generalization are similar to what we observed in BEIR: the Stage 1 (RLHN) setting improves over the default setting by 1.1 points nDCG@10, and Stage 2 (RLHN) improves it further by 2.1 points nDCG@10. Overall, without changing the model or training parameters, mitigating false negatives in training datasets with RLHN helps the model generalize better on the specialized domains in AIR-BENCH.

Reranker results. Training data with improved quality also benefits cross-encoder rerankers. Table 4 shows the comparison on the BEIR benchmark, where we rerank the top-100 results from the E5 (base) model fine-tuned in the default setting. We observe that training rerankers with data fixed by RLHN Stages 1 and 2 progressively increases nDCG@10 on the BEIR datasets, by 0.5 and 0.8 points. This improvement is most prominent on the 7 OOD datasets, consistent with the above observation on retrievers: the data correction in the two stages improves the averaged OOD results by 1.0 and 1.8 points, respectively. We notice that the scale of the improvement on cross-encoders is not as large as on retrievers, which may indicate that cross-encoder rerankers are comparatively more robust to false negative data than retrievers. However, albeit small, cross-encoders still benefit from training data of higher quality, especially for generalizing to unseen domains.

                  No Filtering   Cascading Stage 1   Cascading Stage 2
BEIR Dataset      Default        RLHN                RLHN
TREC-COVID†       0.836          0.861               0.862
NFCorpus†         0.401          0.414               0.415
NQ                0.730          0.739               0.736
HotpotQA          0.863          0.861               0.861
FiQA-2018         0.517          0.521               0.519
ArguAna           0.740          0.730               0.763
Touché-2020†      0.275          0.308               0.313
DBPedia           0.532          0.536               0.538
SCIDOCS           0.278          0.273               0.270
FEVER             0.941          0.939               0.936
Climate-FEVER     0.457          0.468               0.430
SciFact           0.786          0.793               0.794
TREC-NEWS†        0.507          0.513               0.527
Robust04†         0.531          0.548               0.589
Signal-1M†        0.292          0.276               0.274
BioASQ†           0.510          0.505               0.500
Avg. 16 (All)     0.575          0.580               0.583
Avg. 7 (OOD)      0.479          0.489               0.497

Table 4: Reranker results measuring nDCG@10 on 16 datasets in BEIR by fine-tuning reranker models (based on Qwen2.5-3B) on variants of the BGE training datasets after relabeling false negatives. Stages 1 and 2 refer to GPT-4o-mini and GPT-4o-mini + GPT-4o.

6 Analysis

Robustness of RLHN across varying training data subsets. As training datasets can be large, relabeling all training pairs with the LLM cascading pipeline can be prohibitively expensive. Therefore, in Figure 5, we demonstrate that RLHN remains robust and shows similar accuracy gains even for smaller randomly sampled subsets of the training dataset. To test this, we constructed four random subsets (100K, 250K, 400K, and 680K) of the training datasets, with each dataset distribution shown in Table 8.
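One straightforward way to build such size-capped subsets while roughly preserving each source dataset's share of training pairs is proportional random sampling. A sketch under that assumption; the paper's actual per-dataset counts are listed in its Table 8:

```python
import random

def proportional_subset(datasets, budget, seed=42):
    """Sample ~budget training pairs across datasets, proportionally to size.

    datasets: dict name -> list of training pairs.
    Returns a dict name -> sampled list, preserving each dataset's share.
    """
    rng = random.Random(seed)                      # fixed seed for reproducibility
    total = sum(len(v) for v in datasets.values())
    out = {}
    for name, pairs in datasets.items():
        k = min(len(pairs), round(budget * len(pairs) / total))
        out[name] = rng.sample(pairs, k)
    return out
```

Due to rounding, the subset size can differ from the budget by a few pairs.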
Overall, we have two main findings here: (i) the E5 (base) model fine-tuned on RLHN Stage 1 and 2 training data, with false hard negatives relabeled as positives, consistently outperforms the default setting, and (ii) the higher slope in nDCG@10 demonstrates continual improvement across zero-shot domains with increasing amounts of training data, as observed especially in AIR-BENCH.

Figure 5: nDCG@10 scores on BEIR (Avg. 16 and Avg. 7) and AIR-BENCH (Avg. 5) by fine-tuning E5 (base) on subsets of 100K, 250K, 400K, and 680K training pairs using the "RLHN" technique for both stages. All individual dataset scores for both BEIR and AIR-BENCH are provided in Figure 7 and Figure 8.

                                            FEVER (3,521)    FiQA-2018 (1,829)  HotpotQA (4,720)  ScidocsRR (350)
Reranker Judge                              mAP@10  P@L(GT)  mAP@10  P@L(GT)    mAP@10  P@L(GT)   mAP@10  P@L(GT)
BAAI/bge-reranker-v2-gemma                  0.839   0.777    0.632   0.492      0.742   0.638     0.926   0.875
mxbai/rerank-large-v2                       0.496   0.365    0.658   0.525      0.737   0.634     0.680   0.524
mxbai/rerank-base-v2                        0.570   0.455    0.598   0.464      0.671   0.565     0.612   0.462
Cohere (rerank-v3.5)                        0.811   0.740    0.572   0.437      0.694   0.588     0.838   0.743
Alibaba-NLP/gte-reranker-modernbert-base    0.688   0.602    0.545   0.408      0.658   0.560     0.843   0.754
cross-encoder/ms-marco-MiniLM-L12-v2        0.745   0.656    0.517   0.387      0.587   0.479     0.832   0.755

Table 5: Rerankers as judges, used as a baseline to identify RLHN false negatives in each training dataset (shown with the count of training pairs). mAP@10 calculates the average precision of false negatives (labeled as positives) in the top-10 reranked results. P@L(GT) calculates the precision of false negatives present in the top-k reranked results, where k varies per query and equals the count of false negatives detected using RLHN.

Reranker distillation is competitive, but falls short in detecting false negatives. A reranker or cross-encoder can be used for knowledge distillation when fine-tuning a retriever model, as an alternative to the traditional contrastive or InfoNCE loss function (Hofstätter et al., 2020; Qu et al., 2021; Wang et al., 2022a). This technique bypasses the original relevance judgments, relying on the knowledge within the reranker. Instead of relying on RLHN, we therefore evaluate how well existing rerankers detect false negatives in training datasets. To do so, we rerank the hard negatives present in each training instance and compute two metrics: (i) mAP@10, measuring the average precision of false negatives in the top-10 results, and (ii) P@L(GT), measuring the precision of false negatives present in the top-k results, where k is the count of false negatives. Table 5 shows results for six reranker judges from various sources on four datasets. We observe that the bge-reranker-v2-gemma judge achieves the highest scores among its counterparts at identifying the false negatives labeled by RLHN (except on FiQA-2018). However, on datasets such as FiQA-2018 and HotpotQA, rerankers can only identify 52.5–63.8% of the false negatives, indicating that existing rerankers are competitive but still leave room for improvement. We suspect that these rerankers are fine-tuned on the existing training datasets containing false negatives, which hurts their accuracy.

Metric              GPT-4o-mini   GPT-4o
Cohen's Kappa (κ)   0.320         0.390

Table 6: Cohen's κ score of GPT-4o-mini and GPT-4o judgments against human judgments on 670 query–hard negative pairs.

7 Human Validation

We conducted a validation study with three human assessors performing data annotation in Label Studio¹⁰.

¹⁰ Label Studio: github.com/HumanSignal/label-studio
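Cohen's κ, as reported in Table 6, corrects the raw agreement between two raters for the agreement expected by chance. A minimal implementation for categorical labels (e.g., relevant vs. not relevant):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two raters over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # observed agreement: fraction of items both raters label identically
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement: product of each rater's marginal label frequencies
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
```

κ = 1 means perfect agreement, 0 means chance-level agreement; the 0.32–0.39 values in Table 6 indicate fair-to-moderate agreement with the human labels.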
Assessors were briefed on the relevance task, and then manually evaluated 670 query–hard negative pairs in total, where the hard negatives were randomly selected from the RLHN set containing at least one false negative. During the assessment, all annotators worked independently and were not exposed to the LLM predictions. An example of the data annotation interface is provided in Figure 9.

Table 6 reports the Cohen's Kappa (κ) between each LLM's predictions and the human labels. The κ scores are consistent with others reporting similar human–LLM agreement (Arabzadeh and Clarke, 2025). GPT-4o shows a substantially higher agreement level with human annotators than GPT-4o-mini. This confirms the empirical results above, where relabeling with GPT-4o shows consistent gains over GPT-4o-mini in the training effectiveness of retrieval and reranker models.

(Q1) Which is a food magazine, Latin Mass Magazine or Saveur?
  Positive: Latin Mass Magazine: A Journal of Catholic Culture, commonly referred to as Latin Mass Magazine, is an American Catholic magazine published quarterly, with a traditionalist Catholic viewpoint. [...]
  Positive: Saveur: Saveur is a gourmet, food, wine, and travel magazine that specializes in essays about various world cuisines. Its slogan, "Savor a World of Authentic Cuisine," signals the publication's focus on enduring culinary traditions. [...]
  False hard negative: Food & Wine: Food & Wine is a monthly magazine published by Time Inc. It was founded in 1978 by Ariane and Michael Batterberry. It features recipes, cooking tips, travel information, restaurant reviews, chefs, wine pairings and seasonal holiday content. [...]
  False hard negative: Cocina (magazine): Cocina is a Colombian-based monthly magazine published by Publicaciones Semana S.A. It features recipes, cooking tips, culinary tourism information, restaurant reviews, chefs, wine pairings and seasonal holiday content. [...]

(Q2) What year was the premier professional ice hockey league in the world established?
  Positive: 2016–17 Minnesota Wild season: The 2016–17 Minnesota Wild season was the 17th season for the National Hockey League franchise that was established on June 25, 1997.
  False hard negative: National Hockey League: The National Hockey League (NHL; French: "Ligue nationale de hockey — LNH") is a professional ice hockey league currently comprising 31 teams. [...]
  False hard negative: History of the National Hockey League (1917–42): The National Hockey League (NHL) was founded in 1917 following the demise of its predecessor league, the National Hockey Association (NHA). [...]

(Q3) name meaning yin and yang
  Positive: Yin and yang: In Chinese philosophy, yin and yang (also, yin-yang or yinyang) describes how apparently opposite or contrary forces are actually complementary, interconnected, and interdependent in the natural world, and how they give rise to each other as they interrelate to one another.
  False hard negative: Yin and yang: Yin and Yang are ancient Chinese philosophical terms, with the Yin Yang Theory being a fundamental part of Feng Shui. It is a Chinese theory on the perspective of continuous change and balance. [...]
  False hard negative: Yin Yang Symbols and Their Meanings: In a nutshell, Chinese yin yang symbols represent perfect balance. A great deal of Chinese philosophy stems from the concept of yin and yang - opposites interacting. [...]
(Q4) Charles, Prince of Wales is patron of numerous other organizations.
  Positive: Charles, Prince of Wales: Charles, Prince of Wales (Charles Philip Arthur George; born 14 November 1948) is the eldest child and heir apparent of Queen Elizabeth II. [...] Charles's interests encompass a range of humanitarian and social issues: he founded The Prince's Trust in 1976, sponsors The Prince's Charities, and is patron of numerous other charitable and arts organisations. [...]
  False hard negative: Julia Cleverdon: Dame Julia Charity Cleverdon (born 19 April 1950) is a British charity worker who served for 16 years as Chief Executive of Business in the Community, one of The Prince's Charities of Charles, Prince of Wales.
  False hard negative: The Prince's Trust: The Prince's Trust is a charity in the United Kingdom founded in 1976 by Charles, Prince of Wales, and Frederick John Pervin to help young people. [...]

Table 7: Qualitative analysis showcasing the different varieties of false negatives detected by RLHN. The first two questions are taken from HotpotQA, the third from MS MARCO, and the last from FEVER. In the original table, text supporting the query is highlighted in green, partially supporting text in orange, and non-supporting text in red.

8 Qualitative Analysis on False Negatives

We qualitatively analyze the labeling accuracy of our LLM cascading framework by manually spot-checking a few training instances. As shown in Table 7, we observe several varieties of false negatives, falling into one of the following scenarios:

1. Detected false negatives are incorrect or not relevant. GPT-4o can detect a false negative that is not relevant to the query. E.g., query (Q1) asks which of Latin Mass Magazine and Saveur is a food magazine; however, the detected false negatives identify different food magazines, Food & Wine and Cocina, both of which are incorrect answers.

2. The ground truth may be incorrectly labeled. In a limited number of queries, we observe that the ground truth passage can conflict with the false negative and itself be incorrectly labeled. E.g., the correct answer to query (Q2), asking when the premier professional ice hockey league was established, is 1917, present in the false negative, whereas 1997, mentioned in the ground truth, is incorrect.

3. The query may be too generic or ambiguous. In a substantial number of training pairs in MS MARCO, we observe that the training query is rather ambiguous, leading to many detected false negatives. E.g., all passages for query (Q3), including both the ground truth and the false negatives, correctly define "yin and yang" under different interpretations.

4. Detected false negatives can be partially correct. Not all detected false negatives are fully relevant to the query. E.g., one of the false negatives is only partially relevant to query (Q4) about the organizations of Charles, Prince of Wales.

9 Conclusion

In this work, we emphasize the importance of clean training datasets. First, we showed that certain datasets can negatively impact model effectiveness when fine-tuning across a huge collection with many training pairs. Dataset pruning removes 8 of the 15 datasets (57.5% of the training pairs) and improves model accuracy on BEIR by up to 1.0 point, while making the dataset 2.35× smaller. Next, after pruning, we observed the issue of false hard negatives in the remaining training datasets, where passages in the hard negative list are misclassified and are in fact relevant to the query. We presented RLHN, an effective cascading LLM approach for relabeling such hard negatives as ground truth or positives. Using RLHN, retrievers and rerankers were fine-tuned and consistently improved in model generalization, without any other modification to the training settings, on both BEIR and zero-shot AIR-BENCH evaluations. We have released our modified training datasets and code.
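The cascading approach summarized here can be sketched with two injected judge callables: a cheaper model screens every hard negative, and a stronger model re-checks only the flagged ones. The judge signatures are our own simplification; the actual pipeline prompts up to 25 hard negatives per query in batch mode.

```python
def cascade_detect(query, hard_negatives, cheap_judge, strong_judge):
    """Two-stage detection of false hard negatives, in the spirit of RLHN.

    cheap_judge / strong_judge: callables (query, passage) -> bool returning
    True when the passage actually answers the query (a false negative).
    Stage 1 (e.g., GPT-4o-mini) screens all hard negatives; Stage 2 (e.g.,
    GPT-4o) confirms only the Stage-1 flags, keeping expensive calls rare.
    """
    flagged = [p for p in hard_negatives if cheap_judge(query, p)]
    return [p for p in flagged if strong_judge(query, p)]
```

Confirmed passages are then moved from the negative list to the positives before fine-tuning.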
We will continue cleaning other popular training datasets to promote their use by the community and to highlight the critical role of data quality in fine-tuning retrieval and reranker models.

Acknowledgments

This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Additional funding is provided by Microsoft via the Accelerating Foundation Models Research program.

Limitations

Even though we propose an effective technique to identify and relabel false hard negatives with RLHN, no technique is perfect, and ours has its limitations. Making those explicit is critical for understanding the RLHN results and improvements, and for future work proposing even better detection techniques.

1. False positives in training datasets. Detecting and relabeling false positives in training datasets is an important avenue of potential research. However, we avoid checking for false positives, as these labels are trustworthy, provided by a human assessor who may have different preferences than the LLM itself. False positives might occur in a dataset due to human errors, but we suspect both the importance and the frequency of detected false positives to be much lower than those of false negatives.

2. Cleaning extremely large training datasets. The maximum training dataset size covered in our work is ≤1M training pairs, a reasonable size for applying RLHN within a strict compute budget. Cleaning extremely large training datasets (for example, containing 1–10M training pairs) is not feasible with GPT-4o-based detection, as it may require a very high computation budget. In the future, we wish to experiment with open-source LLMs, such as Qwen3 (Yang et al., 2025), as an alternative in our LLM cascading pipeline, allowing relabeling of extremely large training datasets.

3. Multilingual and long-context document retrieval datasets. A majority of the training datasets included in the BGE training collection have average document lengths of up to a few hundred words, roughly equivalent to a few paragraphs. Applying RLHN to clean long-context document retrieval datasets, such as MLDR (Chen et al., 2024a), and multilingual training datasets, such as MIRACL (Zhang et al., 2023), would be highly relevant future work.

4. Multi-vector retrieval models. A popular suite of retrieval models includes multi-vector models, such as ColBERT (Khattab and Zaharia, 2020; Santhanam et al., 2022), which represent queries and documents by multiple contextualized token-level embeddings. In our work, we limited our experiments to dense retrievers and rerankers. We leave extending RLHN to multi-vector models as future work, using a training repository such as PyLate (Chaffin and Sourty, 2024).

References

Loubna Ben Allal, Anton Lozhkov, Elie Bakouch, Gabriel Martín Blázquez, Guilherme Penedo, Lewis Tunstall, Andrés Marafioti, Hynek Kydlícek, Agustín Piqueres Lajarín, Vaibhav Srivastav, Joshua Lochner, Caleb Fahlgren, Xuan-Son Nguyen, Clémentine Fourrier, Ben Burtenshaw, Hugo Larcher, Haojun Zhao, Cyril Zakka, Mathieu Morlon, and 3 others. 2025. SmolLM2: When Smol goes big - data-centric training of a small language model. CoRR, abs/2502.02737.
Negar Arabzadeh and Charles L. A. Clarke. 2025. A Human-AI comparative analysis of prompt sensitivity in LLM-based relevance judgment. arXiv preprint arXiv:2504.12408.

Negar Arabzadeh, Alexandra Vtyurina, Xinyi Yan, and Charles L. A. Clarke. 2022. Shallow pooling for sparse labels. Inf. Retr. J., 25(4):365–385.

Parishad BehnamGhader, Vaibhav Adlakha, Marius Mosbach, Dzmitry Bahdanau, Nicolas Chapados, and Siva Reddy. 2024. LLM2Vec: Large language models are secretly powerful text encoders. In First Conference on Language Modeling.

Antoine Chaffin and Raphaël Sourty. 2024. PyLate: Flexible training and retrieval for late interaction models.

Jianlyu Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. 2024a. M3-Embedding: Multi-linguality, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 2318–2335, Bangkok, Thailand. Association for Computational Linguistics.

Jiuhai Chen and Jonas Mueller. 2024. Automated data curation for robust language model fine-tuning. CoRR, abs/2403.12776.

Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, and Hongxia Jin. 2024b. AlpaGasus: Training a better alpaca with fewer data. In The Twelfth International Conference on Learning Representations.

Lingjiao Chen, Matei Zaharia, and James Zou. 2023. FrugalGPT: How to use large language models while reducing cost and improving performance. CoRR, abs/2305.05176.

Chanyeol Choi, Junseong Kim, Seolhwa Lee, Jihoon Kwon, Sangmo Gu, Yejin Kim, Minkyung Cho, and Jy-yong Sohn. 2024. Linq-Embed-Mistral technical report. CoRR, abs/2412.03223.

Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2023. Tevatron: An efficient and flexible toolkit for neural retrieval. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '23, pages 3120–3124, New York, NY, USA. Association for Computing Machinery.

Sebastian Hofstätter, Sophia Althammer, Michael Schröder, Mete Sertkan, and Allan Hanbury. 2020. Improving efficient neural ranking models with cross-architecture knowledge distillation. CoRR, abs/2010.02666.

Naman Jain, Tianjun Zhang, Wei-Lin Chiang, Joseph E. Gonzalez, Koushik Sen, and Ion Stoica. 2024. LLM-assisted code cleaning for training accurate code generators. In The Twelfth International Conference on Learning Representations.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.

Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, pages 39–48, New York, NY, USA. Association for Computing Machinery.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019.
Natural questions: A benchmark for question answering research. Trans. Assoc. Comput. Linguistics, 7:452–466.

Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. 2025a. NV-Embed: Improved techniques for training LLMs as generalist embedding models. In The Thirteenth International Conference on Learning Representations.

Jinhyuk Lee, Feiyang Chen, Sahil Dua, Daniel Cer, Madhuri Shanbhogue, Iftekhar Naim, Gustavo Hernández Ábrego, Zhe Li, Kaifeng Chen, Henrique Schechter Vera, Xiaoqi Ren, Shanfeng Zhang, Daniel Salz, Michael Boratko, Jay Han, Blair Chen, Shuo Huang, Vikram Rao, Paul Suganthan, and 28 others. 2025b. Gemini embedding: Generalizable embeddings from Gemini. CoRR, abs/2503.07891.

Jinhyuk Lee, Zhuyun Dai, Xiaoqi Ren, Blair Chen, Daniel Cer, Jeremy R. Cole, Kai Hui, Michael Boratko, Rajvi Kapadia, Wen Ding, Yi Luan, Sai Meher Karthik Duddu, Gustavo Hernández Ábrego, Weiqiang Shi, Nithi Gupta, Aditya Kusupati, Prateek Jain, Siddhartha Reddy Jonnalagadda, Ming-Wei Chang, and Iftekhar Naim. 2024. Gecko: Versatile text embeddings distilled from large language models. CoRR, abs/2403.20327.

Chaofan Li, Minghao Qin, Shitao Xiao, Jianlyu Chen, Kun Luo, Yingxia Shao, Defu Lian, and Zheng Liu. 2024. Making text embedders few-shot learners. CoRR, abs/2409.15700.

Xueguang Ma, Luyu Gao, Shengyao Zhuang, Jiaqi Samantha Zhan, Jamie Callan, and Jimmy Lin. 2025. Tevatron 2.0: Unified document retrieval toolkit across scale, language, and modality. Preprint, arXiv:2505.02466.

Joel Mackenzie, Matthias Petri, and Alistair Moffat. 2021. A sensitivity analysis of the MS MARCO passage collection. CoRR, abs/2112.03396.

Gabriel Moreira, Radek Osmulski, Mengyao Xu, Ronay Ak, Benedikt Schifferer, and Even Oldridge. 2024. NV-Retriever: Improving text embedding models with effective hard-negative mining. CoRR, abs/2407.15831.

John Xavier Morris and Alexander M. Rush. 2025. Contextual document embeddings. In The Thirteenth International Conference on Learning Representations.

Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, and Douwe Kiela. 2024. Generative representational instruction tuning. CoRR, abs/2402.09906.

Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, and Douwe Kiela. 2025. Generative representational instruction tuning. In The Thirteenth International Conference on Learning Representations.

Niklas Muennighoff, Nouamane Tazi, Loic Magne, and Nils Reimers. 2023. MTEB: Massive text embedding benchmark. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2014–2037, Dubrovnik, Croatia. Association for Computational Linguistics.

Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016, co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016, volume 1773 of CEUR Workshop Proceedings. CEUR-WS.org.

OpenAI. 2024. Hello GPT-4o.

Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies , pages 5835–5847, On- line. Association for Computational Linguistics. Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. Col- BERTv2: Effective and efficient retrieval via lightweight late interaction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022 , pages 3715– 3734. Association for Computational Linguistics. Hongjin SU, Howard Yen, Mengzhou Xia, Weijia Shi, Niklas Muennighoff, Han yu Wang, Liu Haisu, Quan Shi, Zachary S Siegel, Michael Tang, Ruoxi Sun, Jin- sung Yoon, Sercan O Arik, Danqi Chen, and Tao Yu. 2025. BRIGHT: A realistic and challenging bench- mark for reasoning-intensive retrieval. In The Thir- teenth International Conference on Learning Repre- sentations . Chongyang Tao, Tao Shen, Shen Gao, Junshuo Zhang, Zhen Li, Zhengwei Tao, and Shuai Ma. 2024. LLMs are also effective embedding models: An in-depth overview. CoRR , abs/2412.12591. Nandan Thakur, Nils Reimers, Andreas Rücklé, Ab- hishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual . Kexin Wang, Nandan Thakur, Nils Reimers, and Iryna Gurevych. 2022a. GPL: Generative pseudo labeling for unsupervised domain adaptation of dense retrieval. InProceedings of