A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomic Representations
https://arxiv.org/abs/2505.21317v1

Figure 2. Performance comparison of the distillation and augmentation components of our approach against existing distillation methods (a) and biological data augmentation techniques (b) across five training seeds. Higher is better for all metrics. Semi-Clipped and PEA maintain interpretability and achieve the highest performance on all OOD datasets. (a) Z-scores of evaluation metrics (relationship recall and Tx preservability) are shown, with cool colors for label-based methods and warm colors for label-free approaches, without data augmentation. (b) Raw scores are shown for relationship recall and Tx preservability. Transcriptomics data augmentations, MWO (Kircher et al., 2022), scVI denoising (Lopez et al., 2018), MDWGAN-GP (Li et al., 2023), and scGFT (Nouri, 2025), are applied within Semi-Clipped training. We also compare training results where we simultaneously use all evaluated data augmentations, both with and without PEA, to assess its additional impact in a practical setting on both evaluation tasks.

5.1. Semi-Clipped Enables Robust and Generalizable Transcriptomic Representations

To analyze the effect of different training choices on Semi-Clipped performance, we first examine its impact on known biological relationship recall using the HUVEC-KO dataset. Without data augmentations and using the CLIP loss, we conduct two comparisons. Figure 1 (left) compares training an scVI-like MLP from scratch for the distillation task against using a pretrained scVI model. Additionally, it evaluates the effect of introducing an image adapter instead of relying solely on a transcriptomics adapter while keeping image embeddings frozen. Figure 1 (right) compares finetuning a pretrained scGPT model for distillation versus freezing scGPT and training a transcriptomics adapter on its own or with an image adapter.
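As background, the CLIP-style objective used to align transcriptomic and image embeddings can be sketched as a symmetric InfoNCE loss. This is a hedged illustration, not the authors' exact implementation: array names, shapes, and the temperature value are assumptions, and in Semi-Clipped the image embeddings would stay frozen while only the transcriptomics adapter is trained.

```python
import numpy as np

def clip_loss(tx_emb, img_emb, temperature=0.07):
    """Symmetric CLIP-style InfoNCE loss between adapted transcriptomic
    embeddings and (frozen) image embeddings. Sketch only: names and the
    temperature value are illustrative assumptions."""
    # L2-normalize both modalities so logits are scaled cosine similarities
    tx = tx_emb / np.linalg.norm(tx_emb, axis=1, keepdims=True)
    im = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = tx @ im.T / temperature  # (N, N); paired samples lie on the diagonal
    n = len(logits)

    def xent(l):
        # cross-entropy with the diagonal (the true pairing) as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # average the transcriptomics->image and image->transcriptomics directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
tx = rng.normal(size=(8, 32))
loss_paired = clip_loss(tx, tx)                         # perfectly aligned pairs
loss_mispaired = clip_loss(tx, np.roll(tx, 1, axis=0))  # shifted (wrong) pairing
```

Correct pairings should yield a much lower loss than mispaired batches, which is what drives the adapter to pull each transcriptomic profile toward its matching image embedding.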
We find that leveraging a pretrained encoder with adapters consistently outperforms both training from scratch and finetuning, for both scVI and scGPT. Furthermore, aligning transcriptomic representations to frozen image embeddings, as proposed in Semi-Clipped, yields superior performance compared to also training a microscopy imaging adapter.

We evaluate Semi-Clipped's ability to learn generalizable and biologically meaningful representations of transcriptomics compared to existing distillation methods, using an scVI pretrained encoder for transcriptomics. For clarity, we define label-free approaches as those that do not use biological labels in the training objective, even if labels are used for modality pairing. To ensure a fair comparison of the core methods, no data augmentation is applied.

Figure 2 (a) presents the performance of Semi-Clipped against various label-based and label-free distillation approaches on the Transcriptomic Interpretability Preservation and Known Biological Relationship Recall tasks across all three OOD datasets. Scores are standardized as z-scores and averaged over 5 seeds, with higher values indicating better performance. Distillation methods using label supervision (cool colors) generally show weaker relationship recall compared to unsupervised multimodal methods (warm colors) and even underperform the unimodal baseline in LINCS and SC-RPE1. In contrast, Semi-Clipped achieves the highest relationship recall in HUVEC-KO and SC-RPE1 while also slightly surpassing the unimodal baseline in transcriptomics interpretability in HUVEC-KO. This suggests successful knowledge transfer from morphology to transcriptomics without sacrificing interpretability. In LINCS, Semi-Clipped performs competitively, outperforming all label-supervised distillation
methods and the unimodal baseline in relationship recall while closely matching the best unsupervised multimodal methods. On the transcriptomics preservation metric for LINCS and SC-RPE1, Semi-Clipped retains strong interpretability, slightly trailing the unimodal baseline but outperforming most distillation approaches. This minor limitation likely reflects the challenge of maintaining interpretability across unseen cell types. Overall, Semi-Clipped effectively balances generalization and interpretability across all OOD settings, consistently achieving the most robust performance across all metrics and demonstrating its strength as a distillation method.

Figure 3. Ablation study on the known relationship recall score of hyperparameter choices (Tx Adapter learning rate, CLIP loss temperature, batch size, and training epochs) for training Semi-Clipped on the HUVEC-KO dataset, including the selected optimal configuration (dotted vertical line). For each studied parameter, we set all other hyperparameters at their best-performing value. While performance varies with parameter changes, the method remains largely robust, showing minimal degradation and no collapse.

To further assess Semi-Clipped's robustness, we conduct a detailed ablation study on individual hyperparameters when trained independently on the HUVEC-KO dataset (Figure 3). We isolate the effect of each parameter by fixing all others to their optimal values. The results reveal that while performance fluctuates with changes in configuration, Semi-Clipped remains resilient, exhibiting only minor degradation without any performance collapse. Optimal learning is achieved with a balanced learning rate, a lower temperature for the CLIP loss term, and larger batch sizes, though the method still performs competitively even with small batches.
Additionally, increasing the number of training epochs yields substantial improvements. These findings reinforce the method's stability and reliability across a wide range of training conditions.

5.2. PEA Enhances Distillation Across Methods and Synergizes with Existing Augmentations

We further evaluate the effectiveness of our proposed PEA data augmentation in enhancing distillation performance across both evaluation tasks. Using Semi-Clipped as the base model, we compare its performance over five training seeds in three settings: (1) without any data augmentation, (2) with multiple existing biologically inspired transcriptomics augmentations from the literature, each used separately, and (3) with PEA as the sole augmentation. This initial evaluation isolates the specific contribution of PEA. Additionally, to reflect real-world training conditions where multiple augmentations are typically applied together, we conduct a broader comparison. Specifically, we compare Semi-Clipped trained with all existing biological augmentations except PEA against its performance when trained with the full set of augmentations, including PEA.

Figure 2 (b) presents the results of this comparison. Across all three evaluation datasets, PEA achieves state-of-the-art performance in Known Biological Relationship Recall, significantly outperforming all existing approaches. It also preserves transcriptomic interpretability, matching the no-augmentation baseline in SC-RPE1 while surpassing it in HUVEC-KO and LINCS. Notably, PEA alone improves performance over not using augmentations by 17% in HUVEC-KO, 55% in LINCS, and 20% in SC-RPE1. More strikingly, PEA outperforms the combined effect of all other biological augmentations used together, highlighting its strong biological foundation and
ability to introduce meaningful variation to the distillation process. Furthermore, integrating PEA with all other augmentations further enhances performance beyond using PEA alone, demonstrating its complementarity to existing transcriptomics augmentation techniques. This combined approach yields the highest overall improvements, increasing performance over the no-augmentation baseline by 25% in HUVEC-KO, 69% in LINCS, and 26% in SC-RPE1. These results confirm that PEA not only provides substantial individual benefits but also synergizes effectively with existing augmentation strategies.

We assess whether PEA enhances performance across different distillation approaches beyond Semi-Clipped and compare its impact on various methods. Specifically, we apply PEA to KD, SHAKE, VICReg, and Semi-Clipped and evaluate its effect on benchmark tasks. Each method is trained over 15 different seeds, both with and without PEA, and we use a Wilcoxon signed-rank test to determine the statistical significance of improvements. Table 1 summarizes the results: PEA consistently enhances performance across all three OOD datasets for every distillation approach, with particularly strong gains in LINCS and SC-RPE1. This confirms that PEA is broadly beneficial across methods.
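The paired significance test described above can be sketched as follows. This is a minimal one-sided Wilcoxon signed-rank implementation using the normal approximation, without tie or zero corrections (a real analysis would use scipy.stats.wilcoxon); the per-seed scores are made-up illustrations, not the paper's numbers.

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_signed_rank(x, y):
    """One-sided Wilcoxon signed-rank test of x > y, normal approximation.
    Minimal sketch: no tie/zero-rank corrections."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]                                   # drop zero differences
    n = len(d)
    ranks = np.abs(d).argsort().argsort() + 1.0     # ranks of |d| (ties broken arbitrarily)
    w_plus = ranks[d > 0].sum()                     # rank sum of positive differences
    mu = n * (n + 1) / 4.0                          # null mean of W+
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)  # null std of W+
    z = (w_plus - mu) / sigma
    p = 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))      # upper-tail p-value
    return w_plus, p

# Illustrative per-seed recall scores over 15 seeds (hypothetical values,
# NOT the paper's per-seed data); gains are consistently positive.
base = [19.7, 18.9, 20.3, 19.1, 21.0, 19.5, 18.7, 20.1,
        19.9, 20.6, 19.3, 18.8, 20.4, 19.6, 20.0]
gain = [3.1, 2.8, 3.6, 3.0, 3.9, 2.5, 3.2, 3.4,
        2.9, 3.7, 3.3, 2.6, 3.5, 3.0, 3.1]
with_pea = [b + g for b, g in zip(base, gain)]

w, p = wilcoxon_signed_rank(with_pea, base)  # p < 0.05: improvement is significant
```

Because every seed improves, W+ equals the maximum possible rank sum (15·16/2 = 120), giving a p-value well below 0.05.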
| Method | HUVEC-KO Tx Preservation | HUVEC-KO Known Relationships | LINCS Tx Preservation | LINCS Known Relationships | SC-RPE1 Tx Preservation | SC-RPE1 Known Relationships |
|---|---|---|---|---|---|---|
| Random baseline | 33.92 ±0.09 | 10.37 ±0.11 | 47.09 ±0.07 | 10.81 ±0.04 | 25.34 ±0.11 | 10.03 ±0.02 |
| Unimodal baseline | 52.23 ±0.34 | 16.51 ±0.85 | 93.35 ±0.07 | 12.21 ±0.11 | 37.75 ±0.28 | 25.29 ±0.24 |
| KD | 52.90 ±0.31 | 16.00 ±1.36 | 92.69 ±0.15 | 11.83 ±0.27 | 37.43 ±0.32 | 23.90 ±0.18 |
| KD + PEA | ↑54.12 ±0.63 | ↑20.65 ±1.78 | ↑93.11 ±0.29 | ↑15.73 ±0.59 | ↑37.55 ±0.38 | ↑29.02 ±0.36 |
| SHAKE | 51.93 ±0.80 | 17.02 ±1.02 | 91.64 ±0.23 | 12.09 ±0.49 | 36.95 ±0.46 | 25.13 ±0.19 |
| SHAKE + PEA | ↑52.93 ±0.83 | ↑19.98 ±1.34 | ↑92.43 ±0.31 | ↑16.84 ±0.51 | ↓36.15 ±0.51 | ↑30.81 ±0.31 |
| VICReg | 51.87 ±0.39 | 17.25 ±1.14 | 91.19 ±0.19 | 12.96 ±0.45 | 36.75 ±0.17 | 32.19 ±0.26 |
| VICReg + PEA | ↑53.76 ±0.66 | ↑20.46 ±0.83 | ↑91.22 ±0.25 | ↑18.12 ±0.19 | ↓36.33 ±0.22 | ↑38.14 ±0.29 |
| Semi-Clipped | 52.78 ±0.27 | 19.71 ±1.19 | 92.71 ±0.23 | 12.68 ±0.33 | 37.54 ±0.19 | 32.65 ±0.21 |
| Semi-Clipped + PEA | ↑53.87 ±0.37 | ↑23.05 ±0.42 | ↑93.15 ±0.38 | ↑19.63 ±0.18 | ↑37.56 ±0.15 | ↑39.84 ±0.23 |

Table 1. Performance improvement of different distillation methods with and without PEA under all OOD settings. We average the scores of 15 different seeds for each model, and the p-value of every result improvement is below 0.05 using the Wilcoxon signed-rank statistical test. Improvements from using PEA are indicated with upward arrows. For each OOD setting, the best-performing model is shown in bold, and the second-best is underlined. Using PEA as data augmentation for distillation approaches preserves the transcriptomics information while widely improving the zero-shot retrieval of known biological relationships for all three OOD datasets used for evaluation.

Notably, it also significantly improves Transcriptomic Interpretability, likely due to its ability to preserve biological information while introducing controlled variations, enhancing the signal-to-noise ratio.
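The core idea of PEA, repurposing batch correction against control samples as a stochastic augmentation, can be sketched as a toy pipeline. Everything here is an illustrative assumption: the correction operations (centering and scaling by control statistics), the function names, and the sampling scheme are simplified stand-ins for the paper's actual batch-correction steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def pea_augment(z, controls, steps, p_drop=0.3):
    """Toy PEA-style augmentation: batch-correct embeddings against a
    randomly sampled subset of control samples, stochastically dropping
    correction steps to create varied but biologically plausible views.
    Simplified sketch, not the paper's exact pipeline."""
    # randomly sample a varying number of control embeddings each call
    k = int(rng.integers(max(1, len(controls) // 4), len(controls) + 1))
    ctrl = controls[rng.choice(len(controls), size=k, replace=False)]
    out = z.copy()
    for step in steps:
        if rng.random() < p_drop:      # stochastically skip a correction step
            continue
        out = step(out, ctrl)
    return out

# simplified "batch correction" ops driven by control statistics
center = lambda z, c: z - c.mean(axis=0)          # remove control mean
scale = lambda z, c: z / (c.std(axis=0) + 1e-8)   # normalize by control spread

z = rng.normal(size=(8, 16))            # perturbed-sample embeddings
controls = rng.normal(size=(32, 16))    # negative-control embeddings
views = [pea_augment(z, controls, [center, scale]) for _ in range(5)]
```

Each call yields a different corrected view of the same samples, which is the kind of controlled variation the augmentation exploits during distillation.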
All gains in Known Relationship Recall between PEA and non-PEA settings are statistically significant (p-values < 0.05). Importantly, Semi-Clipped remains the top-performing approach in Known Biological Relationship Recall across all evaluation datasets when using PEA, while also achieving the second-best performance in Transcriptomic Interpretability Preservation across all datasets.

We analyze the contribution of each PEA component to performance improvements by conducting an ablation study on the HUVEC-KO dataset. We evaluate its impact on KD, SHAKE, VICReg, and Semi-Clipped, progressively adding PEA components to the base distillation methods without
augmentations. Each step in the ablation builds upon the previous one: (1) Fixed biological augmentation: applying a predefined set of batch correction techniques. (2) Inference on TVN-corrected embeddings: applying Typical Variation Normalization (TVN) (Ando et al., 2017) correction to z_S at inference before passing them to the adapter f_S. (3) Augmentation stochasticity: randomly dropping a subset of batch correction steps to introduce variation. (4) Control sampling: randomly sampling a varying amount of control samples for correction, completing the full PEA approach. Table 2 summarizes the results, averaged over 15 seeds and reporting Known Biological Relationship Recall for each distillation approach. Every component contributes incremental improvements, with control sampling providing the strongest boost, particularly for Semi-Clipped. Importantly, all distillation methods show consistent performance gains at each step, indicating that each PEA component plays a critical role in enhancing distillation outcomes.

5.3. Semi-Clipped with PEA Enables Synergistic Integration of Morphological and Transcriptomic Insights

We analyze the biological insights provided by Semi-Clipped trained with PEA, comparing the known biological relationships it retrieves to those identified independently by unimodal microscopy imaging and transcriptomics models on the HUVEC-KO OOD dataset. Specifically, we evaluate the quantity and overlap of relationships retrieved by KD, SHAKE, VICReg, and Semi-Clipped, all trained with PEA, to assess whether these models remain faithful to transcriptomics-specific relationships or exhibit modality drift. This is quantified by measuring the intersection between relationships retrieved by each distillation method and those identified by the unimodal transcriptomics model. Figure 4 presents Venn diagrams of these intersections.
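The Venn-diagram comparison reduces to simple set operations over retrieved relationships. The relationship pairs below are hypothetical stand-ins (chosen only for illustration), not the actual retrieved sets:

```python
# Hypothetical retrieved relationship sets; in the real evaluation these
# are the known biological relationships recovered by each model's
# zero-shot retrieval.
transcriptomics = {("TP53", "MDM2"), ("BRCA1", "BARD1"), ("CDK1", "CCNB1")}
microscopy = {("CDK1", "CCNB1"), ("ACTB", "MYH9")}
distilled = {("TP53", "MDM2"), ("CDK1", "CCNB1"),
             ("ACTB", "MYH9"), ("AURKA", "TPX2")}

# overlap with the unimodal transcriptomics model: faithfulness to the modality
overlap_tx = distilled & transcriptomics

# Distillation \ (Transcriptomics ∪ Microscopy): emergent relationships
# retrieved by neither unimodal model alone
emergent = distilled - (transcriptomics | microscopy)
```

A large `overlap_tx` indicates the distilled model stays faithful to transcriptomics-specific signal, while a non-empty `emergent` set is the candidate pool for synergistic, cross-modal insights analyzed later with GSEA.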
Semi-Clipped shows strong alignment with transcriptomics-retrieved relationships while also capturing additional biological insights typically associated with morphological features. In contrast, while the other distillation approaches retrieve many known relationships, they exhibit minimal overlap with those identified by transcriptomics alone. Notably, KD and SHAKE, both label-based methods, demonstrate particularly weak alignment with transcriptomics relationships, likely due to the confounding effects of weak biological labels used during training. These findings suggest that Semi-Clipped effectively preserves transcriptomic insights while significantly enriching them with complementary morphological information, achieving a better balance between biological faithfulness and multimodal integration.

| PEA Configuration | KD | SHAKE | VICReg | Ours |
|---|---|---|---|---|
| Base Method | 16.00 | 17.02 | 17.25 | 19.71 |
| + Fixed Bio-Aug | 16.76 | 17.22 | 17.97 | 19.95 |
| + Inference on TVN | 18.76 | 18.32 | 18.62 | 20.58 |
| + Aug. Stochasticity | 19.37 | 18.84 | 19.43 | 21.79 |
| + Ctrl Sampling (PEA) | 20.65 | 19.98 | 20.46 | 23.05 |

Table 2. Ablation study of different PEA components on the HUVEC-KO evaluation dataset, on retrieval of known relationships of different distillation methods, averaged over 15 seeds. We see that combining all PEA components achieves significant improvements, especially when using our Semi-Clipped approach.

We next investigate whether distilling morphological features into transcriptomics yields a purely additive effect or if it generates emergent synergies between modalities. To assess this, we analyze the set Distillation \ (Transcriptomics ∪ Microscopy) in Figure 4, representing relationships uniquely retrieved by the distillation model but absent in unimodal transcriptomics or microscopy imaging. For all evaluated methods,
we perform Gene-Set Enrichment Analysis (GSEA) (Subramanian et al., 2005) to identify enriched biological pathways within this set compared to other distinct relationships retrieved by each distillation approach, filtering for gene sets with p-values < 0.01. Surprisingly, KD, SHAKE, and VICReg fail to significantly enrich any biological pathway, whereas Semi-Clipped uniquely enriches pathways related to the cell cycle and post-translational modifications (Appendix Table 3). This suggests that distilling morphological traits into transcriptomics using our approach enhances the capture of cell cycle-related information, which may be less detectable or noisier in either modality alone. This outcome likely arises from Semi-Clipped's ability to integrate rich phenotypic information from microscopy imaging, including morphological traits, spatial organization, and cellular process indicators, with transcriptomic markers such as mitochondrial RNA gene counts, often associated with cell cycle activity. This fusion enables a deeper biological synergy, allowing distillation to reveal novel biological insights that neither modality could achieve independently, while still permitting unimodal inference rather than requiring multimodal fusion.

6. Discussion

In this work, we introduced Semi-Clipped, a self-supervised framework for distilling morphological knowledge of biology into transcriptomic representations using multimodal alignment techniques. Additionally, we proposed PEA, a biologically informed augmentation strategy that repurposes batch correction to enhance representation learning. Our results demonstrate that Semi-Clipped outperforms existing distillation methods while preserving transcriptomic interpretability. Furthermore, we show that label-free distillation consistently surpasses label-based approaches, reinforcing that biological labels often lack the granularity needed to fully capture cellular complexity.
A key contribution of this work is the reinterpretation of batch correction as a biologically meaningful data augmentation. Unlike conventional transcriptomic data augmentations that may disrupt critical expression signals, PEA introduces plausible variability while maintaining essential biological properties. This approach significantly improves cross-modal distillation performance, increasing Known Biological Relationship Recall in OOD tests while preserving interpretability. Beyond aligning transcriptomic and morphological information, Semi-Clipped reveals emergent biological synergies, particularly in cell cycle regulation and post-translational modifications.

Figure 4. Venn diagrams of retrieved biological relationships for KD, SHAKE, VICReg, and Semi-Clipped (all trained with PEA) on the HUVEC-KO OOD dataset. Semi-Clipped shows the highest overlap with transcriptomics while integrating morphological insights, whereas KD and SHAKE exhibit the weakest alignment, possibly due to reliance on weak biological labels. Detailed measures of the gains and losses of each method in each modality are available in Figure 5.

Despite these advantages, challenges remain. Random pairing within treatment groups may dilute representation quality when subtle intra-group differences exist, highlighting the need for better matching strategies. Limited large-scale paired data also restricts broader applicability. Nonetheless, Semi-Clipped is computationally efficient: training on a scaled version of our dataset with 1.3 million weakly paired samples takes only 19 hours on a single H100 GPU, thanks to the use of frozen backbones and lightweight adapters. As multimodal datasets grow, scaling these methods could further advance biological research and cross-modal understanding.

Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning in its application to life
sciences. There are many potential societal consequences of our work, especially relating to the discovery of new biological relationships and potential drug treatments. Utmost care should be taken to validate safety and efficacy of model predictions in pre-clinical trials.

References

Alfasly, S., Shafique, A., Nejat, P., Khan, J., Alsaafin, A., Alabtah, G., and Tizhoosh, H. Rotation-agnostic image representation learning for digital pathology. In CVPR, 2024.

Ando, D. M., McLean, C. Y., and Berndl, M. Improving phenotypic measurements in high-content imaging screens. bioRxiv, July 2017.

Bardes, A., Ponce, J., and LeCun, Y. Vicreg: Variance-invariance-covariance regularization for self-supervised learning. In ICLR, 2022.

Bendidi, I., Bardes, A., Cohen, E., Lamiable, A., Bollot, G., and Genovesio, A. No free lunch in self supervised representation learning, 2023.

Bendidi, I., Bardes, A., Cohen, E., Lamiable, A., Bollot, G., and Genovesio, A. Exploring self-supervised learning biases for microscopy image representation. Biological Imaging, 4, 2024a. doi: 10.1017/S2633903X2400014X.

Bendidi, I., Whitfield, S., Kenyon-Dean, K., Yedder, H. B., Mesbahi, Y. E., Noutahi, E., and Denton, A. K. Benchmarking transcriptomics foundation models for perturbation analysis: one PCA still rules them all. In NeurIPS AIDrugX Workshop, 2024b.

Bourriez, N., Bendidi, I., Cohen, E., Watkinson, G., Sanchez, M., Bollot, G., and Genovesio, A. ChAda-ViT: Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images. In CVPR, 2024.

Camunas-Soler, J. Integrating single-cell transcriptomics with cellular phenotypes: cell morphology, Ca2+ imaging and electrophysiology. Biophys. Rev., 16(1):89–107, February 2024.

Carpenter, A. E., Jones, T. R., Lamprecht, M. R., Clarke, C., Kang, I. H., Friman, O., Guertin, D. A., Chang, J.
H., Lindquist, R. A., Moffat, J., Golland, P., and Sabatini, D. M. CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol., 7(10):R100, October 2006.

Celik, S., Hütter, J.-C., Carlos, S. M., Lazar, N. H., Mohan, R., Tillinghast, C., Biancalani, T., Fay, M. M., Earnshaw, B. A., and Haque, I. S. Building, benchmarking, and exploring perturbative maps of transcriptional and morphological data. PLOS Computational Biology, 20(10):1–24, 10 2024. doi: 10.1371/journal.pcbi.1012463.

Chandrasekaran, S. N., Ackerman, J., Alix, E., Ando, D. M., Arevalo, J., Bennion, M., Boisseau, N., Borowa, A., Boyd, J. D., Brino, L., Byrne, P. J., Ceulemans, H., Ch'ng, C., Cimini, B. A., Clevert, D.-A., Deflaux, N., Doench, J. G., Dorval, T., Doyonnas, R., Dragone, V., Engkvist, O., Faloon, P. W., Fritchman, B., Fuchs, F., Garg, S., Gilbert, T. J., Glazer, D., Gnutt, D., Goodale, A., Grignard, J., Guenther, J., Han, Y., Hanifehlou, Z., Hariharan, S., Hernandez, D., Horman, S. R., Hormel, G., Huntley, M., Icke, I., Iida, M., Jacob, C. B., Jaensch, S., Khetan, J., Kost-Alimova, M., Krawiec, T., Kuhn, D., Lardeau, C.-H., Lembke, A., Lin, F., Little, K. D., Lofstrom, K. R., Lotfi, S., Logan, D. J., Luo, Y., Madoux, F., Marin Zapata, P. A., Marion, B. A., Martin, G., McCarthy, N. J., Mervin, L., Miller, L.,
Mohamed, H., Monteverde, T., Mouchet, E., Nicke, B., Ogier, A., Ong, A.-L., Osterland, M., Otrocka, M., Peeters, P. J., Pilling, J., Prechtl, S., Qian, C., Rataj, K., Root, D. E., Sakata, S. K., Scrace, S., Shimizu, H., Simon, D., Sommer, P., Spruiell, C., Sumia, I., Swalley, S. E., Terauchi, H., Thibaudeau, A., Unruh, A., Van de Waeter, J., Van Dyck, M., van Staden, C., Warchoł, M., Weisbart, E., Weiss, A., Wiest-Daessle, N., Williams, G., Yu, S., Zapiec, B., Żyła, M., Singh, S., and Carpenter, A. E. JUMP cell painting dataset: morphological impact of 136,000 chemical and genetic perturbations. bioRxiv, March 2023.

Chen, R. J., Ding, T., Lu, M. Y., Williamson, D. F. K., Jaume, G., Song, A. H., Chen, B., Zhang, A., Shao, D., Shaban, M., Williams, M., Oldenburg, L., Weishaupt, L. L., Wang, J. J., Vaidya, A., Le, L. P., Gerber, G., Sahai, S., Williams, W., and Mahmood, F. Towards a general-purpose foundation model for computational pathology. Nature Medicine, 2024.

Conesa, A., Madrigal, P., Tarazona, S., Gomez-Cabrero, D., Cervera, A., McPherson, A., Szcześniak, M. W., Gaffney, D. J., Elo, L. L., Zhang, X., and Mortazavi, A. A survey of best practices for RNA-seq data analysis. Genome Biol., 17(1), December 2016.

Corso, G., Stärk, H., Jing, B., Barzilay, R., and Jaakkola, T. Diffdock: Diffusion steps, twists, and turns for molecular docking. In ICLR, 2023.

Cui, H., Wang, C., Maan, H., Pang, K., Luo, F., Duan, N., and Wang, B. scGPT: toward building a foundation model for single-cell multi-omics using generative AI. Nat. Methods, 21(8):1470–1480, August 2024.

DeVries, T. and Taylor, G. W. Dataset augmentation in feature space, 2017.
Evans, R., O'Neill, M., Pritzel, A., Antropova, N., Senior, A., Green, T., Žídek, A., Bates, R., Blackwell, S., Yim, J., Ronneberger, O., Bodenstein, S., Zielinski, M., Bridgland, A., Potapenko, A., Cowie, A., Tunyasuvunakool, K., Jain, R., Clancy, E., Kohli, P., Jumper, J., and Hassabis, D. Protein complex prediction with AlphaFold-Multimer. bioRxiv, October 2021.

Fang, Z., Wang, J., Hu, X., Wang, L., Yang, Y., and Liu, Z. Compressing visual-linguistic model via knowledge distillation. In ICCV, 2021.

Fay, M. M., Kraus, O., Victors, M., Arumugam, L., Vuggumudi, K., Urbanik, J., Hansen, K., Celik, S., Cernek, N., Jagannathan, G., Christensen, J., Earnshaw, B. A., Haque, I. S., and Mabey, B. Rxrx3: Phenomics map of biology. bioRxiv, 2023. doi: 10.1101/2023.02.07.527350.

Fradkin, P., Azadi, P., Suri, K., Wenkel, F., Bashashati, A., Sypetkowski, M., and Beaini, D. How molecules impact cells: Unlocking contrastive phenomolecular retrieval. In NeurIPS, 2024.

Gupta, S., Hoffman, J., and Malik, J. Cross modal distillation for supervision transfer. In CVPR, 2016.

Hager, P., Menten, M. J., and Rueckert, D. Best of Both Worlds: Multimodal Contrastive Learning with Tabular and Imaging Data. In CVPR, pp. 23924–23935, Los Alamitos, CA, USA, June 2023. IEEE Computer Society. doi: 10.1109/CVPR52729.2023.02291. URL https://doi.ieeecomputersociety.org/10.1109/CVPR52729.2023.02291.

han Li, P., Chinchali, S. P., and Topcu, U. Csa: Data-efficient mapping of
unimodal features to multimodal features, 2024.

Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. In NeurIPS Deep Learning Workshop, 2015.

Huang, T., You, S., Wang, F., Qian, C., and Xu, C. Knowledge distillation from a stronger teacher. In NeurIPS, 2022.

Huo, F., Xu, W., Guo, J., Wang, H., and Guo, S. C2kd: Bridging the modality gap for cross-modal knowledge distillation. In CVPR, 2024.

Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S. A. A., Ballard, A. J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., Back, T., Petersen, S., Reiman, D., Clancy, E., Zielinski, M., Steinegger, M., Pacholska, M., Berghammer, T., Bodenstein, S., Silver, D., Vinyals, O., Senior, A. W., Kavukcuoglu, K., Kohli, P., and Hassabis, D. Highly accurate protein structure prediction with alphafold. Nature, 596(7873):583–589, Aug 2021. ISSN 1476-4687. doi: 10.1038/s41586-021-03819-2.

Kenyon-Dean, K., Wang, Z. J., Urbanik, J., Donhauser, K., Hartford, J., Saberian, S., Sahin, N., Bendidi, I., Celik, S., Fay, M., Vera, J. S. R., Haque, I. S., and Kraus, O. Vitally consistent: Scaling biological representation learning for cell microscopy. In ICML, 2025.

Kharchenko, P. V. The triumphs and limitations of computational methods for scRNA-seq. Nat. Methods, 18(7):723–732, July 2021.

Kircher, M., Chludzinski, E., Krepel, J., Saremi, B., Beineke, A., and Jung, K. Augmentation of transcriptomic data for improved classification of patients with respiratory diseases of viral origin. Int. J. Mol. Sci., 23(5):2481, February 2022.

Kraus, O., Kenyon-Dean, K., Saberian, S., Fallah, M., McLean, P., Leung, J., Sharma, V., Khan, A., Balakrishnan, J., Celik, S., Beaini, D., Sypetkowski, M., Cheng, C. V., Morse, K., Makes, M., Mabey, B., and Earnshaw, B.
Masked autoencoders for microscopy are scalable learners of cellular biology. In CVPR, 2024.

Lafarge, M. W. and Koelzer, V. H. Rotation invariance and extensive data augmentation: A strategy for the mitosis domain generalization (midog) challenge. In Biomedical Image Registration, Domain Generalisation and Out-of-Distribution Analysis, pp. 62–67, Cham, 2022. Springer International Publishing. ISBN 978-3-030-97281-3.

Lan, Y.-T., Liu, W., and Lu, B.-L. Multimodal emotion recognition using deep generalized canonical correlation analysis with an attention mechanism. In IJCNN, 2020. doi: 10.1109/IJCNN48605.2020.9207625.

Lee, P., Kim, T., Shim, M., Wee, D., and Byun, H. Decomposed cross-modal distillation for rgb-based temporal action detection. In CVPR, 2023.

Li, H., Miao, C., Leung, C., Huang, Y., Huang, Y., Zhang, H., and Wang, Y. Exploring representation-level augmentation for code search, 2022.

Li, L. and Zhe, J. Shadow knowledge distillation: Bridging offline and online knowledge transfer. In NeurIPS, 2022.

Li, R., Wu, J., Li, G., Liu, J., Xuan, J., and Zhu, Q. Mdwgan-gp: data augmentation for gene expression data based on multiple discriminator WGAN-GP. BMC Bioinformatics, 24(1):427, November 2023.

Liu, T., Li, K., Wang, Y., Li, H., and Zhao, H. Evaluating the utilities of foundation models in single-cell data analysis.
bioRxiv, September 2023.

Lopez, R., Regier, J., Cole, M. B., Jordan, M. I., and Yosef, N. Deep generative modeling for single-cell transcriptomics. Nature Methods, 15(12):1053–1058, 2018.

Lu, S., Fürth, D., and Gillis, J. Integrative analysis methods for spatial transcriptomics. Nat. Methods, 18(11):1282–1283, November 2021.

Miao, Z., Humphreys, B. D., McMahon, A. P., and Kim, J. Multi-omics integration in the age of million single-cell data. Nat. Rev. Nephrol., 17(11):710–724, November 2021.

Moutakanni, T., Oquab, M., Szafraniec, M., Vakalopoulou, M., and Bojanowski, P. You don't need domain-specific data augmentations when scaling self-supervised learning. In NeurIPS, 2024.

Nouri, N. Single-cell RNA-seq data augmentation using generative fourier transformer. Commun. Biol., 8(1):113, January 2025.

Park, W., Kim, D., Lu, Y., and Cho, M. Relational knowledge distillation. In CVPR, June 2019.

Pham, C. and Plummer, B. A. Enhancing feature diversity boosts channel-adaptive vision transformers. In NeurIPS, 2024.

Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. Learning transferable visual models from natural language supervision. In NeurIPS, 2021.

Replogle, J. M., Saunders, R. A., Pogson, A. N., Hussmann, J. A., Lenail, A., Guna, A., Mascibroda, L., Wagner, E. J., Adelman, K., Lithwick-Yanai, G., Iremadze, N., Oberstrass, F., Lipson, D., Bonnar, J. L., Jost, M., Norman, T. M., and Weissman, J. S. Mapping information-rich genotype-phenotype landscapes with genome-scale perturb-seq. Cell, 185(14):2559–2575.e28, July 2022.

Reymond, J.-L. The chemical space project. Acc. Chem. Res., 48(3):722–730, March 2015.

Roheda, S., Riggan, B. S., Krim, H., and Dai, L. Cross-modality distillation: A case for conditional generative adversarial networks.
In ICASSP, 2018. doi: 10.1109/ICASSP.2018.8462082.

Rosen, Y., Roohani, Y., Agarwal, A., Samotorčan, L., Tabula Sapiens Consortium, Quake, S. R., and Leskovec, J. Universal cell embeddings: A foundation model for cell biology. bioRxiv, November 2023.

Ruppli, C., Gori, P., Ardon, R., and Bloch, I. Optimizing Transformations for Contrastive Learning in a Differentiable Framework, pp. 96–105. Springer Nature Switzerland, 2022. ISBN 9783031167607. doi: 10.1007/978-3-031-16760-7_10. URL http://dx.doi.org/10.1007/978-3-031-16760-7_10.

Saillard, C., Jenatton, R., Llinares-López, F., Mariet, Z., Cahané, D., Durand, E., and Vert, J.-P. H-optimus-0, 2024. URL https://github.com/bioptimus/releases/tree/main/models/h-optimus/v0.

Sanchez, M., Bourriez, N., Bendidi, I., Cohen, E., Svatko, I., Del Nery, E., Tajmouati, H., Bollot, G., Calzone, L., and Genovesio, A. Large scale cell painting guided compound selection reveals activity cliffs and functional relationships. May 2025.

Sanchez-Fernandez, A., Rumetshofer, E., Hochreiter, S., and Klambauer, G. CLOOME: contrastive learning unlocks bioimaging databases for queries with chemical structures. Nat. Commun., 14(1):7339, November 2023.

Sarkar, P. and Etemad, A. Xkd: Cross-modal knowledge distillation with domain alignment for video representation learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38. doi: 10.1609/aaai.v38i13.29407.

Subramanian, A., Tamayo, P., Mootha, V. K., Mukherjee, S., Ebert, B. L., Gillette, M. A., Paulovich, A., Pomeroy, S. L.,
https://arxiv.org/abs/2505.21317v1
Golub, T. R., Lander, E. S., and Mesirov, J. P. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl. Acad. Sci. U. S. A., 102(43):15545–15550, October 2005.

Subramanian, A., Narayan, R., Corsello, S. M., Peck, D. D., Natoli, T. E., Lu, X., Gould, J., Davis, J. F., Tubelli, A. A., Asiedu, J. K., Lahr, D. L., Hirschman, J. E., Liu, Z., Donahue, M., Julian, B., Khan, M., Wadden, D., Smith, I. C., Lam, D., Liberzon, A., Toder, C., et al. A next generation connectivity map: L1000 platform and the first 1,000,000 profiles. Cell, 171(6):1437–1452.e17, 2017.

Theodoris, C. V., Xiao, L., Chopra, A., Chaffin, M. D., Al Sayed, Z. R., Hill, M. C., Mantineo, H., Brydon, E. M., Zeng, Z., Liu, X. S., and Ellinor, P. T. Transfer learning enables predictions in network biology. Nature, 618(7965):616–624, June 2023.

Tsai, Y.-H. H., Wu, Y., Salakhutdinov, R., and Morency, L.-P. Self-supervised learning from a multi-view perspective, 2021. URL https://arxiv.org/abs/2006.05576.

Vorontsov, E., Bozkurt, A., Casson, A., Shaikovski, G., Zelechowski, M., Liu, S., Severson, K., Zimmermann, E., Hall, J., Tenenholtz, N., Fusi, N., Mathieu, P., van Eck, A., Lee, D., Viret, J., Robert, E., Wang, Y. K., Kunz, J. D., Lee, M. C. H., Bernhard, J., Godrich, R. A., Oakley, G., Millar, E., Hanna, M., Retamero, J., Moye, W. A., Yousfi, R., Kanan, C., Klimstra, D., Rothrock, B., and Fuchs, T. J. Virchow: A million-slide digital pathology foundation model, 2024. URL https://arxiv.org/abs/2309.07778.

Wang, Z., Codella, N., Chen, Y.-C., Zhou, L., Dai, X., Xiao, B., Yang, J., You, H., Chang, K.-W., Chang, S.-F., and Yuan, L. Multimodal adaptive distillation for leveraging unimodal encoders for vision-language tasks, 2022. URL https://arxiv.org/abs/2204.10496.

Watkinson, G., Cohen, E., Bourriez, N., Bendidi, I., Bollot, G., and Genovesio, A. Weakly supervised cross-modal learning in high-content screening. In ISBI, 2024. doi: 10.1109/ISBI56570.2024.10635200.

Wen, H., Tang, W., Dai, X., Ding, J., Jin, W., Xie, Y., and Tang, J. CellPLM: Pre-training of cell language model beyond single cells. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=BKXvPDekud.

Wenkel, F., Tu, W., Masschelein, C., Shirzad, H., Eastwood, C., Whitfield, S. T., Bendidi, I., Russell, C., Hodgson, L., Mesbahi, Y. E., Ding, J., Fay, M. M., Earnshaw, B., Noutahi, E., and Denton, A. K. TxPert: Leveraging biochemical relationships for out-of-distribution transcriptomic perturbation prediction, 2025.

Wu, K., Peng, H., Zhou, Z., Xiao, B., Liu, M., Yuan, L., Xuan, H., Valenzuela, M., Chen, X. S., Wang, X., Chao, H., and Hu, H. TinyCLIP: CLIP distillation via affinity mimicking and weight inheritance. In ICCV, 2023.

Xi, J., Osea, J., Xu, Z., and Hartford, J. Propensity score alignment of unpaired multimodal data. In NeurIPS, 2024.

Xie, R., Pang, K., Chung, S. W., Perciani, C., MacParland, S., Wang, B., and Bader, G. Spatially resolved gene expression prediction from histology images via bi-modal contrastive learning.
In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=eT1tMdAUoc.

Xue, Z., Ren, S., Gao, Z., and Zhao, H. Multimodal knowledge expansion. In ICCV, 2021. doi: 10.1109/ICCV48922.2021.00089.

Yang, C., An, Z., Huang, L., Bi, J., Yu, X., Yang, H., Diao, B., and Xu, Y. CLIP-KD: An empirical study of CLIP model distillation. In CVPR, 2024.

Yang, F., Wang, W., Wang, F., Fang, Y., Tang, D., Huang, J., Lu, H., and Yao, J. scBERT as a large-scale pretrained deep language model for cell type annotation of single-cell RNA-seq data. Nat. Mach. Intell., 4(10):852–866, September 2022.

Yao, H., Hanslovsky, P., Huetter, J.-C., Hoeckendorf, B., and Richmond, D. Weakly supervised set-consistency learning improves morphological profiling of single-cell images. In CVPR Workshops, 2024.

Ye, C., Ho, D. J., Neri, M., Yang, C., Kulkarni, T., Randhawa, R., Henault, M., Mostacci, N., Farmer, P., Renner, S., Ihry, R., Mansur, L., Keller, C. G., McAllister, G., Hild, M., Jenkins, J., and Kaykas, A. DRUG-seq for miniaturized high-throughput transcriptome profiling in drug discovery. Nat. Commun., 9(1):4307, October 2018.

Zhai, X., Mustafa, B., Kolesnikov, A., and Beyer, L. Sigmoid loss for language image pre-training. In ICCV, 2023.

Zhou, Y., Badgery, H., Read, M., Bailey, J., and Davey, C. E. DDA: Dimensionality driven augmentation search for contrastive learning in laparoscopic surgery, 2024. URL https://arxiv.org/abs/2406.00907.

Zhu, Y. and Wang, Y. Student customized knowledge distillation: Bridging the gap between student and teacher. In ICCV, 2021. doi: 10.1109/ICCV48922.2021.00501.

A. Detailed Experimental Setup

A.1. Encoders

We use three main models for our experiments, in addition to the MLP trained from scratch.
Phenom-1 (Kraus et al., 2024) is a Vision Transformer-based model with 300 million parameters, trained using a Masked Autoencoder (MAE) framework. It is pretrained on RPI-93M, a dataset of 93 million microscopy images capturing diverse cellular phenotypes across CRISPR, chemical, and soluble perturbations, making it highly effective for large-scale cellular morphology analysis.

scVI (Lopez et al., 2018) is a probabilistic generative model designed for single-cell RNA sequencing (scRNA-seq) data, trained using a Variational Autoencoder (VAE) framework. It encodes high-dimensional gene expression data into a biologically meaningful latent space, leveraging a zero-inflated negative binomial (ZINB) reconstruction objective to model overdispersion and dropout effects in transcriptomic data.

scGPT (Cui et al., 2024) is a transformer-based foundation model pretrained on 33 million scRNA-seq samples using a masked language modeling objective. It captures complex gene–gene and gene–cell interactions, with fine-tuning capabilities for tasks like cell type annotation, multi-omic integration, and perturbation response prediction.

These models provide robust, biologically relevant representations tailored for microscopy and transcriptomics data.

A.2. Implementation details

The MLP adapters $f_S$ and $f_T$ used in this work are fully connected feedforward networks designed to align embeddings from the transcriptomics (Tx) and microscopy imaging encoders into a shared latent space. For the transcriptomics adapter, the architecture comprises an input layer of size 256, two hidden layers with dimensions 512 and 1024 respectively, and an output
|
https://arxiv.org/abs/2505.21317v1
|
layer of size 768. The image adapter follows a similar design, with an input size of 768, two hidden layers of size 1024, and an output layer of size 768. ReLU activations are applied to all hidden layers, while the output layer uses a linear activation.

For VICReg, learning rates for the Tx and image adapters were 0.1 and $1\times10^{-8}$, respectively, and training spanned 10 epochs with a minimum learning rate of $10^{-10}$; the VICReg loss parameters (similarity, variance, and covariance weights) were kept at their default settings. Similarly, SigClip used Tx and Img adapter learning rates of 0.1 and $10^{-8}$, respectively, with training conducted for 10 epochs and a minimum learning rate of $10^{-10}$; the temperature and normalization parameters were kept at their defaults. For DCCA, the Tx and Img adapters were trained with learning rates of $10^{-6}$ and $10^{-8}$, respectively, over 50 epochs, with a minimum learning rate of $10^{-10}$ and loss parameters including an output dimension of 30, usage of all singular values, and an epsilon of $10^{-6}$. The SHAKE method utilized Tx and Img adapter learning rates of 0.1 and $10^{-8}$, Tx and Img classifier learning rates of $10^{-4}$ and $10^{-7}$, respectively, and a temperature of 9, with loss-balancing hyperparameters $\alpha = 10$ and $\beta = 0.001$; training was conducted over 10 epochs with a minimum learning rate of $10^{-10}$. For KD, the Tx adapter and classifier learning rates were 0.1 and $10^{-4}$, respectively, with a temperature of 9 and $\alpha = 10$, trained for 10 epochs with a minimum learning rate of $10^{-10}$. Lastly, C2KD employed Tx and Img adapter learning rates of 0.1 and $10^{-6}$, Tx and Img classifier learning rates of $10^{-5}$ and $10^{-3}$, respectively, and a temperature of 2 with a Kendall Rank Correlation threshold of 0.3; training spanned 30 epochs with a minimum learning rate of $10^{-7}$.
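For concreteness, the transcriptomics adapter's forward pass can be sketched as follows. This is a minimal NumPy illustration with randomly initialized (He-style) weights of our own choosing, purely to show the layer shapes; the actual adapters are trained modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Transcriptomics-adapter layer sizes as stated above:
# 256 -> 512 -> 1024 -> 768, ReLU on hidden layers, linear output.
sizes = [256, 512, 1024, 768]
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def tx_adapter(x):
    """Map a batch of Tx encoder embeddings (B, 256) to the shared space (B, 768)."""
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:  # the output layer stays linear
            h = relu(h)
    return h

out = tx_adapter(rng.standard_normal((4, 256)))
print(out.shape)  # (4, 768)
```

The image adapter follows the same pattern with sizes 768 → 1024 → 1024 → 768.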
At evaluation time, we perform TVN alignment on all output embeddings for the Known Relationship Recall benchmark, and use raw embeddings for the Transcriptomic Interpretability Preservation benchmark, as in (Bendidi et al., 2024b).

B. Evaluation tasks

B.1. Known Biological Relationship Recall

The Known Relationship Recall score is a benchmarking metric introduced in (Celik et al., 2024) and designed to evaluate the extent to which a perturbative map captures established biological relationships. This score serves as a proxy for assessing the biological relevance of the map and its ability to uncover meaningful interactions between genes. By comparing predicted relationships within the map to curated annotations from biological databases, the Known Relationship Recall score provides a quantitative measure of the map's fidelity to known biology. The computation of the Known Relationship Recall score follows these steps:

1. Pairwise Similarity Computation: For each pair of genes $(g_i, g_j)$ in the map, we compute the cosine similarity between their aggregated embeddings $x_{g_i}$ and $x_{g_j}$. The cosine similarity is defined as:

$$\cos(x_{g_i}, x_{g_j}) = \frac{\langle x_{g_i}, x_{g_j} \rangle}{\lVert x_{g_i} \rVert \, \lVert x_{g_j} \rVert},$$

where $\langle x_{g_i}, x_{g_j} \rangle$ is the dot product of the embeddings, and $\lVert x_{g_i} \rVert$ is the Euclidean norm of $x_{g_i}$.

2. Selection of Predicted Relationships: Relationships are classified as "predicted" if their cosine similarity scores fall into the top or
|
https://arxiv.org/abs/2505.21317v1
|
bottom relationships according to a percentage threshold (usually 5%) of the distribution of all pairwise similarities. High similarity scores indicate cooperative relationships, while low scores suggest functional opposition.

3. Validation Against Biological Databases: The predicted relationships are validated using established biological annotations from databases such as CORUM, HuMAP, Reactome, SIGNOR, and StringDB. Only genes that appear in the perturbation dataset are considered for pairs in the database.

4. Recall Calculation for Each Database: For each database, the recall is computed as the fraction of annotated relationships that are successfully identified among the predicted relationships:

$$\mathrm{Recall}_{\mathrm{db}} = \frac{\#(\text{True Positive Relationships})}{\#(\text{Total Annotated Relationships in Map})}.$$

Here, true positive relationships are those annotated in the database that also fall within the predicted set. The final Known Relationship Recall score is computed as the mean of the recall values across the five databases. The Known Relationship Recall score provides a single aggregated metric that encapsulates the map's ability to recapitulate established biological relationships. A high score indicates strong alignment with existing annotations, demonstrating the map's utility in representing meaningful biological interactions.

B.2. Transcriptomic Interpretability Preservation

The Transcriptomic Interpretability Preservation task, first introduced as the linear interpretability evaluation in (Bendidi et al., 2024b), is an evaluation framework designed to assess how well a model captures and preserves biologically meaningful patterns in transcriptomic data. This task evaluates the quality of the model's internal representations and their ability to reconstruct gene expression profiles accurately while maintaining the structural relationships between control and perturbation conditions.
By focusing on both the accuracy of reconstructed gene expression profiles and the preservation of batch-specific control-perturbation relationships, this metric provides a holistic view of the model's capability to retain the original transcriptomic interpretability. The evaluation relies on two complementary metrics, which are averaged to compute the final Transcriptomic Interpretability Preservation score:

Structural Integrity Score: This metric quantifies how well the model preserves the relationships between control and perturbation conditions within each biological batch. The Structural Integrity score is computed as:

$$\text{Structural Integrity} = 1 - \frac{\text{Structural Distance}}{\text{Structural Distance}_{\max}},$$

where the Structural Distance measures the Frobenius norm of the difference between the centered predicted and actual gene expression matrices, and $\text{Structural Distance}_{\max}$ is the theoretical maximum distance, as derived in (Bendidi et al., 2024b). A score close to 1 indicates strong preservation of the structural relationships.

Spearman Correlation of Reconstruction: This metric evaluates how accurately the model reconstructs original gene expression profiles from its internal latent representations. The Spearman correlation is calculated between the predicted and true gene expression profiles, providing a robust measure of rank-based agreement.

To provide a comprehensive evaluation, the Transcriptomic Interpretability Preservation metric is computed as the average of the Structural Integrity score and the Spearman correlation of reconstruction. By evaluating both aspects, the metric ensures that a model not only produces high-quality reconstructions but also retains the underlying biological structure of the data. This is crucial for downstream applications such as identifying gene interactions or studying the effects of perturbations in various conditions.
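To make the two benchmark computations concrete, the following is a self-contained toy sketch. It is our own simplification: four genes, random embeddings, an invented annotation set, and a loose triangle-inequality upper bound standing in for the paper's derived maximum structural distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- B.1: Known Biological Relationship Recall (toy version) ---
genes = ["g0", "g1", "g2", "g3"]
emb = rng.standard_normal((4, 8))          # aggregated per-gene embeddings

norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
cos = norm @ norm.T                        # pairwise cosine similarities

iu = np.triu_indices(len(genes), k=1)      # the 6 unique gene pairs
sims = cos[iu]

# Predicted relationships: top/bottom 5% of the similarity distribution
# (with only 6 pairs, this keeps the extreme pairs).
lo, hi = np.quantile(sims, [0.05, 0.95])
predicted = {(genes[i], genes[j])
             for i, j, s in zip(iu[0], iu[1], sims) if s <= lo or s >= hi}

annotated = {("g0", "g1"), ("g2", "g3")}   # toy stand-in for database annotations
recall = len(annotated & predicted) / len(annotated)

# --- B.2: Transcriptomic Interpretability Preservation (toy version) ---
X_true = rng.standard_normal((10, 5))      # true expression profiles
X_pred = X_true + 0.1 * rng.standard_normal((10, 5))

# Structural Integrity: 1 - ||centered difference||_F / max distance.
C_true = X_true - X_true.mean(axis=0)
C_pred = X_pred - X_pred.mean(axis=0)
d = np.linalg.norm(C_pred - C_true)
d_max = np.linalg.norm(C_pred) + np.linalg.norm(C_true)  # loose upper bound
structural_integrity = 1 - d / d_max

# Spearman correlation via ranks, averaged over profiles.
def spearman(a, b):
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rho = np.mean([spearman(t, p) for t, p in zip(X_true, X_pred)])
score = (structural_integrity + rho) / 2
print(round(recall, 2), round(score, 2))
```

In the actual benchmarks, the predicted set is validated per database and the recalls averaged, and the maximum structural distance follows the derivation in (Bendidi et al., 2024b) rather than the crude bound used here.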
C. Batch Correction Techniques

C.1. Centering

Centering
involves adjusting the dataset such that each feature has a mean of zero. This is achieved by subtracting the mean of each feature from the data. Given a feature matrix $X \in \mathbb{R}^{n \times m}$, where $n$ is the number of samples and $m$ is the number of features, the centered matrix $\tilde{X}$ is computed as:

$$\tilde{X}_{ij} = X_{ij} - \frac{1}{n}\sum_{k=1}^{n} X_{kj}, \quad \forall i = 1,\dots,n,\ \forall j = 1,\dots,m.$$

This step shifts the data so that each feature's mean is zero. In batch-corrected biological datasets, centering is typically applied to remove the influence of negative control embeddings, facilitating the focus on perturbation effects.

C.2. Center Scaling/Standardization

Center scaling/standardization extends centering by adjusting each feature so that it has unit variance. This ensures comparability across features. For a centered matrix $\tilde{X}$, the scaled matrix $\hat{X}$ is defined as:

$$\hat{X}_{ij} = \frac{\tilde{X}_{ij}}{\sigma_j}, \quad \sigma_j = \sqrt{\frac{1}{n}\sum_{k=1}^{n} \tilde{X}_{kj}^2}, \quad \forall i = 1,\dots,n,\ \forall j = 1,\dots,m,$$

where $\sigma_j$ represents the standard deviation of the $j$-th feature. Center scaling is important for techniques like Principal Component Analysis (PCA), which are influenced by the scale of the data.

C.3. Typical Variation Normalization (TVN)

Typical Variation Normalization (TVN) is a technique designed to enhance the representation of biological data by minimizing batch effects and accentuating subtle phenotypic differences. TVN is particularly relevant in high-content imaging screens and other scenarios with significant batch variability. TVN begins by computing the principal components of control samples (negative control conditions) to identify the primary directions of variation. PCA is performed on the centered control data $\tilde{X}_{\text{control}}$ to obtain principal components $\{v_1, \dots, v_m\}$, with each component representing a variance direction in the data space. The normalization process involves the following steps:
1. Centering and Scaling of Negative Controls: The negative control data $X_{\text{control}}$ is centered and scaled as:

$$\hat{X}_{\text{control}} = \frac{X_{\text{control}} - \mu_{\text{control}}}{\sigma_{\text{control}}},$$

where $\mu_{\text{control}}$ and $\sigma_{\text{control}}$ are the mean and standard deviation of the control embeddings.

2. Principal Component Analysis (PCA): PCA is conducted on $\hat{X}_{\text{control}}$ to derive principal components. The matrix $W \in \mathbb{R}^{m \times m}$ consists of columns that are the component vectors $v_j$.

3. TVN Transformation: The transformation matrix $T$ is constructed to normalize variance along each principal component axis:

$$T = W \cdot D^{-1/2} \cdot W^{\top},$$

where $D$ is a diagonal matrix of the eigenvalues associated with the principal components.

4. Application to All Embeddings: The transformation is applied to all embeddings $X_{\text{all}}$ as:

$$X_{\text{TVN}} = T \cdot X_{\text{all}}.$$

This step reduces unwanted variation while emphasizing important biological differences, enabling a focus on subtle or rare phenotypic features without batch-related artifacts.

D. Additional Results

Figure 5. Comparison of relationship gains and losses across cross-modal distillation methods shown in Figure 4. Our approach achieves the highest overall relationship recall and best preserves transcriptomic information.

Figure 6. Literature-known biological relationships retrieved through the LINCS dataset by the transcriptomics and microscopy imaging unimodal encoders, alongside our proposed Semi-Clipped approach, without data augmentations, and a null distribution obtained by randomizing the perturbation labels of the pretrained Semi-Clipped. Semi-Clipped remains consistent with transcriptomics while distilling new relationships from microscopy imaging, displaying a distinct pattern from the null distribution.
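The batch-correction steps of Appendix C (centering and scaling on negative controls, then TVN whitening) can be sketched together in NumPy. The data here are random toy embeddings; applying the control-based centering and scaling to all embeddings before the TVN transform is our reading of the procedure above.

```python
import numpy as np

rng = np.random.default_rng(0)

X_all = rng.standard_normal((100, 6))            # all embeddings (n x m)
X_control = 2.0 * rng.standard_normal((40, 6))   # negative-control embeddings

# C.1 + C.2: center and scale using control statistics.
mu = X_control.mean(axis=0)
sigma = X_control.std(axis=0)
X_control_hat = (X_control - mu) / sigma

# C.3: PCA of the controls via eigendecomposition of their covariance,
# cov = W diag(d) W^T, then T = W D^{-1/2} W^T (a ZCA-style whitening).
cov = np.cov(X_control_hat, rowvar=False)
d, W = np.linalg.eigh(cov)
T = W @ np.diag(1.0 / np.sqrt(d)) @ W.T

# Apply to all embeddings; T is symmetric, so right-multiplying row
# vectors matches X_TVN = T * x for column vectors x.
X_tvn = ((X_all - mu) / sigma) @ T

# Sanity check: whitened controls have (numerically) identity covariance.
print(np.allclose(np.cov(X_control_hat @ T, rowvar=False), np.eye(6)))  # True
```

The check at the end verifies the defining property of TVN: after the transform, variation along the controls' principal axes is normalized to unit scale.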
Figure 7. Literature-known biological relationships retrieved by the transcriptomics and microscopy imaging unimodal encoders, alongside our proposed Semi-Clipped approach (first row), without data augmentations, across different retrieval thresholds (columns) on the HUVEC-KO dataset. Semi-Clipped remains consistent with transcriptomics while distilling new relationships from microscopy imaging. In the second and third rows, we compare Semi-Clipped to a null distribution obtained by randomizing the perturbation labels of the pretrained Semi-Clipped. Our approach displays a distinct pattern from the null distribution, and aligns better with both modalities than random.

Enriched Pathways | Source Set | P-value
REACTOME CELL CYCLE CHECKPOINTS | Semi-Clipped \ (Tx ∪ Img) | 0.0323
KEGG ANTIGEN PROCESSING AND PRESENTATION | (Semi-Clipped ∩ Img) \ Tx | 0.0049
KEGG P53 SIGNALING PATHWAY | (Semi-Clipped ∩ Img) \ Tx | 0.0033
KEGG RIG I LIKE RECEPTOR SIGNALING PATHWAY | (Semi-Clipped ∩ Img) \ Tx | 0.0082
REACTOME ADAPTIVE IMMUNE SYSTEM | (Semi-Clipped ∩ Img) \ Tx | 0.0114
REACTOME ANTIGEN PRESENTATION FOLDING ASSEMBLY AND PEPTIDE LOADING OF CLASS I MHC | (Semi-Clipped ∩ Img) \ Tx | 0.0049
REACTOME ANTIGEN PROCESSING CROSS PRESENTATION | (Semi-Clipped ∩ Img) \ Tx | 0.0016
REACTOME ASPARAGINE N LINKED GLYCOSYLATION | (Semi-Clipped ∩ Img) \ Tx | 0.0049
REACTOME CALNEXIN CALRETICULIN CYCLE | (Semi-Clipped ∩ Img) \ Tx | 0.0049
REACTOME CELL CYCLE | (Semi-Clipped ∩ Img) \ Tx | 0.0480
REACTOME CLASS I MHC MEDIATED ANTIGEN PROCESSING PRESENTATION | (Semi-Clipped ∩ Img) \ Tx | 0.0049
REACTOME DDX58 IFIH1 MEDIATED INDUCTION OF INTERFERON ALPHA BETA | (Semi-Clipped ∩ Img) \ Tx | 0.0082
REACTOME DEUBIQUITINATION | (Semi-Clipped ∩ Img) \ Tx | 0.0130
REACTOME G1 S DNA DAMAGE CHECKPOINTS | (Semi-Clipped ∩ Img) \ Tx | 0.0033
REACTOME G ALPHA Q SIGNALLING EVENTS | (Semi-Clipped ∩ Img) \ Tx | 0.0065
REACTOME HEMOSTASIS | (Semi-Clipped ∩ Img) \ Tx | 0.0082
REACTOME INNATE IMMUNE SYSTEM | (Semi-Clipped ∩ Img) \ Tx | 0.0227
REACTOME NEGATIVE REGULATORS OF DDX58 IFIH1 SIGNALING | (Semi-Clipped ∩ Img) \ Tx | 0.0033
REACTOME N GLYCAN TRIMMING IN THE ER AND CALNEXIN CALRETICULIN CYCLE | (Semi-Clipped ∩ Img) \ Tx | 0.0049
REACTOME OVARIAN TUMOR DOMAIN PROTEASES | (Semi-Clipped ∩ Img) \ Tx | 0.0016
REACTOME PLATELET ACTIVATION SIGNALING AND AGGREGATION | (Semi-Clipped ∩ Img) \ Tx | 0.0065
REACTOME POST TRANSLATIONAL PROTEIN MODIFICATION | (Semi-Clipped ∩ Img) \ Tx | 0.0002
REACTOME REGULATION OF TP53 ACTIVITY | (Semi-Clipped ∩ Img) \ Tx | 0.0179
REACTOME REGULATION OF TP53 ACTIVITY THROUGH METHYLATION | (Semi-Clipped ∩ Img) \ Tx | 0.0016
REACTOME REGULATION OF TP53 ACTIVITY THROUGH PHOSPHORYLATION | (Semi-Clipped ∩ Img) \ Tx | 0.0179
REACTOME REGULATION OF TP53 EXPRESSION AND DEGRADATION | (Semi-Clipped ∩ Img) \ Tx | 0.0016
REACTOME RNA POLYMERASE II TRANSCRIPTION | (Semi-Clipped ∩ Img) \ Tx | 0.0480
REACTOME SIGNALING BY GPCR | (Semi-Clipped ∩ Img) \ Tx | 0.0082
REACTOME STABILIZATION OF P53 | (Semi-Clipped ∩ Img) \ Tx | 0.0016
REACTOME TRANSCRIPTIONAL REGULATION BY TP53 | (Semi-Clipped ∩ Img) \ Tx | 0.0195

Table 3. Gene Set Enrichment Analysis (GSEA) results on the HUVEC-KO dataset, highlighting enriched pathways identified uniquely in the Semi-Clipped approach compared to the transcriptomics and microscopy imaging unimodal encoders. The first row represents pathways uniquely enriched in Semi-Clipped after excluding the union of transcriptomics and morphological relationships, revealing enrichment in cell cycle pathways. The subsequent rows list pathways enriched in the intersection of Semi-Clipped and microscopy imaging, excluding transcriptomics relationships, which shows that in addition to Semi-Clipped's uniquely enriched pathways, our approach is also enriched in morphology-specific pathways.
Beyond Chemical QA: Evaluating LLM's Chemical Reasoning with Modular Chemical Operations

Hao Li1∗, He Cao2∗, Bin Feng2, Yanjun Shao3, Xiangru Tang3, Zhiyuan Yan1, Li Yuan1‡, Yonghong Tian1‡, Yu Li2‡
1Peking University, 2International Digital Economy Academy, 3Yale University
lihao1984@pku.edu.cn, caohe@idea.edu.cn, liyu@idea.edu.cn

Abstract

While large language models (LLMs) with Chain-of-Thought (CoT) reasoning excel in mathematics and coding, their potential for systematic reasoning in chemistry, a domain demanding rigorous structural analysis for real-world tasks like drug design and reaction engineering, remains untapped. Current benchmarks focus on simple knowledge retrieval, neglecting the step-by-step reasoning required for complex tasks such as molecular optimization and reaction prediction. To address this, we introduce ChemCoTBench, a reasoning framework that bridges molecular structure understanding with arithmetic-inspired operations, including addition, deletion, and substitution, to formalize chemical problem-solving into transparent, step-by-step workflows. By treating molecular transformations as modular "chemical operations", the framework enables slow-thinking reasoning, mirroring the logic of mathematical proofs while grounding solutions in real-world chemical constraints. We evaluate models on two high-impact tasks: Molecular Property Optimization and Chemical Reaction Prediction. These tasks mirror real-world challenges while providing structured evaluability. By providing annotated datasets, a reasoning taxonomy, and baseline evaluations, ChemCoTBench bridges the gap between abstract reasoning methods and practical chemical discovery, establishing a foundation for advancing LLMs as tools for AI-driven scientific innovation.1,2

1 Introduction

With the rapid advancement of large language models (LLMs), reasoning capabilities have become a defining measure of performance.
Techniques like chain-of-thought [64] prompting enable LLMs to decompose complex problems into structured, human-like reasoning steps (system-II [29]), achieving breakthroughs in mathematics [47, 54, 67], coding [14, 23], and even Olympiad-level challenges [17, 22, 61]. Despite recent advances in LLM reasoning capabilities, chemistry, a discipline fundamental to areas like drug discovery and materials science, still lacks a benchmark that assesses whether these improvements extend to its complex, domain-specific problem-solving needs. While several benchmarks have been proposed for LLMs in chemistry [16, 34, 38, 43, 69], they primarily focus on domain-specific question answering, which suffers from several key limitations:

1. Lack of Structured, Stepwise Reasoning and Real-World Relevance: Current evaluations often reduce chemistry assessment to factual recall (e.g., naming compounds or reactions), neglecting the need for operational reasoning akin to arithmetic or coding. Unlike mathematical problems, where solutions demand explicit, verifiable steps, chemistry QA tasks fail to simulate how experts

1ChemCoTBench: https://huggingface.co/datasets/OpenMol/ChemCoTBench
2ChemCoTDataset: https://huggingface.co/datasets/OpenMol/ChemCoTBench-CoT

Preprint. Under review. arXiv:2505.21318v1 [cs.AI], 27 May 2025

Figure 1: Previous chemical benchmarks focus
https://arxiv.org/abs/2505.21318v1
on factual recall with domain knowledge, while our ChemCoTBench focuses on the evaluation of step-wise reasoning for complex chemical problems by defining a set of modular chemical operations.

decompose challenges. For instance, they don't capture the process of iteratively refining a molecule's substructure to optimize properties, considering crucial real-world factors like synthesizability or toxicity, or deducing reaction mechanisms through intermediate transformations. This gap means we're not fully evaluating the analytical depth required in real-world chemistry. Therefore, evaluations must shift from these textbook-like problems to challenges that better reflect practical applications.

2. Ambiguous Skill Attribution in Hybrid Evaluations: Existing benchmarks [37, 50, 63] often conflate reasoning, knowledge recall, and numerical computation into single "exam-style" metrics, for instance asking LLMs to calculate reaction yields while simultaneously recalling reagent properties. This obscures whether strong performance stems from structured reasoning (e.g., analyzing reaction pathways) or memorized facts (e.g., solvent boiling points). Such ambiguity hinders targeted model improvement and misaligns evaluations with downstream tasks like drug discovery, where success depends on modular reasoning (e.g., decoupling molecular design from synthesizability checks) rather than monolithic problem-solving.

To address these limitations, we introduce ChemCoTBench, a step-by-step, application-oriented, and high-quality benchmark for evaluating LLM reasoning in chemical applications. A core innovation of ChemCoTBench is its formulation of complex chemical tasks, specifically targeting molecular modeling and design (Fig. 1), into explicit sequences of verifiable modular chemical operations on SMILES structures (e.g., substructure addition, deletion, or substitution).
This approach allows for a granular assessment of an LLM's ability to execute and chain together fundamental chemical transformations. The benchmark features progressively challenging tasks, spanning from basic molecular understanding and editing to property-guided structure optimization and complex multi-molecule chemical reactions. High-quality evaluation is ensured through a dual validation process combining LLM judgment with expert review from 13 chemists. We employ quantitative assessments for all subtasks in ChemCoTBench to evaluate chemical reasoning ability across reasoning-enhanced and non-reasoning LLMs. Experimental results reveal room for improvement in reasoning LLMs, particularly open-source and distilled-reasoning LLMs, when addressing complex chemical problems. While these models demonstrate strong performance in complex mathematical and coding tasks, they are unable to organize chemical knowledge and establish step-wise modular chemical operations due to the scarcity of chemical reasoning data. Notably, ChemCoTDataset, the large chemical CoT dataset provided by ChemCoTBench, is shown to enhance chemical reasoning performance, effectively addressing the reasoning-data scarcity issue in chemistry.

To summarize, our key contributions in this work are as follows. Firstly, to address the lack of reasoning- and application-oriented tasks in existing chemical benchmarks, we propose ChemCoTBench, which evaluates the chemical capabilities of reasoning LLMs through step-by-step tasks centered on molecular structure modification. Secondly, ChemCoTDataset is provided by ChemCoTBench to facilitate LLMs on chemical reasoning. Finally, extensive experiments demonstrate the effectiveness of ChemCoTBench and its corresponding ChemCoTDataset.

2 Related Works

LLM Chain-of-Thoughts. LLMs have progressed from text generators to reasoning systems, with [64]'s Chain-of-Thought enabling stepwise problem decomposition via "slow-thinking" paradigms.
These reasoning-enhanced LLMs have shown impressive performance in domains requiring systematic problem-solving skills, particularly in mathematics and coding. Models like DeepSeek-R1 [13], Gemini [56], and Anthropic
Claude [53] have achieved notable results on mathematical benchmarks like MATH [19] and GSM8K [6], while also excelling at programming. Recent studies have begun exploring LLMs for chemical tasks, such as synthesis planning [4] and computational chemistry [26, 45, 51]. However, these efforts lack a systematic evaluation of LLMs' chemical reasoning capabilities, spanning spatial reasoning, domain-specific knowledge integration, and multi-step logical inference.

Chemical Benchmarks. Current chemical benchmarks primarily focus on assessing discrete knowledge retrieval or simple prediction tasks, rather than evaluating the step-by-step reasoning processes crucial for complex chemical problem-solving. Most existing benchmarks [37, 38, 50, 63] concentrate on question-answering formats that test factual recall and precise calculation but offer limited insight into a model's ability to reason through multi-step chemical problems. Studies like [3, 15, 43] have begun exploring LLMs' chemical capabilities but typically focus on isolated tasks rather than comprehensive reasoning scenarios. Recent work by [69] introduces ChemLLM, a chemistry-specialized LLM framework with supporting datasets, but its benchmark focuses on knowledge recall rather than complex reasoning. Similarly, [15] introduces MolPuzzle, a benchmark for molecular structure elucidation that advances spatial reasoning evaluation but remains limited to spectral interpretation rather than broader chemical reasoning. ChemCoTBench advances chemical reasoning evaluation by using molecular structure to guide step-by-step reasoning, featuring core chemical arithmetic tasks and advanced cross-context applications for more thorough LLM assessment.
3 ChemCoTBench Construction

[Figure 2 panels: (a) task distribution for ChemCoTBench (Molecule Optimization 38%, Reaction Prediction 37%, Molecule Understanding 19%, Molecule Editing 6%), with subtask breakdowns for molecule understanding, editing, optimization, and reaction prediction; (b, c) per-subtask chemical-expert evaluation accuracies.]

Figure 2: (a) Distribution analysis for ChemCoTBench. (b) Samples from both molecular understanding and editing tasks achieved exceptionally high accuracy in chemical expert evaluations of chemical entities, including functional group names, molecule names, chemical operation names, reaction information, etc. (c) Samples from molecule optimization and reaction prediction also show high accuracy (> 89%) in chemical expert evaluations.

ChemCoTBench contains 1,495 samples across 22 chemical tasks as the benchmark dataset, as shown in Fig. 2(a). 14,000 high-quality samples with chain-of-thought annotations are further sampled to form the ChemCoTDataset. ChemCoTBench was constructed through over 1,800 hours of combined expert and LLM-assisted annotation. It comprises four main tasks and 22 subtasks, covering a broad spectrum of chemical challenges. We define the reasoning steps of each task as modular chemical operations, as shown in the bottom two lines of Fig. 3. ChemCoTBench is guided by two core principles: Diversity and Quality.
Molecular diversity is ensured by systematically selecting compounds with varied scaffolds and functional groups, enabling broad coverage of real-world chemical scenarios. To ensure high data quality, all benchmark samples undergo a multi-stage hybrid review by LLMs and expert chemists,
https://arxiv.org/abs/2505.21318v1
with prompt templates iteratively refined to meet subtask-specific requirements.

3.1 Task Construction

To evaluate the capabilities of LLMs in chemistry, we constructed a comprehensive suite of tasks.

Foundation Task: Molecule-Understanding. We begin with the recognition and counting of two fundamental elements of molecules: (1) Functional groups (FGs), critical clusters of atoms that determine the physicochemical properties and reactivity of organic molecules; (2) Rings, which maintain fixed conformations and serve as stable building blocks in drug design, crystal engineering, and polymer synthesis. Recognizing and counting FGs and rings requires syntactic and lexical understanding of SMILES and remains challenging for LLMs due to their limited awareness of chemical topology. Next, we evaluate the recognition of two more complex scaffolds: (1) Murcko scaffolds, molecular frameworks obtained by systematically removing side chains, which serve as a foundation for structural analysis in medicinal chemistry; (2) Ring systems, including fused and bridged ring systems, which pose a significant challenge for molecular synthesis. These tasks assess deeper hierarchical comprehension. Finally, we introduce SMILES equivalence tasks, involving permutations and mutations, to test whether LLMs can recognize chemically equivalent structures despite surface-level variations. This probes the models' robustness to SMILES variability.

Foundation Task: Molecule-Editing. This task assesses whether LLMs can perform basic molecular editing operations, such as adding, deleting, and substituting functional groups, when guided by natural-language instructions. Analogous to basic arithmetic in mathematics, these editing operations form the building blocks of molecular manipulation: complex tasks like molecular optimization or synthesis can be translated into specific editing operations.
For example, a molecular optimization task can be treated as a series of molecule-editing tasks aimed at improving chemical or biological properties. This task evaluates two core capabilities: maintaining chemical validity after editing operations and correctly executing the modifications described in the textual instructions.

Application Task: Molecule-Optimization. This task evaluates whether LLMs can generate optimized molecules given a source molecule and a target property. We consider two levels of molecular properties. At the physicochemical level, we aim to improve LogP, solubility, and QED for better drug-likeness. At the target level, we aim to improve binding affinity for the DRD2, GSK-3β, and JNK3 targets, a more challenging task because it requires an understanding of drug-target interactions. Solving these problems demands in-depth analysis and reasoning: LLMs must not only parse the molecular structure but also infer how specific structural modifications influence target properties through complex chemical and biological interactions.

Application Task: Reaction Prediction. This task evaluates LLMs' chemical reasoning ability across four subtasks: (1) Forward Prediction: predict major products and by-products from reactants and reagents, requiring knowledge of reactivity, reaction rules, and stability; by-product prediction aids reaction optimization and purification by reflecting kinetics and thermodynamics. (2) Single-Step Retrosynthesis: given a product and reagents, predict reactants by identifying key bond disconnections and functional-group transformations under constraints. (3) Reaction Condition Recommendation: suggest catalysts, solvents, and reagents for given reactants and products, relying on an understanding of solvent effects, catalyst mechanisms,
and their impact on yield and selectivity. (4) Reaction Mechanism Understanding: includes Next Elementary-Step Product Prediction (predicting intermediates stepwise, testing electron-flow modeling) and Mechanism Route Selection (choosing the most plausible pathway from alternatives, assessing mechanistic reasoning). Together, these tasks span from overall product prediction to detailed mechanistic insight, providing a comprehensive test of LLMs as chemical reasoning agents.

3.2 Benchmark Construction

Data Collection. Raw molecular structures for understanding, editing, and optimization are sourced from published datasets, including PubChem [30], ChEMBL [11], ZINC [25], and Deep-Mol-Opt [18]. Chemical reactions are collected from patent databases such as USPTO [21], Pistachio [42], and Reaxys [8]. For reaction mechanism annotation, we follow the processing pipeline proposed in [28]. The complete data collection protocols are archived in Appendix B.

Data Filtering and Sampling. An initial filtration step removed samples that contain metals, exhibit excessive molecular complexity (defined by the presence of multiple sophisticated functional groups and polycyclic architectures), or are factually inconsistent. To ensure both high data diversity and broad coverage, we systematically curate diverse chemical features across tasks.

Figure 3: The dataset construction pipeline of ChemCoTBench contains four steps: (1) raw data collection, gathering molecules and reaction equations from PubChem, ChEMBL, ZINC, etc.; (2) filtering and stratified sampling over different molecular weights, complexity levels, and functional groups / reaction types, removing molecules that contain metals, are ambiguous, or are too complex; (3) chain-of-thought annotation, summarizing step-wise reasoning from reasoning models and performing CoT distillation with DeepSeek-R1; and (4) review and refinement, in which chemistry PhDs evaluate samples and correct flawed reasoning chains. The figure also visualizes worked examples of the four main tasks and their modular chemical operations, e.g., counting methyl groups in p-xylene (CC1=CC=C(C)C=C1), editing Clc1ccc(C)cc1 into 4-nitrotoluene, optimizing benzene for solubility, and predicting the Friedel-Crafts product of benzene and chloroethane with AlCl3.

For molecular understanding, the dataset includes 38 functional groups and 9 ring types. For editing, we cover 57 functional-group transformations. Optimization tasks span 4 molecular-weight-based structural scales. For reaction tasks, we include 100 common reaction classes, 175 distinct reaction conditions, and 123 annotated reaction mechanisms. Together, these components offer a rich and representative benchmark dataset for evaluating chemical reasoning in LLMs.

Chain-of-Thought Annotation for Modular Chemical Operations. To derive intermediate reasoning steps for complex chemical problems, we distill chain-of-thought annotations from LLMs and arrange them as modular chemical operations for systematic evaluation and supervised fine-tuning of reasoning models. Specifically, we analyze the problem-solving strategies of state-of-the-art reasoning models, including Gemini-2.5-pro, DeepSeek-R1, and Claude-3.7-sonnet-thinking, to extract step-wise reasoning patterns. These are distilled into a structured training corpus using DeepSeek-R1 via CoT prompting. As illustrated in Fig. 3, our distilled CoT samples span key chemical tasks, including molecular understanding, editing, optimization, and reaction prediction.
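The filter-and-stratified-sampling stage of this pipeline can be sketched as follows. The descriptor fields (`has_metal`, `n_rings`, `n_func_groups`, `mol_weight`) and the cutoffs are illustrative assumptions, not the authors' actual implementation; in practice these descriptors would come from a cheminformatics toolkit such as RDKit.

```python
import random
from collections import defaultdict

def curate(molecules, per_stratum=2, seed=0):
    """Filter out unusable molecules, then stratified-sample for diversity.

    `molecules` is a list of dicts with precomputed descriptors; the field
    names and thresholds here are hypothetical, for illustration only.
    """
    rng = random.Random(seed)
    # Step 1: filtering -- drop metal-containing and overly complex molecules.
    kept = [m for m in molecules
            if not m["has_metal"]
            and m["n_rings"] <= 4
            and m["n_func_groups"] <= 5]
    # Step 2: stratify by molecular-weight bucket and functional-group count,
    # mirroring the "weights, complexity, functional groups" sampling axes.
    strata = defaultdict(list)
    for m in kept:
        strata[(m["mol_weight"] // 100, m["n_func_groups"])].append(m)
    # Step 3: draw up to `per_stratum` molecules from each stratum so every
    # region of chemical space is represented in the final sample.
    sample = []
    for key in sorted(strata):
        pool = strata[key]
        sample.extend(rng.sample(pool, min(per_stratum, len(pool))))
    return sample
```

Capping each stratum rather than sampling uniformly is what prevents heavily populated regions (e.g., small, simple molecules) from dominating the benchmark.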
3.3 Quality Review & Refinement

To ensure the high quality of our benchmark and its large-scale dataset, we performed iterative evaluation and optimization of the molecules, results, and distilled chain-of-thought reasoning processes from DeepSeek-R1-0324 [13] in ChemCoTBench. Our hybrid assessment approach combines automated LLM-based evaluation for scalability with manual review by expert chemists to guarantee scientific rigor, enabling comprehensive dataset refinement while maintaining efficiency.

LLM-based CoT Evaluation. To improve the quality of the CoT annotations in DeepSeek's process, we focused on two key elements: (1) Task-Specific Prompt Design: we found that providing detailed task descriptions and prior knowledge within prompts significantly enhances the model's performance on chemical tasks. (2) Incorporation of IUPAC Names: we found that including IUPAC names helps LLMs better understand complex molecular structures, as these names offer precise details about functional groups. Leveraging these insights, we iteratively refined our prompt designs. We then employed GPT-4o as an LLM verifier to ensure each CoT annotation was consistent with its corresponding prompt template and the provided IUPAC names.

Chemical Expert Review & Refinement. As a rigorous benchmark evaluation, we engaged 13 chemistry PhD candidates from top universities to assess the accuracy of chemical entities, including functional groups, molecular names, reaction types, and operation names, in ChemCoTBench's CoT annotations. As shown in Fig. 2 (b), the evaluation revealed near-perfect accuracy for molecule understanding and editing tasks, while more challenging tasks like molecule optimization and reaction prediction maintained over 90% accuracy (Fig. 2 (c)). Furthermore, we corrected the identified errors to further enhance ChemCoTBench's quality.

Table 1: Experiments on the foundational tasks, including molecule understanding, molecule editing, and their subtasks (Func-Group: FG, Ring; Scaffold: Murcko, Ring-sys; SMILES: Eq.; Molecule-Edit: Add, Delete, Sub). For the functional-group counting (FG) and ring counting (Ring) subtasks, we apply mean absolute error (MAE; lower is better). Tanimoto molecular similarity is applied to the Murcko scaffold extraction task (Murcko). Accuracy (%) is applied to all other subtasks.

Models | FG↓ | Ring↓ | Murcko↑ | Ring-sys↑ | Eq.↑ | Add | Delete | Sub
W/ Thinking
Gemini-2.5-pro-think | 0.11 | 0.60 | 0.51 | 87.5 | 82 | 100 | 85 | 81.7
Claude3.7-sonnet-think | 0.21 | 1.60 | 0.40 | 80.0 | 84 | 85 | 80 | 83.4
DeepSeek-R1 | 0.27 | 1.55 | 0.34 | 45.0 | 65 | 70 | 70 | 68.3
o3-mini@20250103 | 0.13 | 0.60 | 0.39 | 75.0 | 78 | 65 | 55 | 80.0
o1-mini@20240912 | 0.21 | 1.25 | 0.25 | 61.7 | 66 | 55 | 80 | 58.3
Qwen3-235B-A22B-think | 0.42 | 1.00 | 0.38 | 82.5 | 72 | 40 | 75 | 71.7
Qwen3-32B-think | 0.25 | 0.95 | 0.21 | 75.0 | 68 | 20 | 55 | 20.0
Llama-Nemo-49B-think | 0.80 | 1.90 | 0.09 | 86.8 | 46 | 0 | 80 | 8.0
W/o Thinking
GPT-4o@20241120 | 0.17 | 1.35 | 0.21 | 80.0 | 72 | 80 | 80 | 65.0
DeepSeek-V3 | 0.15 | 1.50 | 0.24 | 76.7 | 77 | 70 | 75 | 76.7
Gemini-2.0-flash | 0.19 | 1.65 | 0.43 | 75.0 | 76 | 65 | 75 | 66.7
Qwen3-235B-A22B | 0.42 | 1.00 | 0.34 | 82.5 | 75 | 40 | 75 | 66.7
Qwen3-32B | 0.26 | 0.95 | 0.22 | 68.3 | 67 | 30 | 55 | 25.0
Qwen2.5-72B-Instruct | 0.26 | 0.60 | 0.24 | 70.0 | 61 | 70 | 80 | 56.7
Qwen2.5-32B-Instruct | 0.36 | 0.65 | 0.12 | 53.3 | 62 | 50 | 50 | 48.3
Llama-3.1-70B-Instruct | 0.52 | 1.80 | 0.12 | 68.3 | 67 | 60 | 80 | 50.0
Llama-Nemo-49B | 0.72 | 1.77 | 0.11 | 65.0 | 54 | 30 | 55 | 30.5
Gemma-2-27b-it | 0.19 | 1.65 | 0.43 | 66.7 | 76 | 75 | 70 | 35.0
Phi-4-14B | 0.28 | 1.65 | 0.15 | 70.0 | 65 | 60 | 80 | 38.3
OLMo2-32B-Instruct | 0.19 | 1.05 | 0.07 | 63.3 | 50 | 15 | 30 | 11.7
BioMedGPT-7B | 1.6 | 2.43 | 0.18 | 53.3 | 39 | 10 | 12 | 10
BioMistral-7B | 1.0 | 1.85 | 0.04 | 32.5 | 50 | 0 | 10 | 0

4 Experiments

4.1 Evaluation Metrics

For understanding tasks, functional group (FG) and ring recognition are treated as counting problems, with mean absolute error (MAE) used to measure precision. Scaffold-level understanding includes extracting Murcko scaffolds, evaluated by Tanimoto similarity, and identifying whether complex ring systems are present, evaluated by accuracy.
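The counting and similarity metrics just defined can be sketched in a few lines. `count_rings` is a deliberately simplified stand-in for the ring-counting ground truth (a real pipeline would parse molecules with RDKit), and fingerprints are represented here as sets of on-bit indices.

```python
import re

def count_rings(smiles):
    """Count rings via paired SMILES ring-closure labels: each matching
    pair of digits (or %nn labels) closes one ring, e.g. the two '1's in
    c1ccccc1. Bracket atoms are masked first so isotope digits such as
    [13C] are not mistaken for closures. Simplified sketch only."""
    body = re.sub(r"\[[^\]]*\]", "X", smiles)   # drop bracket-atom contents
    labels = re.findall(r"%\d{2}|\d", body)     # %nn or single-digit labels
    return len(labels) // 2                     # each label appears twice

def mae(pred, true):
    """Mean absolute error for the FG/ring counting subtasks."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity |A & B| / |A | B| between two fingerprints
    given as sets of on-bit indices (as used for scaffold scoring)."""
    union = fp_a | fp_b
    return len(fp_a & fp_b) / len(union) if union else 1.0
```

For example, `count_rings` reports one ring for p-xylene (`CC1=CC=C(C)C=C1`) and two for naphthalene (`c1ccc2ccccc2c1`), and a model predicting ring counts [1, 3, 5] against ground truth [1, 2, 3] scores an MAE of 1.0.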
The SMILES equivalence task is formulated as a binary decision problem, determining whether the target and source SMILES represent the same molecule, and is also evaluated using accuracy. For molecule editing, we use Pass@1 to assess whether the edited molecule meets the instructions. Mechanism route selection is framed as a multiple-choice task and evaluated by accuracy. Other reaction tasks are modeled as SMILES generation problems, evaluated by both Top-1 accuracy and fingerprint-based similarity (FTS) using Morgan [46], MACCS [7], and RDKit [32] fingerprints to reflect correctness and structural similarity.

4.2 Evaluated LLMs

Our evaluation includes three model categories: (1) reasoning LLMs with explicit step-by-step reasoning, including DeepSeek-R1 [13], o1-mini [58], o3-mini [59], Gemini-2.5-pro [55], Claude-3.7-Sonnet-thinking [53], Qwen-3-thinking [60], and Llama-Nemotron-thinking [2]; (2) general-purpose non-reasoning LLMs without specialized reasoning mechanisms, including GPT-4o [24], Qwen-2.5/3 [66], Llama-3.3 [12], Gemma-2 [57], Phi-4 [1], and OLMo2 [44]; (3) biomolecular LLMs BioMedGPT [39], BioMistral [31], and Text+Chem T5 [5]. This comprehensive comparison evaluates whether reasoning-specific capabilities provide advantages over domain-specific models in challenging chemical reasoning tasks. Details of the evaluation implementation and prompt design are available in Appendix C.2.

Table 2: Baseline performance on molecule optimization. The optimized targets are categorized into physicochemical properties (QED, LogP, solubility) and protein-activity-related properties (JNK3, DRD2, GSK-3β), with the latter posing greater challenges to the model's chemical knowledge and reasoning capabilities. ∆ is the mean property improvement; a negative ∆ indicates that most optimizations degrade the property. SR% is the success rate of optimizations that increase the property.

Models | LogP ∆ / SR% | Solubility ∆ / SR% | QED ∆ / SR% | DRD2 ∆ / SR% | JNK3 ∆ / SR% | GSK-3β ∆ / SR%
W/ Thinking
Gemini-2.5-pro-think | -0.28 / 81 | 1.91 / 92 | 0.21 / 84 | 0.35 / 74 | -0.04 / 35 | 0.04 / 68
Claude3.7-sonnet-think | 0.41 / 81 | 0.59 / 77 | 0.09 / 73 | 0.18 / 66 | -0.01 / 49 | 0.01 / 57
DeepSeek-R1 | 0.36 / 74 | 1.48 / 97 | 0.05 / 72 | 0.10 / 62 | -0.06 / 29 | -0.02 / 41
o3-mini@20250103 | 0.29 / 68 | 1.15 / 85 | 0.17 / 86 | 0.18 / 69 | -0.08 / 23 | -0.03 / 45
o1-mini@20240912 | -0.42 / 52 | 1.78 / 95 | 0.07 / 70 | -0.03 / 37 | -0.10 / 15 | -0.08 / 31
Qwen3-235B-A22B-think | -0.01 / 41 | 0.27 / 42 | 0.01 / 24 | 0.03 / 31 | -0.01 / 23 | 0.01 / 31
Qwen3-32B-think | 0.0 / 2 | 0.11 / 23 | 0.02 / 14 | 0.0 / 6 | -0.02 / 6 | -0.02 / 5
Llama-Nemo-49B-think | -0.64 / 24 | 0.20 / 24 | -0.16 / 41 | -0.05 / 30 | -0.15 / 7 | -0.12 / 11
W/o Thinking
GPT-4o@20241120 | -0.20 / 42 | 0.82 / 80 | 0.05 / 70 | 0.05 / 48 | -0.05 / 30 | -0.04 / 39
DeepSeek-V3 | 0.08 / 34 | 0.47 / 93 | 0.08 / 46 | 0.02 / 28 | 0.0 / 18 | 0.0 / 29
Gemini-2.0-flash | 0.35 / 75 | 0.19 / 54 | 0.10 / 79 | 0.15 / 63 | 0.03 / 34 | 0.0 / 38
Qwen3-235B-A22B | 0.02 / 41 | 0.51 / 45 | 0.01 / 26 | 0.01 / 31 | -0.01 / 23 | 0.0 / 34
Qwen3-32B | -0.03 / 2 | 0.17 / 23 | 0.02 / 14 | -0.01 / 6 | -0.02 / 6 | -0.02 / 5
Qwen2.5-72B-Instruct | -0.12 / 42 | 0.28 / 60 | 0.03 / 57 | 0.04 / 40 | -0.02 / 26 | -0.01 / 40
Qwen2.5-32B-Instruct | 0.03 / 47 | 0.42 / 66 | -0.01 / 54 | 0.04 / 32 | -0.04 / 19 | -0.02 / 31
Llama-3.3-70B-Instruct | -0.16 / 42 | 0.61 / 80 | 0.07 / 61 | -0.02 / 31 | -0.04 / 30 | -0.02 / 40
Llama-Nemo-Super-49B | -0.14 / 27 | 0.31 / 41 | 0.02 / 50 | -0.02 / 18 | -0.04 / 16 | -0.03 / 27
Gemma-2-27b-it | -0.03 / 34 | 0.34 / 66 | 0.05 / 56 | -0.03 / 15 | 0.0 / 16 | -0.01 / 17
Phi-4-14B | -0.10 / 45 | 0.28 / 54 | 0.11 / 74 | -0.04 / 18 | -0.05 / 14 | -0.04 / 22
OLMo2-32B-Instruct | -2.03 / 22 | 1.03 / 46 | -0.13 / 40 | -0.11 / 7 | -0.12 / 8 | -0.11 / 12
BioMedGPT-7B | -0.36 / 17 | 0.25 / 63 | -0.29 / 7 | -0.09 / 5 | -0.11 / 6 | -0.08 / 1
BioMistral-7B | 0.01 / 1 | 0.24 / 6 | 0.0 / 0 | 0.0 / 1 | -0.01 / 1 | -0.01 / 0
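Table 2's ∆ and SR% follow directly from before/after property values. A minimal sketch of the metric definitions as stated in the caption (the authors' exact implementation may differ, e.g. in how invalid generations are handled):

```python
def delta_and_sr(pairs):
    """Compute mean property improvement (delta) and the percentage of
    optimizations that strictly increase the property (SR%) from a list
    of (property_before, property_after) pairs."""
    diffs = [after - before for before, after in pairs]
    delta = sum(diffs) / len(diffs)
    sr = 100.0 * sum(d > 0 for d in diffs) / len(diffs)
    return delta, sr
```

Note that ∆ and SR% can disagree: a model whose edits usually help slightly but occasionally destroy a property (as with Gemini-2.5-pro's negative LogP ∆ at 81% SR) shows a high SR% alongside a negative mean improvement.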
4.3 LLMs' Performance on Solving ChemCoTBench

We evaluated reasoning LLMs, their non-reasoning counterparts, and task-specific models [5, 31, 39] on foundational (molecule understanding and editing, Table 1) and application (molecule optimization, Table 2; reaction prediction, Table 3) tasks within ChemCoTBench. Key findings include:

Hierarchical Skill Transfer. Strong performance on the foundational molecular understanding and editing tasks translates directly into success on the complex application tasks. This validates ChemCoTBench's design, in which fundamental chemical knowledge underpins advanced problem-solving. For example, Claude-3.7-sonnet and Gemini-2.5-pro, the top performers on foundational tasks (Table 1), also lead in molecule optimization and reaction prediction.

Efficacy of Advanced Reasoning in Commercial LLMs. Commercial LLMs equipped with sophisticated reasoning mechanisms (e.g., DeepSeek-R1, o3-mini) significantly outperform their non-reasoning counterparts on ChemCoTBench's challenging applied tasks. In molecule optimization (Table 2), DeepSeek-R1 shows a >30% improvement over DeepSeek-V3, and o3-mini gains >20% over GPT-4o. Similar trends are observed for reaction prediction (Table 3). This suggests
that RL-honed "slow thinking" capabilities [40, 48, 62], when combined with sufficient domain knowledge, enable superior abstraction and problem-solving beyond mere knowledge retrieval.

Table 3: The chemical reaction task contains forward prediction (Fwd major: major-product prediction; Fwd by: by-product prediction), retrosynthesis prediction (Retro), reaction condition prediction (Condition), and reaction mechanism prediction (NEPP: next elementary-step product prediction; MechSel: reaction mechanism selection). FTS: molecular fingerprint similarity with the reference.

Models | Fwd major Top-1 / FTS | Fwd by Top-1 / FTS | Retro Top-1 / FTS | Condition Top-1 / FTS | NEPP Top-1 / FTS | MechSel Acc.
W/ Thinking
Gemini-2.5-pro-think | 0.72 / 0.89 | 0.20 / 0.51 | 0.20 / 0.45 | 0.20 / 0.33 | 0.58 / 0.53 | 0.62
Claude3.7-sonnet-think | 0.73 / 0.87 | 0.25 / 0.31 | 0.12 / 0.27 | 0.14 / 0.22 | 0.24 / 0.79 | 0.49
DeepSeek-R1 | 0.48 / 0.71 | 0.21 / 0.45 | 0.07 / 0.41 | 0.23 / 0.30 | 0.15 / 0.55 | 0.46
o3-mini@20250103 | 0.52 / 0.71 | 0.20 / 0.27 | 0.11 / 0.39 | 0.19 / 0.19 | 0.18 / 0.58 | 0.49
o1-mini@20240912 | 0.26 / 0.31 | 0.11 / 0.17 | 0.02 / 0.15 | 0.08 / 0.22 | 0.09 / 0.33 | 0.44
Qwen3-235B-A22B-think | 0.03 / 0.54 | 0.0 / 0.07 | 0.01 / 0.42 | 0.20 / 0.27 | 0.09 / 0.63 | 0.41
Qwen3-32B-think | 0.11 / 0.33 | 0.09 / 0.18 | 0.02 / 0.24 | 0.14 / 0.20 | 0.08 / 0.67 | 0.46
Llama-Nemo-49B-think | 0.09 / 0.18 | 0.04 / 0.18 | 0.0 / 0.05 | 0.18 / 0.19 | 0.04 / 0.21 | 0.47
W/o Thinking
GPT-4o@20241120 | 0.28 / 0.58 | 0.04 / 0.20 | 0.03 / 0.43 | 0.0 / 0.08 | 0.12 / 0.71 | 0.43
DeepSeek-V3 | 0.36 / 0.62 | 0.04 / 0.30 | 0.03 / 0.44 | 0.08 / 0.16 | 0.20 / 0.70 | 0.45
Gemini-2.0-flash | 0.19 / 0.56 | 0.01 / 0.07 | 0.05 / 0.41 | 0.07 / 0.08 | 0.13 / 0.68 | 0.53
Qwen3-235B-A22B | 0.04 / 0.57 | 0.0 / 0.06 | 0.0 / 0.30 | 0.07 / 0.14 | 0.07 / 0.59 | 0.40
Qwen3-32B | 0.06 / 0.57 | 0.0 / 0.13 | 0.0 / 0.43 | 0.01 / 0.10 | 0.08 / 0.67 | 0.46
Qwen2.5-72B-Instruct | 0.04 / 0.49 | 0.0 / 0.13 | 0.01 / 0.35 | 0.01 / 0.07 | 0.06 / 0.60 | 0.46
Qwen2.5-32B-Instruct | 0.01 / 0.43 | 0.0 / 0.12 | 0.0 / 0.29 | 0.02 / 0.10 | 0.05 / 0.50 | 0.45
Llama-3.3-70B-Instruct | 0.02 / 0.35 | 0.0 / 0.08 | 0.0 / 0.34 | 0.06 / 0.13 | 0.06 / 0.41 | 0.39
Llama-Nemo-49B | 0.04 / 0.40 | 0.0 / 0.08 | 0.0 / 0.30 | 0.03 / 0.05 | 0.05 / 0.41 | 0.46
Gemma-2-27b-it | 0.01 / 0.55 | 0.0 / 0.04 | 0.0 / 0.48 | 0.03 / 0.10 | 0.04 / 0.53 | 0.43
Phi-4-14B | 0.01 / 0.27 | 0.03 / 0.10 | 0.0 / 0.39 | 0.0 / 0.03 | 0.05 / 0.57 | 0.39
OLMo2-32B-Instruct | 0.0 / 0.10 | 0.0 / 0.07 | 0.0 / 0.10 | 0.0 / 0.03 | 0.01 / 0.13 | 0.32
Text+Chem T5 | 0.44 / 0.74 | 0.0 / 0.07 | 0.06 / 0.24 | 0.0 / 0.09 | 0.0 / 0.0 | 0.10

Unrealized Promise of Hybrid Thinking in Open-Source Models for Chemistry without Domain-Specific Data. Current open-source models featuring hybrid thinking modes, such as Llama-3.3-Nemotron [2] and Qwen3 [72], achieve substantial and often efficient performance in general domains like code and mathematics. However, their advanced reasoning capabilities, intended to be general, do not transfer effectively to specialized scientific fields like chemistry. We attribute this shortfall to a critical lack of domain-specific reasoning training data. Our empirical results are stark (Tables 1-3): enabling the reasoning modes in these models yields no significant performance improvement on chemical tasks compared to their non-reasoning counterparts. This finding strongly suggests that general reasoning architectures require specialized data to adapt to new domains.

4.4 Evaluating Strategies to Enhance Chemical Reasoning in Open-Source LLMs

Our preceding analyses underscored the critical role of advanced reasoning capabilities (or "slow thinking") for tackling complex chemical tasks. This motivates an investigation into efficient methods for bolstering these capabilities within open-source LLMs.

Challenges in Distilling Chemical Reasoning.
Distilling CoT capabilities from advanced LLMs (e.g., using DeepSeek-R1-generated samples [13, 71]) is a common strategy for enhancing reasoning in smaller models. However, this approach proves significantly limited for specialized chemical reasoning. Our experiments (Fig. 4) show that Qwen2.5-Instruct models distilled for CoT exhibit little to no improvement on ChemCoTBench's chemical subtasks compared to their non-distilled counterparts; indeed, the smaller base models (1.5B-32B) often perform comparably or better. While effective for general domains like code and math (Fig. 4), this distillation strategy falters in chemistry, likely due to the insufficient volume or specificity of chemical CoT samples in the distillation process, hindering the development of robust step-by-step chemical reasoning. Moreover, smaller distilled models (<7B) frequently produce lengthy, repetitive, and irrelevant (hallucinatory) thought processes. These findings suggest that direct CoT distillation, without substantial domain-specific adaptation, is an ineffective standalone method for improving chemical reasoning in open-source models.

Figure 4: The top two rows compare the reasoning performance of the Qwen-2.5-Instruct series (1.5B to 32B) against its DeepSeek-R1-distilled versions across molecule understanding, SMILES equivalence, molecule editing, molecule optimization (physicochemical- and target-level), reaction mechanism route prediction, and Math-500/LiveCodeBench. The bottom row illustrates performance improvements in Qwen-2.5-Instruct when enhanced with the CoT template and the detailed CoT process.
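The three prompting conditions compared in Fig. 4 (W/O CoT, CoT template, and detailed CoT process) can be sketched as prompt assembly; the wording below is a hypothetical illustration, not the paper's actual templates.

```python
def build_prompt(question, mode="plain", template=None, worked_example=None):
    """Assemble a chemistry prompt in one of the three conditions from
    Fig. 4: "plain" (W/O CoT), "template" (coarse strategic guidance only),
    or "process" (guidance plus a detailed step-wise worked example, e.g.
    drawn from ChemCoTDataset). Prompt wording is illustrative only."""
    parts = [question]
    if mode in ("template", "process") and template:
        parts.append("Follow these reasoning steps:\n" + template)
    if mode == "process" and worked_example:
        parts.append("Worked example:\n" + worked_example)
    parts.append("Give step-wise reasoning, then the final solution.")
    return "\n\n".join(parts)
```

The "template" mode conveys only the modular operations to apply (identify fragment, parse SMILES, match occurrences), while "process" additionally shows a fully worked reasoning chain, which is the variant reported to yield stable gains across model scales.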
Domain-Specific Chemical CoT Data Augmentation Boosts Reasoning. Given the limitations of direct distillation, we explored enhancing chemical reasoning using our high-quality, domain-specific ChemCoTDataset via prompting. This dataset was meticulously curated to minimize hallucinations and align with expert thought processes, which we posited would be vital for chemical reasoning tasks. We evaluated two CoT prompting strategies: one providing only coarse strategic guidance (CoT templates), and another augmented with detailed step-by-step reasoning processes from our dataset. The results in the bottom row of Fig. 4 consistently demonstrate that our large-scale chemical CoT dataset significantly enhances the chemical reasoning capabilities of Qwen-2.5 models across scales (1.5B to 32B) when used in this way. Augmentation with detailed CoT processes yielded stable and substantial performance gains across all evaluated tasks. Notably, while CoT templates stably benefited small models (<14B), larger LLMs develop rigid code-and-math reasoning templates that constrain their ability to leverage the chemical patterns effectively. This highlights an interplay between model scale and the granularity of CoT guidance required for optimal benefit.

5 Conclusion and Discussion

This paper introduces ChemCoTBench, a new benchmark for evaluating the complex chemical problem-solving ability of LLMs. Compared to existing scientific benchmarks that focus on simple knowledge retrieval, ChemCoTBench establishes a step-by-step, application-oriented, and high-quality benchmark by gathering samples from both foundational and application-level chemical tasks, including molecule understanding, editing, optimization, and reaction prediction. Furthermore, a 14k-sample chemical CoT dataset is provided for enhancing the chemical reasoning ability of LLMs.
Extensive experiments across 22 chemical tasks in ChemCoTBench demonstrate that current open-source and distillation-based reasoning LLMs still have significant room for improvement in
complex chemical reasoning, while also validating the boosting effect of our large chemical CoT dataset on chemical reasoning capabilities. ChemCoTBench bridges the gap between LLM reasoning capabilities and real-world chemical problem-solving needs, offering researchers a standardized evaluation platform for complex chemical reasoning. Future work could design policy optimization and distillation strategies to further enhance the chemical reasoning capability of LLMs; chemical-aware reward mechanisms also warrant exploration. We also plan to extend ChemCoTBench and its chemical CoT dataset to broader biochemical domains and larger scale.

References

[1] Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J. Hewett, Mojan Javaheripi, Piero Kauffmann, James R. Lee, Yin Tat Lee, Yuanzhi Li, Weishung Liu, Caio C. T. Mendes, Anh Nguyen, Eric Price, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Xin Wang, Rachel Ward, Yue Wu, Dingli Yu, Cyril Zhang, and Yi Zhang. Phi-4 technical report, 2024.

[2] Akhiad Bercovich, Itay Levy, Izik Golan, Mohammad Dabbah, Ran El-Yaniv, Omri Puny, Ido Galil, Zach Moshe, Tomer Ronen, Najeeb Nabwani, et al. Llama-Nemotron: Efficient reasoning models. arXiv preprint arXiv:2505.00949, 2025.

[3] Andres M Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. ChemCrow: Augmenting large-language models with chemistry tools. arXiv preprint arXiv:2304.05376, 2023.

[4] Andres M Bran, Theo A Neukomm, Daniel P Armstrong, Zlatko Jončev, and Philippe Schwaller. Chemical reasoning in LLMs unlocks steerable synthesis planning and reaction mechanism elucidation. arXiv preprint arXiv:2503.08537, 2025.

[5] Dimitrios Christofidellis, Giorgio Giannone, Jannis Born, Ole Winther, Teodoro Laino, and Matteo Manica. Unifying molecular and textual representations via multi-task language modelling.
In International Conference on Machine Learning, pages 6140–6157. PMLR, 2023.

[6] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

[7] Joseph L. Durant, Burton A. Leland, Douglas R. Henry, and James G. Nourse. Reoptimization of MDL keys for use in drug discovery. Journal of Chemical Information and Computer Sciences, 42(6):1273–1280, 2002.

[8] Elsevier. Reaxys, 2024.

[9] Chaoran Feng, Wangbo Yu, Xinhua Cheng, Zhenyu Tang, Junwu Zhang, Li Yuan, and Yonghong Tian. AE-NeRF: Augmenting event-based neural radiance fields for non-ideal conditions and larger scene. arXiv preprint arXiv:2501.02807, 2025.

[10] Hanyu Gao, Thomas J. Struble, Connor W. Coley, Yuran Wang, William H. Green, and Klavs F. Jensen. Using machine learning to predict suitable conditions for organic reactions. ACS Central Science, 4:1465–1476, 2018.

[11] Anna Gaulton, Anne Hersey, Michał Nowotka, A Patricia Bento, Jon Chambers, David Mendez, Prudence Mutowo, Francis Atkinson, Louisa J Bellis, Elena Cibrián-Uhalte, et al. The ChEMBL database in 2017. Nucleic Acids Research, 45(D1):D945–D954, 2017.

[12] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

[13] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via
reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[14] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y. Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. DeepSeek-Coder: When the large language model meets programming, the rise of code intelligence, 2024.

[15] Kehan Guo, Bozhao Nan, Yujun Zhou, Taicheng Guo, Zhichun Guo, Mihir Surve, Zhenwen Liang, Nitesh Chawla, Olaf Wiest, and Xiangliang Zhang. Can LLMs solve molecule puzzles? A multimodal benchmark for molecular structure elucidation. Advances in Neural Information Processing Systems, 37:134721–134746, 2024.

[16] Taicheng Guo, Bozhao Nan, Zhenwen Liang, Zhichun Guo, Nitesh Chawla, Olaf Wiest, Xiangliang Zhang, et al. What can large language models do in chemistry? A comprehensive benchmark on eight tasks. Advances in Neural Information Processing Systems, 36:59662–59688, 2023.

[17] Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.

[18] Jiazhen He, Huifang You, Emil Sandström, Eva Nittinger, Esben Jannik Bjerrum, Christian Tyrchan, Werngard Czechtizky, and Ola Engkvist. Molecular optimization by capturing chemist's intuition using deep neural networks. Journal of Cheminformatics, 13:1–17, 2021.

[19] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021.

[20] Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Zhe Xu, Yao Hu, and Shaohui Lin. Vision-R1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025.
[21] Zan Huang, Hsinchun Chen, Zhi-Kai Chen, and Mihail C Roco. International nanotechnology development in 2003: Country, institution, and technology field analysis based on USPTO patent database. Journal of Nanoparticle Research, 6:325–354, 2004.

[22] Zhen Huang, Zengzhi Wang, Shijie Xia, Xuefeng Li, Haoyang Zou, Ruijie Xu, Run-Ze Fan, Lyumanshan Ye, Ethan Chern, Yixin Ye, et al. OlympicArena: Benchmarking multi-discipline cognitive reasoning for superintelligent AI. Advances in Neural Information Processing Systems, 37:19209–19253, 2024.

[23] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, et al. Qwen2.5-Coder technical report. arXiv preprint arXiv:2409.12186, 2024.

[24] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.

[25] John J Irwin, Teague Sterling, Michael M Mysinger, Erin S Bolstad, and Ryan G Coleman. ZINC: A free tool to discover chemistry for biology. Journal of Chemical Information and Modeling, 52(7):1757–1768, 2012.

[26] Yunhui Jang, Jaehyung Kim, and Sungsoo Ahn. Chain-of-thoughts for molecular understanding, 2024.

[27] Joonyoung F Joung, Mun Hong Fong, Nicholas Casetti, Jordan P Liles, Ne S Dassanayake, and Connor W Coley. Electron flow matching for generative reaction mechanism prediction obeying conservation laws. arXiv preprint arXiv:2502.12979, 2025.

[28] Joonyoung F Joung, Mun Hong Fong, Jihye Roh,
|
https://arxiv.org/abs/2505.21318v1
|
Zhengkai Tu, John Bradshaw, and Connor W Coley. Reproducing reaction mechanisms with machine-learning models trained on a large-scale mechanistic dataset. Angewandte Chemie International Edition , 63(43):e202411296, 2024. [29] Daniel Kahneman. Thinking, fast and slow . macmillan, 2011. [30] Sunghwan Kim, Paul A Thiessen, Evan E Bolton, Jie Chen, Gang Fu, Asta Gindulyte, Lianyi Han, Jane He, Siqian He, Benjamin A Shoemaker, et al. Pubchem substance and compound databases. Nucleic acids research , 44(D1):D1202–D1213, 2016. 11 [31] Yanis Labrak, Adrien Bazoge, Emmanuel Morin, Pierre-antoine Gourraud, Mickaël Rouvier, and Richard Dufour. BioMistral: A collection of open-source pretrained large language models for medical domains. In 62th Annual Meeting of the Association for Computational Linguistics (ACL’24) , 2024. [32] Greg Landrum, Paolo Tosco, Brian Kelley, Ricardo Rodriguez, David Cosgrove, Ric- cardo Vianello andSriniker, Gedeck, Gareth Jones, Nadine Schneider, Eisuke Kawashima, Dan Nealschneider, Andrew Dalke, Matt Swain, Brian Cole, Samo Turk, Aleksandr Savelev, Alain Vaucher, Maciej Wójcikowski, Ichiru Take, Vincent F. Scalfani, Rachel Walker, Kazuya Ujihara, Daniel Probst, Guillaume Godin, Axel Pahl, Tadhurst-cdd, Juuso Lehtivarjo, Francois Berenger, and Jason D Biggs. RDKit: Open-source cheminformatics and machine learning, May 2024. [33] Hao Li, Da Long, Li Yuan, Yu Wang, Yonghong Tian, Xinchang Wang, and Fanyang Mo. Decoupled peak property learning for efficient and interpretable electronic circular dichroism spectrum prediction. Nature Computational Science , pages 1–11, 2025. [34] Jiatong Li, Junxian Li, Yunqing Liu, Dongzhan Zhou, and Qing Li. Tomg-bench: Evaluating llms on text-based open molecule generation. arXiv preprint arXiv:2412.14642 , 2024. [35] Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785 , 2025. 
[36] Daniel Mark Lowe. Extraction of chemical structures and reactions from the literature. PhD thesis, 2012.

[37] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022.

[38] Xingyu Lu, He Cao, Zijing Liu, Shengyuan Bai, Leqing Chen, Yuan Yao, Hai-Tao Zheng, and Yu Li. MoleculeQA: A dataset to evaluate factual accuracy in molecular comprehension. arXiv preprint arXiv:2403.08192, 2024.

[39] Yizhen Luo, Jiahuan Zhang, Siqi Fan, Kai Yang, Yushuai Wu, Mu Qiao, and Zaiqing Nie. BioMedGPT: Open multimodal generative pre-trained transformer for biomedicine. arXiv preprint arXiv:2308.09442, 2023.

[40] Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. ReFT: Reasoning with reinforced fine-tuning. arXiv preprint arXiv:2401.08967, 3, 2024.

[41] Liuzhenghao Lv, Hao Li, Yu Wang, Zhiyuan Yan, Zijun Chen, Zongying Lin, Li Yuan, and Yonghong Tian. Navigating chemical-linguistic sharing space with heterogeneous molecular encoding. arXiv preprint arXiv:2412.20888, 2024.

[42] J. Mayfield, D. Lowe, and R. Sayle. Pistachio: Search and faceting of large reaction databases. ACS Fall 2017, 2017.

[43] Adrian Mirza, Nawaf Alampara, Sreekanth Kunchapu, Martiño Ríos-García, Benedict Emoekabu, Aswanth Krishnan, Tanya Gupta, Mara Schilling-Wilhelmi, Macjonathan Okereke, Anagha Aneesh, et al. Are large language models superhuman chemists? arXiv preprint arXiv:2404.01475, 2024.

[44] Team OLMo. 2 OLMo 2
Furious, 2025.

[45] Siru Ouyang, Zhuosheng Zhang, Bing Yan, Xuan Liu, Yejin Choi, Jiawei Han, and Lianhui Qin. Structured chemistry reasoning with large language models. arXiv preprint arXiv:2311.09656, 2023.

[46] Nadine Schneider, Roger A. Sayle, and Gregory A. Landrum. Get your atoms in order: An open-source implementation of a novel and robust molecular canonicalization algorithm. Journal of Chemical Information and Modeling, 55(10):2111–2120, 2015.

[47] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.

[48] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.

[49] Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, et al. VLM-R1: A stable and generalizable R1-style large vision-language model. arXiv preprint arXiv:2504.07615, 2025.

[50] Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai Yu. SciEval: A multi-level large language model evaluation benchmark for scientific research. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19053–19061, 2024.

[51] Xiangru Tang, Tianyu Hu, Muyang Ye, Yanjun Shao, Xunjian Yin, Siru Ouyang, Wangchunshu Zhou, Pan Lu, Zhuosheng Zhang, Yilun Zhao, et al. ChemAgent: Self-updating library in large language models improves chemical reasoning. arXiv preprint arXiv:2501.06590, 2025.

[52] Zhenyu Tang, Junwu Zhang, Xinhua Cheng, Wangbo Yu, Chaoran Feng, Yatian Pang, Bin Lin, and Li Yuan. Cycle3D: High-quality and consistent image-to-3D generation via generation-reconstruction cycle.
arXiv preprint arXiv:2407.19548, 2024.

[53] Anthropic Team. Claude-3.7-Sonnet: Hybrid reasoning model.

[54] CodeGemma Team, Heri Zhao, Jeffrey Hui, Joshua Howland, Nam Nguyen, Siqi Zuo, Andrea Hu, Christopher A. Choquette-Choo, Jingyue Shen, Joe Kelley, Kshitij Bansal, Luke Vilnis, Mateo Wirth, Paul Michel, Peter Choy, Pratik Joshi, Ravin Kumar, Sarmad Hashmi, Shubham Agrawal, Zhitao Gong, Jane Fine, Tris Warkentin, Ale Jakse Hartman, Bin Ni, Kathy Korevec, Kelly Schaefer, and Scott Huffman. CodeGemma: Open code models based on Gemma, 2024.

[55] DeepMind Team. Gemini 2.5 Pro preview: Even better coding performance.

[56] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.

[57] Gemma Team. Gemma 2: Improving open language models at a practical size, 2024.

[58] OpenAI Team. o1-mini: Advancing cost-efficient reasoning.

[59] OpenAI Team. OpenAI o3-mini.

[60] Qwen Team. Qwen3: Think deeper, act faster.

[61] Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry without human demonstrations. Nature, 625(7995):476–482, 2024.

[62] Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations. arXiv preprint arXiv:2312.08935, 2023.

[63] Xiaoxuan Wang,
Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. SciBench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635, 2023.

[64] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

[65] Guowei Xu, Peng Jin, Li Hao, Yibing Song, Lichao Sun, and Li Yuan. LLaVA-o1: Let vision language models reason step-by-step. arXiv preprint arXiv:2411.10440, 2024.

[66] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

[67] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024.

[68] Wangbo Yu, Chaoran Feng, Jiye Tang, Xu Jia, Li Yuan, and Yonghong Tian. EvaGaussians: Event stream assisted Gaussian splatting from blurry images. arXiv preprint arXiv:2405.20224, 2024.

[69] Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, et al. ChemLLM: A chemical large language model. arXiv preprint arXiv:2402.06852, 2024.

[70] Junwu Zhang, Zhenyu Tang, Yatian Pang, Xinhua Cheng, Peng Jin, Yida Wei, Xing Zhou, Munan Ning, and Li Yuan. Repaint123: Fast and high-quality one image to 3D generation with progressive controllable repainting. In European Conference on Computer Vision, pages 303–320. Springer, 2025.

[71] Han Zhao, Haotian Wang, Yiping Peng, Sitong Zhao, Xiaoyu Tian, Shuaiting Chen, Yunjie Ji, and Xiangang Li.
1.4 million open-source distilled reasoning dataset to empower large language model training. arXiv preprint arXiv:2503.19633, 2025.

[72] Xingyu Zheng, Yuye Li, Haoran Chu, Yue Feng, Xudong Ma, Jie Luo, Jinyang Guo, Haotong Qin, Michele Magno, and Xianglong Liu. An empirical study of Qwen3 quantization. arXiv preprint arXiv:2505.02214, 2025.

Appendix

A Full Related Works
  A.1 LLM Chain-of-Thoughts
B Data Construction Details
  B.1 Data Collection
  B.2 Dataset Composition and Filtering Strategies
  B.3 Rationale for Task Construction
C Experimental Details
  C.1 Hardware Requirements
  C.2 Evaluation Metrics
  C.3 Count Distribution Analysis
D Case Study for Tasks in ChemCoTBench
  D.1 Case Study for Molecule Understanding
  D.2 Case Study for Molecule Editing
  D.3 Case Study for Molecule Optimization
E Task Example

A Full Related Works

A.1 LLM Chain-of-Thoughts

The evolution of large language models (LLMs) has transitioned from basic text generation to sophisticated reasoning systems, exemplified by the Chain-of-Thought methodology [64], which facilitates systematic problem decomposition through deliberate cognitive paradigms. These advanced reasoning architectures demonstrate exceptional proficiency in domains demanding structured analytical capabilities, particularly in mathematical computation and programming tasks. Benchmark evaluations on MATH [19] and GSM8K [6] reveal significant achievements by models including DeepSeek-R1 [13], Gemini [56], and Anthropic Claude.

LLM Reasoning on Multimodal Domain. With the rapid development of the vision-language domain, reasoning on images and videos is increasingly important [65]. Visual-RFT [35] and VLM-R1 [49] establish visual chain-of-thought data construction pipelines and RL-based post-training strategies. Vision-R1 [20] further proposes a cold-start strategy for better multimodal reasoning. In the 3D domain, [9, 52, 68, 70] apply chain-of-thoughts to point clouds and 3D objects to achieve LLM reasoning.

LLM Reasoning on Chemical Domain. Emerging applications in chemical sciences demonstrate LLM capabilities in spectra analysis [33], synthesis planning [4], and computational chemistry [26, 45, 51].
Also, LLMs [41] show outstanding multi-task generalization ability in the molecule and protein domains. However, current research lacks a comprehensive assessment of chemical reasoning capacities encompassing spatial cognition, domain knowledge assimilation, and complex logical inference processes.

B Data Construction Details

In this section, we provide detailed information on our benchmark and dataset construction process, including the data source description, dataset composition, filtering strategies, and the rationale for dataset construction. In Table 4, we also visualize the data distribution of subtasks in ChemCoTBench.

Table 4: The dataset statistics of ChemCoTBench and its large CoT dataset. We visualize the sample numbers for every subtask in ChemCoTBench. The data distribution of molecule understanding & editing, molecule optimization, and reaction prediction is nearly average.

Benchmark samples per subtask:
  Mol-Understanding: Func-Group 120, Scaffold 100, SMILES 100
  Mol-Edit: Add 20, Del 20, Sub 60
  Mol-Optimization: Physico 300, Protein 300
  Reaction: Fwd 200, Retro 100, Cond 90, Mech 275
CoT dataset samples: molecule understanding & editing 6400, molecule optimization 4500, reaction prediction 3000

B.1 Data Collection

The raw molecular structures used for understanding, editing, and optimization are obtained from several published datasets, including PubChem [30], ChEMBL [11], ZINC [25], and Deep-Mol-Opt [18]. Chemical reaction data are separately collected from patent databases, including USPTO [21], Pistachio [42], and Reaxys [8]. For reaction mechanism annotation, we followed the processing pipeline described in [28].

B.2 Dataset Composition and Filtering Strategies

Molecular Samples (25% of Benchmark): Although the ZINC database contains 250,000 molecules, we observed that its molecular weight distribution is relatively concentrated. To ensure diversity, we carefully selected molecules from PubChem, ChEMBL, and ZINC based on molecular weight and structural complexity. This filtering process resulted in a smaller but more representative molecular subset for our benchmark.

Molecular Optimization Pairs (38% of Benchmark): The Deep-Mol-Opt dataset provided 198,559 molecular pairs with property annotations. However, we excluded pairs with minimal property improvement (∆ < 0.3) or those containing complex polycyclic structures that might challenge LLM comprehension. The remaining high-quality pairs were retained for molecular optimization tasks.

Chemical Reaction Samples (19% of Benchmark): Reaction equations (including reactants, products, conditions, and catalysts) were sourced from USPTO, Pistachio, and Reaxys. To avoid redundancy, we balanced the selection across these databases by reaction type and catalyst diversity. For reaction mechanism annotation, we incorporated 275 manually curated examples from [28], which were chosen for their high quality and balanced distribution.
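The ∆-threshold screening of optimization pairs described above can be sketched in a few lines. This is a minimal illustration only: the field names (`delta`, `num_rings`) and the ring-count proxy for "complex polycyclic structures" are hypothetical assumptions, not the actual Deep-Mol-Opt schema.

```python
# Hypothetical sketch of the pair-filtering step; field names are illustrative.
MIN_IMPROVEMENT = 0.3  # pairs with property gain below this are discarded
MAX_RINGS = 4          # crude stand-in for structural-complexity screening

def filter_pairs(pairs):
    """Keep pairs with a meaningful property gain and moderate ring complexity."""
    return [
        p for p in pairs
        if p["delta"] >= MIN_IMPROVEMENT and p["num_rings"] <= MAX_RINGS
    ]

pairs = [
    {"smiles": "CCO", "delta": 0.45, "num_rings": 0},       # kept
    {"smiles": "c1ccccc1", "delta": 0.10, "num_rings": 1},  # gain too small
    {"smiles": "C1CC1", "delta": 0.90, "num_rings": 6},     # too complex
]
print(len(filter_pairs(pairs)))  # 1
```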
B.3 Rationale for Task Construction

Molecular Understanding and Editing Tasks: Molecular understanding and editing tasks are designed as closed-ended problems with deterministic answers. Since these tasks rely on well-defined chemical properties and structures, we directly sampled molecules from PubChem, ChEMBL, and ZINC as the source data. The corresponding ground-truth answers, including molecular properties and SMILES transformations, are programmatically extracted using RDKit, ensuring accuracy and reproducibility.

Molecular Optimization Task Design: Unlike fixed-answer tasks, molecular optimization is inherently open-ended, where multiple valid optimization paths may exist for a given input molecule. To construct this dataset, we considered two sampling strategies:

• Baseline Model-Generated Optimizations. Advantage: enables sampling large-scale and multi-step optimization paths for source molecules. Limitation: existing models often fail to preserve scaffold consistency, a critical requirement in drug design.

• Predefined Molecular Pairs. Advantage: ensures chemically meaningful transformations with verified property improvements. Limitation: a limited number of molecule samples.

To maintain scaffold consistency, we adopt the second strategy for ChemCoTBench, sourcing molecular pairs from Deep-Mol-Opt [18]. We perform Murcko scaffold similarity analysis to validate scaffold consistency, confirming that the selected pairs maintain structural integrity while optimizing target properties.

Reaction Prediction Task Design: Reaction prediction is a cornerstone of chemical research and industrial applications. From an academic standpoint, it is fundamental to understanding chemical reactivity, discovering novel transformations, and advancing the design of new molecules.
In practical applications, accurate reaction prediction accelerates drug discovery, facilitates materials science innovation, optimizes chemical manufacturing processes, and enables the automation of chemical synthesis. Our benchmark aims to
rigorously evaluate LLMs' capabilities in this multifaceted domain.

• Forward Reaction Prediction: This task, pivotal for academic discovery and industrial applications like drug development, evaluates an LLM's ability to predict both major products and, uniquely in our benchmark, byproducts from given reactants and reagents. Data is sourced from 100 distinct reaction classes from Pistachio. To enhance difficulty and assess deeper reasoning, the reaction type is deliberately omitted, requiring the model to first infer the plausible reaction type and then deduce potential products, thereby providing a comprehensive understanding of reaction outcomes crucial for optimization.

• Retrosynthesis Prediction: Essential for planning the synthesis of novel compounds, this task assesses an LLM's understanding of reverse chemical logic, specifically its capacity to identify strategic bond disconnections and propose valid precursor structures. We focus on single-step retrosynthesis, considering multi-step planning a more complex hybrid task, to directly evaluate core retrosynthetic reasoning. Data comprises 100 reaction classes from Pistachio, and the problem formulation includes providing reagents alongside the target product to help narrow the solution space and guide the LLM towards chemically relevant disconnections.

• Reaction Condition Prediction: Predicting optimal reaction conditions (catalysts, solvents, reagents) is critical for synthesis success, efficiency, and selectivity. This task tests an LLM's knowledge of how these components influence reaction pathways. Following Gao et al. [10] for data construction from USPTO [36] (retaining reactions with at most one catalyst, two solvents, and two reagents), we uniquely model this as a SMILES sequence generation task for catalyst, solvent, and reagent prediction, offering a more rigorous challenge than simple MCQ formats by requiring generation of specific chemical structures (in SMILES).
• Mechanism Prediction: Understanding reaction mechanisms, the step-by-step sequence of elementary reactions, is fundamental to chemistry, providing the "why" and "how" behind transformations and enabling rational design and optimization. This task evaluates an LLM's grasp of core mechanistic principles such as electron flow, intermediate stability, bond-making/breaking sequences, and the influence of conditions on pathways, addressing a significant gap in current LLM assessments, which often treat reactions as black boxes. Inspired by prior works [27, 28] but aiming for a more holistic probe, we introduce two subtasks: "Next Elementary Step Product Prediction," where the LLM, given a sequence of annotated elementary steps, predicts the subsequent product, testing its ability to comprehend and extrapolate mechanistic progression; and "Reaction Mechanism Selection (MCQ type)," where the LLM chooses the most plausible mechanism from several alternatives for a given reaction (reactants, conditions, reagents), assessing its capacity to discern how subtle changes in reagents or conditions dictate specific mechanistic routes, thereby evaluating both sequential understanding and discriminative judgment of mechanistic pathways.

C Experimental Details

C.1 Hardware Requirements

The experimental workload was supported by a dedicated GPU cluster comprising three high-performance computing nodes: an NVIDIA RTX A6000 (48GB VRAM) and an RTX 3090 (24GB VRAM) for LLM API scheduling and deployment of smaller models (1.5B/7B parameters), complemented by an NVIDIA A100 (80GB VRAM) node dedicated to large-scale LLM inference. This heterogeneous configuration achieved optimal resource allocation, with the A100's tensor cores and high-bandwidth memory handling memory-intensive model inferences while
the A6000/3090 pair efficiently managed concurrent API requests and lighter workloads. Storage requirements remained modest at approximately 1GB, encompassing benchmark datasets (SMILES strings and annotations), quantized model checkpoints, and evaluation logs, all hosted on an NVMe-backed filesystem for rapid data access.

C.2 Evaluation Metrics

To comprehensively assess model performance, we employ the following metrics:

Accuracy: The proportion of correctly predicted outcomes, providing a baseline measure of overall correctness. For reaction prediction tasks (e.g., forward reaction prediction), we report Top-1 accuracy, which requires that the model's highest-ranked prediction exactly matches the true product(s).

Mean Absolute Error: Quantifies the average magnitude of errors in continuous predictions, offering insight into precision for regression tasks (e.g., molecular property prediction).

Scaffold Similarity: Measured via the Tanimoto coefficient of molecular scaffolds, this evaluates structural conservation between generated and reference molecules. Values range from 0 (no scaffold similarity) to 1 (identical scaffolds), with higher scores indicating better preservation of core frameworks.

Improvement: Absolute gains in target properties, reported as mean improvement (average uplift across all samples) and max/min improvement (extreme cases highlighting model potential and limitations).

Success Rate: The fraction of generated molecules exceeding a predefined threshold (e.g., > 0.8 for solubility), reflecting practical utility.

Validity: The proportion of generated SMILES strings that are syntactically correct and can be successfully parsed into a chemical structure by RDKit [32].
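The metrics above reduce to simple formulas. The dependency-free sketch below mocks fingerprints as sets of bit indices; the paper's scaffold similarity is computed with RDKit fingerprints over Murcko scaffolds, so treat this as an illustration of the formulas only.

```python
# Minimal, dependency-free sketch of the evaluation metrics described above.

def accuracy(preds, refs):
    """Exact-match accuracy (Top-1 for reaction prediction)."""
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def mean_absolute_error(preds, refs):
    """Average |prediction - reference| for continuous property predictions."""
    return sum(abs(p - r) for p, r in zip(preds, refs)) / len(refs)

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprint bit sets (0 to 1)."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def success_rate(scores, threshold=0.8):
    """Fraction of generated molecules whose property exceeds the threshold."""
    return sum(s > threshold for s in scores) / len(scores)

print(accuracy(["CCO", "CC"], ["CCO", "CCC"]))   # 0.5
print(mean_absolute_error([2, 4], [3, 4]))       # 0.5
print(tanimoto({1, 2, 3}, {2, 3, 4}))            # 0.5
```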
C.3 Count Distribution Analysis

Figure 5: Error distribution analysis (absolute error per sample) for the ring counting and functional-group counting tasks.

For the two counting tasks under molecule understanding, ring counting and functional-group counting, we evaluated model performance using the Mean Absolute Error in the main experimental section to quantify overall accuracy. To provide a more granular analysis of LLMs' capabilities in these molecule-specific counting tasks, we further examined the error distribution across different models.

As illustrated in Fig. 5, the ring counting task proves significantly more challenging than the functional-group counting task. This is evident from the error distributions: for functional-group counting, the majority of errors fall within the 0.0–1.0 range, indicating relatively high accuracy. In contrast, ring counting exhibits higher errors, with most models (except Gemini-2.5-pro) showing an average MAE > 1.0. Gemini-2.5-pro stands out as the only model achieving consistently low errors in this task, suggesting superior structural reasoning capabilities. This disparity highlights the inherent difficulty of ring counting, which requires precise identification of cyclic structures, a more complex task than detecting localized functional groups. The results underscore the need for further refinement of LLMs in handling intricate molecular topologies.

D Case Study for Tasks in ChemCoTBench

To provide a more detailed analysis of the performance of different types of LLMs across various tasks in ChemCoTBench, we supplement the quantitative findings in the Experiment section with visualizations of model outputs. In the following three subsections, we present case visualizations from distinct subtasks: molecule understanding, molecule editing, and molecule optimization.
Table 5: This is a case study for molecule understanding. We visualize the Murcko Scaffold generation task in molecule understanding because it
can provide detailed information compared to the number prediction and correctness-distinguishing tasks. (Source molecules and predicted scaffolds appear as structure images in the original.)

Source Molecule | GT-Scaffold | Gemini-2.5-pro | Llama3.3-70B
(image)         | 100%        | 41.8%          | 27.8%
(image)         | 100%        | 38.6%          | 0.0%
(image)         | 100%        | 56.8%          | 15.4%
(image)         | 100%        | 33.3%          | 13.3%

D.1 Case Study for Molecule Understanding

The molecule understanding task in ChemCoTBench contains three types of subtasks: number prediction subtasks (functional-group counting and ring counting), distinguishing subtasks (ring system distinguishing, SMILES consistency distinguishing), and a scaffold generation subtask (Murcko scaffold generation). To visualize the detailed molecule structures generated by different types of LLMs, we select the Murcko scaffold generation subtask as the case visualization source. Table 5 presents four examples featuring distinct ring structures and functional groups. Through comparative analysis, we identify two key advantages of commercial LLMs over smaller open-source LLMs:

Superior SMILES Parsing Accuracy. Commercial LLMs (e.g., Gemini-2.5-Pro) correctly interpret molecular SMILES structures, with predicted structures closely matching the source molecules (only 1–2 bond position errors). In contrast, open-source models like LLaMA-3.1 generate structures largely inconsistent with the source molecules.

Robust Instruction-Following for Murcko Scaffolds. When tasked with extracting Murcko scaffolds, defined as the maximal connected framework retaining ring systems while removing non-critical functional groups, commercial LLMs adhere to the provided instructions and generate connected scaffolds. LLaMA-3.1, however, often outputs fragmented substructures, highlighting its limitations in instruction comprehension.

D.2 Case Study for Molecule Editing

The molecule editing task in ChemCoTBench contains three parts: adding a target functional group to the molecule, removing a target functional group from the molecule, and substituting a functional group in the molecule with a target functional group. In Table
6, we visualize samples from each subtask with different types of target functional groups. Two key observations emerge from the analysis:

Functional Group Recognition Directly Impacts Task Performance. Gemini-2.5-Pro demonstrates high precision in functional group identification, enabling accurate molecular editing. While Qwen3-235B correctly identifies functional groups, it frequently fails to execute valid molecular modifications. LLaMA-3.1 struggles with basic functional group recognition, severely limiting its task completion capability. This trend aligns with the models' performance in the functional-group counting subtask under molecule understanding, confirming a strong correlation between recognition accuracy and downstream success.

2D Molecular Structure Parsing Poses a Significant Challenge. Due to the inherently linear nature of SMILES notation, LLMs generally perform well on molecules with extended one-dimensional chains. However, their accuracy declines sharply when processing complex polycyclic systems with intricate 2D topologies.

D.3 Case Study for Molecule Optimization

Molecular optimization tasks involve improving three physicochemical properties (QED, Solubility, LogP) and three protein-related activation capabilities (DRD2, JNK3, GSK3-β). Since large language models perform poorly in optimizing protein-related activations, we focus on their ability to optimize physicochemical properties. Table 7 presents the optimization results of three LLMs, Gemini-2.5-pro, Qwen3-235B, and Llama3.3-70B, revealing two key observations:

LLMs exhibit significant potential in this task. Despite the inherent difficulty of molecular optimization, we observed that these models introduce diverse functional groups, including halogens, aldehydes, hydroxyls, and amines, indicating broad chemical adaptability.
However, some modifications led to negative optimization, likely due to limited understanding of the underlying physicochemical principles—a gap that could be
addressed through targeted training.

Commercial LLMs demonstrate bolder optimization strategies compared to open-source models. For instance, Gemini-2.5-pro frequently performs skeleton-level modifications (e.g., additions or deletions), whereas Qwen3-235B and Llama3.3 tend toward conservative insertions with minimal structural changes. This contrast highlights the greater flexibility and potential of commercial LLMs in molecular optimization.

Table 6: The case study for functional-group addition, deletion, and substitution in the molecule editing task. For better comparison, we visualize the predicted results from Gemini-2.5-pro (reasoning LLM), Qwen3-235B (non-reasoning LLM), and Llama3.3-70B (non-reasoning LLM) and show the outstanding chemical reasoning ability of Gemini compared to other open-sourced LLMs. (Source molecules and predicted structures appear as images in the original; "Invalid SMILES" marks outputs that could not be parsed.)

Add Functional Groups:
- Add the amide group while keeping the molecule scaffold unchanged.
- Add the amine group while keeping the molecule scaffold unchanged.
- Add the benzene ring group while keeping the molecule scaffold unchanged.

Delete Functional Groups:
- Delete aldehyde group while keeping the molecule scaffold unchanged.
- Delete hydroxyl group while keeping the molecule scaffold unchanged.
- Delete nitro group while keeping the molecule scaffold unchanged.

Substitute Functional Groups:
- Remove aldehyde group and add halo group for the molecule. (two examples)

Table 7: The case study for molecule optimizations.
(Source and optimized molecules appear as structure images in the original; ∆ denotes the property improvement per example.)

            Gemini-2.5-pro  Qwen3-235B  Llama3.3-70B
LogP:       ∆ = 1.16        ∆ = 0.51    ∆ = −3.76
            ∆ = 1.68        ∆ = 0.68    ∆ = −0.39
            ∆ = 0.68        ∆ = 0.01    ∆ = 0.0
QED:        ∆ = 0.38        ∆ = 0.01    ∆ = −0.03
            ∆ = 0.34        ∆ = 0.0     ∆ = −0.03
Solubility: ∆ = 3.47        ∆ = 0.87    ∆ = 0.48
            ∆ = 1.08        ∆ = 0.87    ∆ = 0.52

E Task Example

To better demonstrate the data structure of ChemCoTBench and the large-scale CoT dataset, we visualize representative samples from four distinct tasks: molecule understanding, molecule editing, molecule optimization, and reaction prediction. As illustrated in Figures 6, 7, 8, and 10, each figure presents sample cases from a different task, with text highlighted in red indicating the chemical-specific prompt design.

Question example for Molecule Understanding

You are a chemical assistant. Please determine whether the ring_system_scaffold is in the Molecule.
Input: a molecule's SMILES string, a Ring System Scaffold.
Output: yes / no.
Definition: The ring system scaffold consists of one or more cyclic (ring-shaped) molecular structures.
Source Molecule: CC(C)n1cnc2c(NCc3ccc(-c4ccccc4)cc3)nc(N(CCO)CCO)nc21, IUPAC of Source Molecule: 2-[2-hydroxyethyl-[6-[(4-phenylphenyl)methylamino]-9-propan-2-ylpurin-2-yl]amino]ethanol.
Ring system scaffold: c1ccc(-c2ccccc2)cc1.
Your response must be directly parsable JSON format:
{{
  "input_structure": "original input structure",
  "molecule_structure_analysis": "describe the structure of the input Molecule",
  "scaffold_analysis": "describe the ring system scaffold",
  "matching_analysis": "matching the scaffold with the molecule",
  "output": "Yes / No"
}}
DO NOT output other text except for the answer. If your response includes ```json```, regenerate it and output ONLY the pure JSON content.

Figure 6: Task example for molecule understanding
|
https://arxiv.org/abs/2505.21318v1
|
subtask: Ring System Counting Task.

Question example for Molecule Editing
You are a chemical assistant. Given the SMILES structural formula of a molecule, help me add a specified functional group and output the improved SMILES sequence of the molecule.
Input: Molecule SMILES string, Functional Group Name. Output: Modified Molecule SMILES string.
Source Molecule: O=S(=O)(Cc1nc(-c2cccs2)no1)c1ccc2ccccc2n1, Instruction: Modify the molecule by adding an aldehyde.
Your response must contain the step-by-step reasoning, and must be directly parsable JSON format:
{{
  "molecule_analysis": "[your reasoning] Analyze the functional groups and other components within the molecule",
  "function_group_introduce_strategy": "[your reasoning] Determine how and at which site the new group can be most reasonably added",
  "feasibility_analysis": "[your reasoning] Assess the chemical viability of the proposed modification",
  "output": "Modified Molecule SMILES"
}}
DO NOT output other text except for the answer. If your response includes ``` json ```, regenerate it and output ONLY the pure JSON content.
Figure 7: Task example for molecule editing subtask: Functional-Group Adding Task.

Question example for Molecule Optimization
You are a chemical assistant. Optimize the Source Molecule to improve the GSK3-beta property (Glycogen Synthase Kinase 3-beta inhibition) while following a structured intermediate optimization process. IUPAC names are provided to resolve ambiguities in SMILES. For functional groups, IUPAC takes priority over SMILES. Note these key group distinctions, which are difficult to tell apart: (1) Piperazine (1,4-diazacyclohexane): C1CNCCN1 (2) Piperidine (azinane): C1CCNCC1 (3) Pyrrole (azole): C1=CC=CN1
Source Molecule: c1ccc(-c2cc(NCc3cccnc3)n3nccc3n2)cc1, IUPAC of Source Molecule: 5-phenyl-N-(pyridin-3-ylmethyl)pyrazolo[1,5-a]pyrimidin-7-amine.
Always output in strict, raw JSON format.
Do NOT include any Markdown code block wrappers (e.g., ``` json ``` or ```). Your response must be directly parsable JSON format:
{{
  "Structural Analysis of Source Molecule": "",
  "Property Analysis": "",
  "Limitation in Source Molecule for Property": "",
  "Optimization for Source Molecule": "",
  "Final Target Molecule": "SMILES"
}}
DO NOT output other text except for the answer. If your response includes ``` json ```, regenerate it and output ONLY the pure JSON content.
Figure 8: Task example for molecule optimization subtask: Optimizing GSK-3β Task.

Question example for Next Elementary-step Product Prediction
We have one typical reaction (reaction class: 'Bromo Sonogashira coupling', starting reactants: 'CCOC(=O)C(OC(C)(C)C)c1c(C)cc2ccc(Br)cc2c1-c1ccc(Cl)cc1.C#CC(C)(C)O', reagents: 'CCN(CC)CC.C1CCOC1.CCOC(=O)C(OC(C)(C)C)c1c(C)cc2ccc(Br)cc2c1-c1ccc(Cl)cc1.C#CC(C)(C)O.[Cl-].[Cu]I.[NH4+]', reaction condition: 'Reaction with Pd coordinated with 3 or 4 ligands'). Here are the previous elementary reaction steps:
Elementary Step 1: {
  "reactants": c1ccc([PH](c2ccccc2)(c2ccccc2)[Pd]([PH](c2ccccc2)(c2ccccc2)c2ccccc2)([PH](c2ccccc2)(c2ccccc2)c2ccccc2)[PH](c2ccccc2)(c2ccccc2)c2ccccc2)cc1,
  "products": c1ccc([PH](c2ccccc2)(c2ccccc2)[Pd]([PH](c2ccccc2)(c2ccccc2)c2ccccc2)[PH](c2ccccc2)(c2ccccc2)c2ccccc2)cc1.c1ccc(P(c2ccccc2)c2ccccc2)cc1,
  "step annotation": Ligand leaving,
}
Elementary Step 2: {
  "reactants": c1ccc([PH](c2ccccc2)(c2ccccc2)[Pd]([PH](c2ccccc2)(c2ccccc2)c2ccccc2)[PH](c2ccccc2)(c2ccccc2)c2ccccc2)cc1,
  "products": c1ccc([PH]([Pd][PH](c2ccccc2)(c2ccccc2)c2ccccc2)(c2ccccc2)c2ccccc2)cc1.c1ccc(P(c2ccccc2)c2ccccc2)cc1,
  "step annotation": Ligand leaving,
}
Now, we want to predict the next elementary reaction step. Currently we know the basic information:
"current_step_info": {
  "reactants": [Cu]I.C#CC(C)(C)O,
  "step annotation": Copper activation,
}
Under the same reaction condition and reagents, please give me the products of the next elementary reaction step.
Just return the SMILES of the prediction. Your response must contain a directly parsable JSON format: { "pred_smi": str }
Figure 9: Task example for mechanism prediction subtask: Next Elementary-step Product Prediction.

Question example for Mechanism Route Selection
For reaction class: 'Carboxylic acid + amine condensation', under the condition of 'Condensation using BOP' and given reagents (written in SMARTS format) '[#8]=[#6]-[#8].[#7,#16,#8].[#7]-[#8]-[P+]', which of the following descriptions is the
correct elementary reaction stages description, considering the mechanism of this type of reaction?
Choices:
A: Carboxylic acid deprotonation → Reaction of carboxylic acid and HATU/HBTU → Addition of HOBt (1-hydroxybenzotriazole) into carboxylic acid-HATU/HBTU → Amine attacks HOBt-carboxylic acid complex → Proton exchange between amide and HOBt
B: Proton exchange → Formation of a single bond between carboxylic acid and protonated DCC → Addition of amine (thiol) into carboxylic acid-DCC complex → Cleavage into amide and urea → Proton exchange between amide and urea
C: Carboxylic acid deprotonation → Reaction of carboxylic acid and CDI → Addition of imidazole into carboxylic acid-CDI → Amine attacks imidazole-carboxylic acid complex → Proton exchange between amide and imidazole
D: Addition of alcohol under the acidic conditions / deprotonation of alcohol → Neutralization of protonated ester / Addition of alcohol under the basic conditions
E: Proton exchange → Formation of a single bond between carboxylic acid and protonated DCC → Addition of HOBt (1-hydroxybenzotriazole) into carboxylic acid-DCC complex → Amine attacks HOBt-carboxylic acid complex → Proton exchange between amide and HOBt
F: Deprotonation of carboxylic acid → Nucleophilic substitution
G: Carboxylic acid deprotonation → Reaction of carboxylic acid and BOP → Addition of HOBt (1-hydroxybenzotriazole) into carboxylic acid-HATU/HBTU → Amine attacks HOBt-carboxylic acid complex → Proton exchange between amide and HOBt
H: Addition of amine into carboxylic acid → Deprotonation of amine → Hydroxide ion leaves
I: Addition to thionyl chloride → Addition of chloride → Pseudo-pericyclic expulsion of SO2, HCl → Nucleophilic addition → Nucleophilic addition → Deprotonation
J: Protonation of carbonyl or deprotonation of alcohol → Alcohol addition to carbonyl → Protonation or deprotonation of complex → Water or hydroxide ion leaving → Proton exchange.
Return the choice (capital letter) in JSON format: { "choice": str # (e.g. 'A'/'B') }
Figure 10: Task example for mechanism prediction subtask: Mechanism Route Selection (MechSel).
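The task prompts above all demand raw JSON and forbid Markdown code-block wrappers. A small helper of our own design (not part of any released benchmark code) shows how such model responses can be normalized before scoring; the function name and fence-stripping behavior are illustrative assumptions:

```python
import json
import re

def parse_model_json(raw: str) -> dict:
    """Parse a model response that should be raw JSON but may arrive
    wrapped in a Markdown fence such as ```json ... ``` or bare ```."""
    text = raw.strip()
    # Strip an optional fence wrapper around the whole response.
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    return json.loads(text)

# Task-specific fields can then be read from the parsed dict:
resp = '```json\n{"choice": "A"}\n```'
assert parse_model_json(resp)["choice"] == "A"
assert parse_model_json('{"output": "Yes"}')["output"] == "Yes"
```

Responses that are neither valid JSON nor a fenced JSON block raise `json.JSONDecodeError`, which an evaluation harness could count as a failed sample.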
arXiv:2505.21322v1 [cs.AI] 27 May 2025
Proceedings of Machine Learning Research 288:1–19, 2025

Assured Autonomy with Neuro-Symbolic Perception

R. Spencer Hallyburton (spencer.hallyburton@duke.edu)
Miroslav Pajic (miroslav.pajic@duke.edu)
Duke University

Keywords: Perception. Autonomy. Cyber-physical system security.

Abstract
Many state-of-the-art AI models deployed in cyber-physical systems (CPS), while highly accurate, are simply pattern-matchers. With limited security guarantees, there are concerns for their reliability in safety-critical and contested domains. To advance assured AI, we advocate for a paradigm shift that imbues data-driven perception models with symbolic structure, inspired by a human's ability to reason over low-level features and high-level context. We propose a neuro-symbolic paradigm for perception (NeuSPaPer) and illustrate how joint object detection and scene graph generation (SGG) yields deep scene understanding. Powered by foundation models for offline knowledge extraction and specialized SGG algorithms for real-time deployment, we design a framework leveraging structured relational graphs that ensures the integrity of situational awareness in autonomy. Using physics-based simulators and real-world datasets, we demonstrate how SGG bridges the gap between low-level sensor perception and high-level reasoning, establishing a foundation for resilient, context-aware AI and advancing trusted autonomy in CPS.

1. Introduction
Over the past decade, AI research has been primarily focused on optimizing black-box models with vast domain-specific training data. While benchmark performance has improved, the effort spent tuning traditional models has not led to significant assuredness guarantees. It is well known that even minor perturbations to the input data of AI models, both natural and adversarial, have led to high-profile unintended and sometimes catastrophic failures (e.g., from Eykholt et al. (2018), Finlayson et al.
(2019)), raising concerns about AI's reliability in safety-critical cyber-physical systems (CPS). Defensive techniques such as adversarial training Shafahi et al. (2020), distillation Papernot et al. (2016), and ensembling Jia et al. (2019) attempt to secure models. However, adaptive adversaries consistently overcome such defenses Carlini and Wagner (2017), suggesting that deep neural networks (DNNs) are statistical pattern-matchers rather than true high-level reasoners. Current security analyses remain incomplete, focusing largely on structured noise like Lp-norm perturbations that fail to capture real-world adversary complexities. Examining recent attacks on multi-sensor fusion, such as the frustum attack Hallyburton et al. (2022), we argue that traditional DNN architectures face innate and unavoidable vulnerabilities to attacks that alter a scene's semantic structure.
Achieving robust perception necessitates moving beyond reactive defenses to existing architectures and integrating structured reasoning with statistical learning. In contrast to DNNs, human perception seamlessly integrates low-level feature recognition with high-level contextual and commonsense reasoning, enabling us to interpret ambiguous, noisy, or incomplete data because of an ability to infer object relationships, detect inconsistencies, and reason about cause-and-effect interactions in a scene. Given the fundamental vulnerabilities of existing DNNs, we advocate for an incorporation of symbolic reasoning into perception models to enhance reliability and robustness.
© 2025 R.S. Hallyburton & M. Pajic.
In particular, we propose a paradigm shift in sensor fusion from vulnerable pattern-matching DNNs to logically-grounded reasoning algorithms that combine neural and symbolic components. Transcending black-box algorithms, a neuro-symbolic approach
|
https://arxiv.org/abs/2505.21322v1
|
allows for incorporating logical constraints and commonsense knowledge, an approach that promises to enhance robustness in safety-critical applications such as autonomous driving (AD) and unmanned aerial vehicles (UAVs).
Our neuro-symbolic approach to sensor fusion commences with a joint detection and graph generation step leveraging advancements in scene graph generation (SGG), a promising backbone for grounding black-box inference in high-level logical relationships. SGG algorithms build graphical representations of scenes by identifying objects (nodes) and illuminating salient interactions between them (edges). Applied to perception, SGG can enrich situational awareness with contextualized high- and low-level concepts that traditional object detectors are ill-suited to discern.
Securing single- and multi-sensor fusion with SGG requires the design of specialized integrity algorithms for scene-graph-based anomaly detection. We propose a two-stage integrity framework where graphs from each sensor are first evaluated against physics-based knowledge bases to ensure compliance with commonsense understanding (per-sensor). Graphs from all sensors are then sent to a multi-sensor graph consistency evaluator that considers the compatibility of nodes and edges across graphs (cross-sensor). Insights from graph-based integrity are used to flag anomalous inference results and weight information updates in a downstream graph-informed sensor fusion step.
In this early work, we present feasibility case studies to illustrate the promising potential of neuro-symbolic sensor fusion. We walk through case studies based on both real-world datasets (nuScenes, Caesar et al. (2020)) and physics-based simulators (CARLA, Dosovitskiy et al. (2017)). In the camera domain, we use a foundation model to predict graphs from RGB camera images. In the LiDAR domain, we use a rule-based approach because the data already fully resolve 3D Cartesian space.
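A rule-based relation extractor of this kind can be sketched in a few lines. The axis convention (ego at the origin, +x forward, +y left) and the 20 m near/far threshold are our own illustrative assumptions, not the paper's actual rules (those are in its Appendix C.3):

```python
import math

# Illustrative sketch of rule-based scene-graph relations from 3D box
# centers. Assumed convention: ego at origin, +x forward, +y left, meters.
NEAR_M = 20.0  # assumed near/far threshold

def relations(name_a, center_a, name_b, center_b):
    """Emit (subject, predicate, object) triplets for one ordered pair."""
    triplets = []
    if center_a[1] > center_b[1]:                      # further left
        triplets.append((name_a, "left_of", name_b))
    if center_a[0] < center_b[0]:                      # closer to ego
        triplets.append((name_a, "front_of", name_b))
    dist = math.dist(center_a[:2], center_b[:2])       # ground-plane distance
    triplets.append((name_a, "near" if dist < NEAR_M else "far", name_b))
    return triplets

# A car 10 m ahead and left of ego, a truck 30 m ahead and right:
print(relations("car", (10.0, 3.0, 0.0), "truck", (30.0, -2.0, 0.0)))
```

Running all ordered pairs of detected boxes through such functions yields the edge set of a rule-based scene graph.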
Object detections and graphs are then compared to illuminate any inconsistencies between them. We show that even when considering challenging attacks such as the frustum attacks against the LiDAR sensor, we can easily identify incompatible scene graphs between the camera and LiDAR. To the best of our knowledge, this is the first single-platform approach demonstrated to detect such attacks, and results suggest the potential for neuro-symbolic methods to significantly improve security guarantees for perception in CPS. These findings motivate planned future research in designing full-stack neuro-symbolic perception and integrity.
Contributions. In summary, the contributions of this work include:
• Vulnerability analysis: demonstration of the vulnerability of DNN models to stealthy, undetectable attacks on sensing that alter the semantic structure of the scene.
• Neuro-symbolic perception: design of a novel neuro-symbolic paradigm for perception jointly performing detection, classification, and graph-building from sensor data.
• Neuro-symbolic integrity: architecting of neuro-symbolic integrity to reason over per-sensor and cross-sensor consistency against commonsense and physics-informed knowledge.
• Feasibility study: case studies in real-world and simulated datasets demonstrating first single-platform detection of challenging attacks, previously thought to be stealthy.
2. Sensor Fusion in Autonomy
Perception. DNNs are the state-of-the-art in object detection. Widely used algorithms/architectures for images (img) include convolutional neural networks (CNNs), Faster R-CNN Ren et al. (2016) and YOLO Redmon (2016), while for point cloud (pc) analysis, PointPillars Lang et al. (2019) and PV-RCNN Shi et al. (2020) are
used. Recent transformer-based detectors, such as DETR Carion et al. (2020), offer an end-to-end approach minimizing the use of hand-crafted heuristics.
Multi-sensor fusion. Fusing data improves observability, robustness, and attack resilience. Multi-sensor fusion typically occurs at the semantic level Durrant-Whyte and Henderson (2016). Such approaches are used in AD Baresi and Tamburri (2023) and UAVs Fei et al. (2023). Appendix A formalizes the semantic-level multi-sensor fusion problem.
3. Vulnerability of Perception & Sensor Fusion
In this section, we discuss how attacks on perception that alter the semantic structure of a scene are indefensible by reactive hardening of algorithms due to the equivariance of DNNs. We illustrate that this is of significant concern by showing a stealthy frustum attack with a high degree of attack success.
3.1. Equivariance Vulnerability
In response to DNNs' vulnerability to out-of-distribution attacks (e.g., Lp-norm), certified robustness techniques including sub-sampling/ensembling Jia et al. (2019) were proposed, yielding security guarantees. However, despite the effectiveness of certified robustness against Lp attacks, these techniques fail to defend against many real-world attacks that fundamentally alter the semantic structure of the scene.
Specifically, point-based (e.g., LiDAR) DNNs are designed so that features derived on a collection of points are invariant to spatial translations (see Bronstein et al. (2021) on equivariance). Thus, no amount of sub-sampling or ensembling can mitigate translation attacks, because the original features are retained when points from a legitimate object are (adversarially) moved to a new location. Spoofing attacks where injected points pattern a legitimate object yield similar outcomes. While equivariance enhances model accuracy and generalization, it unfortunately introduces a vulnerability that adversaries can exploit, rendering certified robustness techniques ineffective.
3.2.
Stealthy Attacks on Sensor Fusion
Many systems combine dense 2D image data with sparse 3D point clouds. Prior work showed attacks on 3D data in 2D-3D fusion are stealthy if the attacker retains consistency with unattacked 2D data Hallyburton et al. (2023a). We consider an optimal frustum attack Hallyburton et al. (2022) (derived in Appendix B.4) that exploits the DNN equivariance property to obtain such a stealthy outcome. Fig. 1 shows an adversary shifting a car's 3D bounding box while remaining stealthy by preserving high overlap with 2D detections. The intersection over union (IoU) threshold determines the maximum translation, enabling displacements over 40 m while maintaining IoU > 0.9. Due to the retained consistency with unaltered 2D detections, single-frame integrity checks fail, making the attack undetectable. Such attacks pose serious risks for path planning, leading to safety incidents.
4. Neuro-Symbolic AI: A New Paradigm for Perception
The widespread effectiveness of attacks on perception underscores the limitations of traditional algorithms in defending against manipulations to the semantic structure of a scene. Thus, motivated to overcome the pattern-matching nature of DNNs, we propose and evaluate a novel neuro-symbolic approach to perception that jointly detects objects and reasons over their semantic relationships.
Figure 1: Attacker can alter the semantic understanding of the scene while being stealthy to multi-sensor fusion. Translating existing 3D objects (denoted with white box) backwards or forwards (resulting in the detected 'moved' red
boxes) from ego maintains consistency with the 2D frustum in the image plane. The attacker runs an optimization to move the object as far back as possible while retaining at least a minimum IoU (overlap) when projected into the 2D image.
4.1. Overview of Approach
Unlike DNNs, human perception seamlessly integrates low-level feature recognition with high-level reasoning. Humans can infer relationships, identify inconsistencies, and reason about cause-and-effect from vision alone. To safeguard the future of AI-driven autonomy, it is imperative to develop assured perception algorithms that incorporate enhanced contextual awareness and reasoning.
One promising approach to neuro-symbolic perception is scene graph generation (SGG). In addition to detecting objects, SGG yields inter-object relationships describing scene composition. Whereas previous attempts to secure perception relied on hardening DNNs to specific attacks, we propose reasoning over semantic scene graphs to evaluate general data integrity. Such graphical models are able to hold high-level semantic insights that 2D object detectors alone fail to capture. In this section, we discuss graph generation, integrity evaluation, and graph-informed sensor fusion.
4.2. System Components
The proposed neuro-symbolic framework consists of the following components, depicted in Fig. 2.
4.2.1. Joint Perception and Graph Generation
Graphs codify the structure of a scene, and SGG explicitly infers objects and their relationships to yield high-level, context-driven scene understanding, often lifting 3D-like relationships (e.g., relative positions) directly from 2D data such as images. Early CNN-based SGG approaches struggled to effectively capture relationships due to the inability of CNNs to maintain features between spatially separated objects Johnson et al. (2015).
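The translation-equivariance argument from Section 3.1 can be made concrete in a few lines: any feature computed relative to a point set's centroid is identical after a rigid translation, so a detector built on such features cannot distinguish a legitimate object from one an attacker has moved. The "feature" below is our own illustrative stand-in, not an actual detector backbone:

```python
def centered_feature(points):
    """Toy translation-invariant feature: 2D points relative to their
    centroid. Real point-cloud backbones pool similar centered features."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return sorted((round(p[0] - cx, 9), round(p[1] - cy, 9)) for p in points)

obj = [(10.0, 2.0), (11.0, 2.5), (10.5, 3.0)]      # legitimate object points
moved = [(x + 40.0, y) for (x, y) in obj]          # adversarial 40 m shift
# The feature is identical, so no sub-sampling or ensembling over it
# can reveal the translation:
assert centered_feature(obj) == centered_feature(moved)
```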
However, recent transformer-based methods have demonstrated superior capabilities in modeling interactions across the input space, significantly advancing SGG performance Carion et al. (2020). We describe approaches to SGG below and present examples in Figs. 3, 4, and 5. Additional discussion on SGG algorithms is provided in Appendix C.
Figure 2: Neuro-symbolic paradigm for perception performs object detection, classification, and scene graph generation jointly, enabling context-based reasoning over, e.g., spatial relationships from multi-modal data. Reasoning over the graphical models is informed by physics-based knowledge bases and happens both for each sensor and between sensors before impacting sensor fusion.
Geometric Functions (Rule-Based). Geometric functions over a 3D Cartesian space define spatial relationships between objects. 3D objects detected from 3D point clouds are passed to manually-defined functions to build relationships such as proximity, adjacency, occlusion, and orientation. This approach is not effective for 2D image data given a camera's lack of explicit distance resolution. Fig. 3 and Appendix C.3 describe rule-based SGG from 3D inputs.
Foundation Models. Foundation models, including vision-language transformers like CLIP Radford et al. (2021) and ViLT Kim et al. (2021), facilitate off-the-shelf SGG by leveraging extensive multi-modal knowledge acquired during training. These models naturally bridge visual and linguistic information, effectively capturing relational semantics, global context, and object interactions. Fig. 4 demonstrates SGG using a vision-language foundation model on camera images. To the best of our knowledge, foundation models are not yet capable of SGG from point cloud data.
Specialized SGG Models. Transformers excel at modeling complex object
relationships and global contexts, making them ideal for SGG. Recent specialized models, such as EGTR Im et al. (2024) and SGTR Li et al. (2022), integrate object proposals with relationship prediction heads to generate comprehensive scene graphs. Fig. 5 illustrates the pipeline and output of a specialized SGG model. SGG models are capable of operating on image or point cloud data, if sufficiently trained.
4.2.2. Per-Sensor Graph-Based Integrity Reasoning
By structuring perception inference outcomes into graphs, SGG captures spatial and semantic relationships from real-world sensor data. To evaluate the self-consistency of graph structures to support assured decision-making and safety in autonomy, downstream integrity then reasons over the graph contents.
Figure 3: (a) BEV projection of a LiDAR point cloud from the nuScenes dataset, shown with box detections. (b) Rule-based scene graph built from the 3D box detections; nodes (blue; e.g., car, bus, truck) are connected via edge relations (red; e.g., left_of, front_of, near/far).
Figure 4: (a) Raw camera image feeds a foundation model that (b) directly builds a scene graph (e.g., Red Van beside Truck, Traffic Lights above Crosswalk, Bus behind). Node and edge types differ from the rule-based approach because the foundation model yields nodes and relationships based on a large multi-modal training process.
Figure 5: Joint perception and scene graph generation performed using the EGTR specialized SGG model (trained to detect nodes and predict edges) from Im et al. (2024).
Regressed graph connects nodes with relations in (subject, predicate, object) format.
We introduce an approach to reasoning leveraging knowledge graph embeddings (KGEs) and constraint satisfaction evaluation (CSE), both concepts highlighted in Dimasi (2023).
KGEs are structured representations of commonsense information and can enhance the assessment of scene graph integrity Chen et al. (2020). Anomaly detectors compare extensive knowledge from KGEs with outcomes of SGG to validate real-world observed concepts against generated relationships. Integrating knowledge graphs allows models to infer and correct potentially erroneous scene interpretations based on well-established semantic relationships. This approach enhances the robustness and integrity of SGG, leading to more accurate representations of scenes.
Constraint satisfaction provides an effective means for assessing the integrity of scene graph outputs by enforcing logical and semantic requirements on the relationships among detected objects. By applying domain-specific constraints, anomaly detectors can either reject unrealistic scene graph outputs or guide corrective measures, ultimately ensuring outputs align more closely with real-world knowledge and expectations, such as the logical requirements outlined in Giunchiglia et al. (2023).
4.2.3. Cross-Sensor Graph-Based Integrity Reasoning
Classical anomaly detectors, such as χ2 innovation tests on state estimators or inter-sensor assignment between detections from multiple sensors, only evaluate consistency at the individual detection level.
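The per-sensor constraint-satisfaction idea above can be sketched over triplet sets. The two constraints below are our own illustrative examples of "domain-specific constraints", not the paper's actual knowledge base:

```python
# Illustrative constraint-satisfaction check over scene-graph triplets.
# Constraint examples (assumed for this sketch): an asymmetric spatial
# relation cannot hold in both directions, nor alongside its opposite.
OPPOSITE = {"left_of": "right_of", "front_of": "behind"}

def violations(triplets):
    """Return triplets that contradict another triplet in the same graph."""
    tset = set(triplets)
    bad = []
    for (s, p, o) in tset:
        if p in OPPOSITE and (s, OPPOSITE[p], o) in tset:
            bad.append((s, p, o))   # relation and its opposite both present
        if p in OPPOSITE and (o, p, s) in tset:
            bad.append((s, p, o))   # asymmetric relation in both directions
    return bad

graph = {("car", "left_of", "bus"), ("bus", "left_of", "car"),
         ("truck", "front_of", "bus")}
print(violations(graph))   # flags the contradictory left_of pair
```

A per-sensor integrity module could reject a graph with violations outright or down-weight the offending edges before fusion.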
In contrast, our neuro-symbolic cross-sensor integrity function takes into account the full graph of detections and their relationships with other detections. This approach to cross-sensor integrity is particularly effective in securing perception against attacks that alter the semantic understanding of the scene, such as a false positive/negative or translation attack. Even if graphs are self-consistent, they may not be consistent across sensors; cross-sensor evaluation enhances security robustness if attackers cannot consistently compromise all sensors at once.
Strategies for cross-sensor graph integrity reasoning include both brute-force and learned-inference algorithms. With the brute-force approach, nodes are matched between graphs, and edges traveling between nodes are then matched based on the node matching. Any edges without a match are evaluated to determine whether the lack of agreement is a product of noisy sensor data or an attack. A complementary approach is to feed scene graphs to graph neural networks (GNNs) to evaluate them for consistency. The result of graph integrity is a per-node, per-edge classification of (in)consistent.
Beyond anomaly detection, cross-sensor integrity can be used to hypothesize the exact perturbations responsible for any observed inconsistencies, such as identifying adversarial manipulations affecting spatial relationships. This deeper reasoning and threat identification enhances responsiveness to complex adversarial scenarios and presents first-of-a-kind resilience in autonomy.
4.2.4. Graph-Informed Sensor Fusion
While traditional sensor fusion algorithms update situational awareness with detections and per-object features, providing fusion with relationships from graphs can significantly enhance performance and robustness to challenging natural and adversarial circumstances. Scene graphs illuminate high-level semantics that object detectors alone are incapable of providing.
Such information can help align heterogeneous data (e.g., resolve conflicts in inter-sensor object assignment) and reduce ambiguity due to, e.g., occlusions. Integrating graphical information into fusion facilitates enhanced contextualization of knowledge and ensures consistency across diverse sensor modalities, ultimately improving situational awareness and reliability in perception systems.
5. Feasibility Study
We present a feasibility study demonstrating how neuro-symbolic methods can detect the previously stealthy attacks on perception presented in Section 3.2. We employ ground-vehicle datasets from both the physics-based simulator CARLA and the real-world nuScenes dataset. While multiple scenes were analyzed, due to space limitations we present detailed results from one representative CARLA scene. Additional analyses and case studies are provided in Appendix D.
Figs. 6(a) and 6(b) present benign camera and LiDAR data from a scene that includes four detected objects: a nearby car, a mid-distance van and bicycle, and a distant truck. Traditional camera- and LiDAR-based detectors accurately regress bounding boxes for these objects.
5.1. Scene Graph Generation
We employ a foundation model to construct graphs from images through natural language prompts. Despite using 2D images, these models can effectively infer 3D spatial relationships between objects. Each image is analyzed with the query: "Build a scene graph from this image." The foundation model outputs (subject, predicate, object) triplets where subject and object are detected instances in the scene and predicate falls within a set of relationships such as in front of, near, occluding, following. The set of considered
relationships for this work is described in Appendix C. An example is illustrated in Fig. 6(a). For LiDAR data, scene graphs are generated by first detecting 3D bounding boxes using classical detectors and then passing the boxes to rule-based geometric relationship functions. This study focuses on proximal relationships (e.g., front of, left of, near/far). Appendix C.3 describes all considered rules. Fig. 6(b) illustrates a LiDAR-derived scene graph.
5.2. Adversary Threat Model
Section 3 described stealthy attacks exploiting differences in sensor resolution, such as frustum attacks, where an object is repositioned in 3D space yet maintains consistency with 2D data, thus passing through multi-sensor fusion undetected. Fig. 6(c) demonstrates such an attack where the van's position is significantly translated in the LiDAR data. This attack is stealthy to traditional DNN-based detectors due to the retained consistency when 3D detections are projected to the image.
5.3. Integrity Evaluation via Scene Graphs
The scene graph of the 2D image (Fig. 6(a)) aligns with the original, unattacked LiDAR-derived graph (Fig. 6(b)). In contrast, the graph from compromised LiDAR data (Fig. 6(c)) shows clear inconsistencies with the image's graph. Cross-sensor integrity in Fig. 6(d) highlights that inconsistencies in the van-truck relationship across sensors are easily identifiable. Neuro-symbolic perception and integrity extract meaningful semantic information from sensor data, enabling attack detection even though the manipulated 3D box remains consistent with the unattacked 2D boxes.
The neuro-symbolic SGG pipeline offers the first method that secures perception against attacks exploiting asymmetric sensor resolutions, such as the frustum attack. While we use this style of attack as a case study, perception can benefit broadly from neuro-symbolic reasoning.
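At its core, this cross-sensor check reduces to comparing triplet sets from the two sensors. A toy sketch, with made-up labels loosely mirroring the van-truck inconsistency of the case study:

```python
def cross_sensor_inconsistencies(graph_a, graph_b):
    """Edges present in exactly one sensor's graph (symmetric difference).
    A real system would also weigh sensor noise before declaring an attack."""
    return (graph_a - graph_b) | (graph_b - graph_a)

# Toy graphs: the attacker translates the van in LiDAR, flipping its
# spatial relation to the truck. Labels and relations are illustrative.
camera = {("van", "front_of", "truck"), ("car", "near", "ego")}
lidar  = {("truck", "front_of", "van"), ("car", "near", "ego")}

print(cross_sensor_inconsistencies(camera, lidar))
# The disagreeing van-truck edges are reported, flagging the frustum
# attack, while the consistent car-ego edge is not.
```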
While DNNs detecting objects from images only capture 2D information, SGG lifts relationships from the image that contain 3D insights by leveraging context in the scene. The extraction of relational scene graphs facilitates a more insightful comparison between data sources and supports secure and assured autonomy.
Figure 6: (a) Foundation model jointly detects objects and builds a scene graph from the camera image. (b) DNN yields detections on benign LiDAR data, and rules construct the scene graph. (c) Attacker manipulates the van, translating it (red box) away from the ego vehicle; the attacked box, when projected to the front view, is still consistent with the 2D detections, so camera detections alone cannot detect the attack. (d) Graph-building lifts purely 2D camera data to a relational 3D space by inferring positional relationships with context; reasoning on subgraphs illuminates inconsistencies in semantic concepts between the image graph and the attacked LiDAR graph, allowing detection of previously thought-to-be-stealthy attacks.
6. Challenges to Realizing Neuro-Symbolic Perception
Real-time constraints. Deploying foundation models (e.g., LLMs, vision transformers) in CPS is challenging due to high computational demands and cloud latency. To address this, we propose training specialized SGG algorithms
https://arxiv.org/abs/2505.21322v1
on autonomy-specific datasets for efficient, real-time SGG.

Dataset construction. Constructing high-quality datasets while handling edge cases (e.g., zero-/few-shot) and real-world complexity is a key challenge for neuro-symbolic algorithms Gilpin and Ilievski (2021). We employed a dataset generation pipeline using AVstack Hallyburton et al. (2023b) and CARLA from Hallyburton and Pajic (2023) to begin construction of the first neuro-symbolic datasets in autonomy. See Appendix C.4 for details.

7. Conclusion and Future Research Directions

To address the lack of security guarantees in existing AI models that operate as simple pattern-matchers, we proposed a paradigm shift toward neuro-symbolic perception in safety-critical domains, integrating logical reasoning and commonsense knowledge with deep neural networks for tasks including object detection. By leveraging SGG and foundation models for structured environment understanding, our approach bridges the gap between low-level sensor perception and high-level reasoning. Through feasibility studies in physics-based simulators and real-world datasets, we demonstrated that SGG enhances resilience, interpretability, and security in AI-driven autonomy, paving the way for more trustworthy and robust perception in safety-critical applications.

Future Research Directions

Advancing neuro-symbolic sensor fusion requires substantial investment. The following are key areas identified to drive the maturation of this technology toward deployment in real systems.

Temporal Graph Integrity. Evaluating the longitudinal consistency of graphs ensures that nodes and their relationships evolve consistently with physics. Inexplicable temporal discontinuities will draw integrity scrutiny. Algorithms in this area can draw insights from object tracking.

Knowledge Graphs. This work proposed per-sensor integrity that integrates KGEs with SGG.
While our case study focused on multi-sensor integrity, future research should develop and implement graph consistency evaluations coupled with KGEs.

Multi-Sensor Integrity. We used a brute-force evaluation of all subgraphs to detect inconsistencies in multi-sensor reasoning. Unfortunately, uncertain or incomplete graphs due to noisy data can yield inaccurate conclusions. Future effort should be spent designing inference algorithms that are robust to noise and capable of probabilistic reasoning over uncertain graphs.

Specialized SGG. Our presented case study relied on foundation models, which are computationally expensive, memory-intensive, prone to latency, and unbounded in output space. Future efforts will train specialized SGG models with first-of-a-kind autonomy-related neuro-symbolic datasets.

LiDAR-Based SGG. The current LiDAR-based neuro-symbolic inference pipeline is serialized, first detecting objects before building scene graphs with geometric functions. To ensure consistency with image-based inference and to advance neuro-symbolic capabilities, future research will develop models that directly ingest LiDAR data and jointly perform object detection and SGG.

Acknowledgments

This work is sponsored in part by the ONR under agreement N00014-23-1-2206, AFOSR under the award number FA9550-19-1-0169, and by the NSF under NAIAD Award 2332744 as well as the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant CNS-2112562.

References

Luciano Baresi and Damian A Tamburri. Architecting artificial intelligence for autonomous cars: The openpilot framework. In European Conference on Software Architecture, pages 189–204. Springer, 2023.

Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478,
2021.

Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11621–11631, 2020.

Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, and Z Morley Mao. Adversarial sensor attack on lidar-based perception in autonomous driving. In Proceedings of the 2019 ACM SIGSAC conference on computer and communications security, pages 2267–2281, 2019.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European conference on computer vision, pages 213–229. Springer, 2020.

Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM workshop on artificial intelligence and security, pages 3–14, 2017.

Xiaojun Chen, Shengbin Jia, and Yang Xiang. A review: Knowledge reasoning over knowledge graph. Expert systems with applications, 141:112948, 2020.

Paolo Emmanuel Ilario Dimasi. Scene Graph Generation in Autonomous Driving: a Neuro-symbolic approach. PhD thesis, Politecnico di Torino, 2023.

Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator. In Conference on robot learning, pages 1–16. PMLR, 2017.

Hugh Durrant-Whyte and Thomas C Henderson. Multisensor data fusion. Springer handbook of robotics, pages 867–896, 2016.

Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1625–1634, 2018.
Shuaipeng Fei, Muhammad Adeel Hassan, Yonggui Xiao, Xin Su, Zhen Chen, Qian Cheng, Fuyi Duan, Riqiang Chen, and Yuntao Ma. UAV-based multi-sensor data fusion and machine learning algorithm for yield prediction in wheat. Precision agriculture, 24(1):187–212, 2023.

Samuel G Finlayson, John D Bowers, Joichi Ito, Jonathan L Zittrain, Andrew L Beam, and Isaac S Kohane. Adversarial attacks on medical machine learning. Science, 363(6433):1287–1289, 2019.

Leilani H Gilpin and Filip Ilievski. Neuro-symbolic reasoning in the traffic domain. J AI Res, 15(3):123–145, 2021.

Eleonora Giunchiglia, Mihaela Cătălina Stoian, Salman Khan, Fabio Cuzzolin, and Thomas Lukasiewicz. Road-r: the autonomous driving dataset with logical requirements. Machine Learning, 112(9):3261–3291, 2023.

R Spencer Hallyburton and Miroslav Pajic. Datasets, models, and algorithms for multi-sensor, multi-agent autonomy using avstack. arXiv preprint arXiv:2312.04970, 2023.

R Spencer Hallyburton, Yupei Liu, Yulong Cao, Z Morley Mao, and Miroslav Pajic. Security analysis of camera-lidar fusion against black-box attacks on autonomous vehicles. In 31st USENIX Security Symposium (USENIX Security 22), pages 1903–1920, 2022.

R Spencer Hallyburton, Qingzhao Zhang, Z Morley Mao, and Miroslav Pajic. Partial-information, longitudinal cyber attacks on lidar in autonomous vehicles. arXiv preprint arXiv:2303.03470, 2023a.

Robert Spencer Hallyburton, Shucheng Zhang, and Miroslav Pajic. AVstack: An open-source, reconfigurable platform for autonomous vehicle development. In Proceedings of the ACM/IEEE 14th International Conference on Cyber-Physical Systems (with CPS-IoT
Week 2023), pages 209–220, 2023b.

Jinbae Im, JeongYeon Nam, Nokyung Park, Hyungmin Lee, and Seunghyun Park. EGTR: Extracting graph from transformer for scene graph generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24229–24238, 2024.

Jinyuan Jia, Xiaoyu Cao, Binghui Wang, and Neil Zhenqiang Gong. Certified robustness for top-k predictions against adversarial perturbations via randomized smoothing. arXiv preprint arXiv:1912.09899, 2019.

Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David Shamma, Michael Bernstein, and Li Fei-Fei. Image retrieval using scene graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3668–3678, 2015.

Wonjae Kim, Bokyung Son, and Ildoo Kim. ViLT: Vision-and-language transformer without convolution or region supervision. In International conference on machine learning, pages 5583–5594. PMLR, 2021.

Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. PointPillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12697–12705, 2019.

Rongjie Li, Songyang Zhang, and Xuming He. SGTR: End-to-end scene graph generation with transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19486–19496, 2022.

Cewu Lu, Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. Visual relationship detection with language priors. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14, pages 852–869. Springer, 2016.

Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE symposium on security and privacy (SP), pages 582–597. IEEE, 2016.
Jonathan Petit and Steven E Shladover. Potential cyberattacks on automated vehicles. IEEE Transactions on Intelligent transportation systems, 16(2):546–556, 2014.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.

J Redmon. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2016.

Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE transactions on pattern analysis and machine intelligence, 39(6):1137–1149, 2016.

Ali Shafahi, Mahyar Najibi, Zheng Xu, John Dickerson, Larry S Davis, and Tom Goldstein. Universal adversarial training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 5636–5643, 2020.

Shaoshuai Shi, Chaoxu Guo, Li Jiang, Zhe Wang, Jianping Shi, Xiaogang Wang, and Hongsheng Li. PV-RCNN: Point-voxel feature set abstraction for 3D object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10529–10538, 2020.

Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, and Hanwang Zhang. Unbiased scene graph generation from biased training. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3716–3725, 2020.

James Tu, Mengye Ren, Sivabalan
Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, and Raquel Urtasun. Physically realizable adversarial examples for lidar object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13716–13725, 2020.

Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. Neural motifs: Scene graph parsing with global context. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5831–5840, 2018.

Appendix A. Data Fusion

One approach to multi-sensor data fusion in autonomy is to obtain detected objects from each sensor in parallel, perform an assignment between the objects detected in pairs of sensors, and fuse the detection data for each assignment set in a state estimator (e.g., Kalman filter). We briefly review the general form of data association (the "assignment problem") as it applies to multi-sensor data fusion.

We denote detected objects from perception as $D_{2D} \leftarrow \mathrm{percep}(\mathrm{img})$ and $D_{3D} \leftarrow \mathrm{percep}(\mathrm{pc})$, where $\mathrm{percep}$ are algorithms that take in sensor data and return bounding boxes around objects. Given two sets $S_1, S_2$, as well as a weighting function $C : S_1 \times S_2 \to \mathbb{R}$, the assignment problem finds a bijection $f : S_1 \to S_2$ such that the cost function

$$\sum_{s \in S_1} C(s, f(s))$$

is minimized. Even if the weighting function is nonlinear, the problem is viewed as linear because the cost is a linear sum. In the case of 2D (e.g., image) and 3D (e.g., LiDAR, radar) detections, the sets are $S_1 := D_{2D}$ and $S_2 := D_{3D}$. With asymmetric sensor resolution (i.e., 2D/3D), most often $C$ is the intersection over union (IoU) of the 2D/3D bounding boxes in the image plane, i.e.,

$$C(d^{2D}_i, d^{3D}_j) = \mathrm{IoU}\left(d^{2D}_i,\ \mathrm{project}(d^{3D}_j)\right),$$

where $\mathrm{project}$ is the operation projecting a 3D box to the 2D image plane and $d^{2D}_i, d^{3D}_j$ are individual detections from each set.
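A self-contained sketch of this formulation is below (hypothetical (x1, y1, x2, y2) box format; it brute-forces the bijection for clarity, whereas deployed systems apply a Hungarian or Jonker-Volgenant solver to the cost matrix):

```python
from itertools import permutations

def iou_2d(a, b):
    """IoU of two axis-aligned 2D boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def assign(d2d, d3d_projected):
    """Maximize total IoU between 2D detections and projected 3D detections
    (equivalently, minimize the summed cost -IoU over all bijections)."""
    best, best_score = None, -1.0
    for perm in permutations(range(len(d3d_projected))):
        score = sum(iou_2d(d2d[i], d3d_projected[j]) for i, j in enumerate(perm))
        if score > best_score:
            best, best_score = list(enumerate(perm)), score
    return best

# Two camera boxes and two LiDAR boxes already projected to the image plane.
d2d = [(0, 0, 10, 10), (20, 0, 30, 10)]
d3d_proj = [(21, 1, 31, 11), (1, 1, 11, 11)]
pairs = assign(d2d, d3d_proj)  # camera box 0 pairs with projected box 1, and vice versa
```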
In practice, the weight function is used to construct a cost matrix enumerating over all pairs of detected objects in 2D and 3D as $A[i, j] \leftarrow C(d^{2D}_i, d^{3D}_j)$, and a linear sum optimizer is then applied to $A$ to yield the optimal bipartite solution to the assignment problem. Candidate assignment algorithms include the Hungarian and Jonker-Volgenant varieties. A threshold is often used so that low-affinity pairs (e.g., small overlap) are not accepted.

Appendix B. Adversary Framework and Optimal Frustum Attack

While prior works considered $L_p$-norm perturbations, we instead consider a general attacker objective. This objective allows modeling physically-realizable attacks such as the introduction of a false object while abstracting the implementation details (e.g., spoofing vs. cyber-attack vs. backdoor).

B.1. Attacker Knowledge & Capability

We consider a generic adversary that has knowledge of existing objects in a scene. The attacker is also able to manipulate any component of existing bounding box detections. A 2D bounding box is a tuple of location $(u, v)$ and box size $(h, w)$. A 3D bounding box is a tuple of position $(x, y, z)$, box size $(h, w, l)$, and orientation $\theta$. Attacks can be realized by methods that include sensor spoofing attacks formalized in Cao et al. (2019), physical adversarial objects such as from Tu et al. (2020), and cyber-based attacks that exploit pipeline
vulnerabilities including Trojans in Hallyburton et al. (2023a); Petit and Shladover (2014). For example, frustum-type attacks lead to translations of existing objects as illustrated in Hallyburton et al. (2022).

B.2. Attacker Goal: Optimal Frustum Attack

One particular attack objective is to move the 3D bounding box detections as far as possible from their original locations while retaining the same assignment pairs as in the unattacked case, so as to remain stealthy against any uncompromised 2D data, e.g., image-based detections.

B.3. Practical Constraints

Although an attacker is able to manipulate object detections, certain sensible guidelines must be in place to prevent attacks from being easily detectable. These include:

• Box volume. A manipulated object's bounding box volume is to be bounded within $[V_{min}, V_{max}]$, with bounds set from plausible real-world scenarios; e.g., a semi-truck volume is 150 m³.

• Box dimensions. Any individual dimension of a box is to be bounded consistent with observed data between $[(h_{min}, w_{min}, l_{min}), (h_{max}, w_{max}, l_{max})]$.

• Vertical position. When tracking ground vehicles, the objects of interest must be on the ground. Thus the attacker should not significantly manipulate the vertical position.

• Orientation. Ground vehicles are constrained to be coplanar with the ground. Thus, only the yaw angle, $\theta$, is to be manipulated, while pitch and roll are fixed.

B.4. Optimizing the Frustum Attack

Many deployed systems combine dense 2D image data with sparse 3D point clouds. Prior work Hallyburton et al. (2023a) demonstrated that attacks on such 2D-3D fusion are particularly devastating because intelligent attacks on 3D data can hide in a stealthy null space in the unattacked 2D data.
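The null space has a simple geometric explanation: under a pinhole camera, a point scaled along its viewing ray projects to the same pixel, so a 3D translation along the ray is invisible in the image. A minimal numeric sketch (illustrative coordinates, unit focal length assumed):

```python
import math

def project(point, f=1.0):
    """Pinhole projection with focal length f: (X, Y, Z) -> (f*X/Z, f*Y/Z)."""
    x, y, z = point
    return (f * x / z, f * y / z)

# Original van center in camera coordinates (meters), and an attacked copy
# translated along its viewing ray (all coordinates scaled by the same factor).
van = (2.0, 0.5, 10.0)
attacked = tuple(1.8 * c for c in van)  # pushed 8 m farther from the camera

# Both project to (numerically) the same pixel, so checks on 2D image data
# alone cannot reveal the 3D translation.
same_pixel = all(math.isclose(a, b) for a, b in zip(project(van), project(attacked)))
```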
A powerful attacker can execute an optimal frustum attack as the solution to

$$\bar{D}_{3D} = \operatorname*{argmax}_{D_{3D}} \sum_{p_j \in D_{3D}} \left\lVert \bar{p}^{3D}_j - p^{3D}_j \right\rVert \quad \text{s.t.} \quad \mathrm{IoU}\left(d^{2D}_i, \mathrm{project}(\bar{d}^{3D}_j)\right) \ge \zeta_{min} \ \ \text{and} \ \ V_{min} \le \bar{h}_j \times \bar{l}_j \times \bar{w}_j \le V_{max}, \tag{1}$$

where $p_j$ is the position of detection $d_j$; $\bar{p}_j$ and $\bar{d}_j$ are the attacker-manipulated position/detection; $\zeta_{min}$ is a minimum intersection over union (IoU) threshold set to maintain consistency between the attacked 3D detections and the unaltered 2D data (for stealthiness); and $\mathrm{project}$ is the operation projecting 3D data to the 2D image plane.

Appendix C. Scene Graph Generation

C.1. Shortcoming of CNNs for SGG

Scene graph generation (SGG) requires identifying objects, attributes, and relationships within an image. While CNNs excel at visual feature extraction, they struggle with relational reasoning, spatial precision, and contextual ambiguity. CNNs capture local features but fail to model global relationships critical for SGG, necessitating additional mechanisms like graph neural networks, attention, or transformers. Pooling operations further degrade spatial resolution, making precise object relationships difficult to define. Moreover, CNNs lack the ability to resolve visual ambiguities, limiting their effectiveness in complex relational reasoning. These limitations highlight the need for alternative architectures to enhance SGG performance Johnson et al. (2015); Lu et al. (2016).

C.2. Modern Approaches for SGG

While CNNs serve as a foundational step for object detection and feature extraction, effective scene graph generation (SGG) necessitates additional relational modeling techniques. Graph Neural Networks (GNNs) and Transformers have emerged as leading approaches due to their ability to capture relational structures and
contextual dependencies Zellers et al. (2018); Tang et al. (2020). GNNs represent objects as graph nodes and relationships as edges, enabling message passing that propagates semantic and spatial information throughout the graph for enhanced relational reasoning. Transformers utilize self-attention mechanisms to dynamically model global relationships and contextual interactions without requiring explicit graph construction. By integrating GNNs' structured relational representation with Transformers' flexible contextual modeling, recent methods achieve superior accuracy and generalization in SGG Im et al. (2024); Li et al. (2022). This hybrid approach effectively captures complex object interactions that CNN-based architectures struggle to model, advancing the state-of-the-art in scene graph generation.

C.3. Hand-Coded Rule-Based Scene Graphs

To build scene graphs from 3D bounding box detections, we define spatially-oriented relation functions. Ultimately, these functions aid in building datasets and training SGG inference algorithms. Each of the functions ingests two objects, $O_1$ (subject) and $O_2$ (object), such that a function, e.g., front_of($O_1$, $O_2$), would test that "$O_1$ is in front of $O_2$". "Symmetric" relations are those for which there exists a complement relation such that, e.g., front_of($O_1$, $O_2$) ⟺ behind($O_2$, $O_1$). Note that the complement does not strictly need to be included in the graph because it is implied by the former. Therefore, we call a graph "reduced" if it contains only one of any complement pair. The implementations are in the source code that will be released online.

• front_of (complement: behind)
• left_of (complement: right_of)
• occluding (complement: occluded_by)
• following (complement: followed_by)
• far_from (complement: self)
• close_to (complement: self)
• next_to (complement: self)

C.4. Building Neuro-Symbolic Datasets

We utilized the CARLA Dosovitskiy et al. (2017) and nuScenes Caesar et al.
(2020) datasets, along with the geometric functions above, to construct the first neuro-symbolic dataset for scene graph generation. CARLA, a high-fidelity autonomous driving simulator, provided synthetic yet realistic urban driving scenarios, while nuScenes offered large-scale real-world driving data with detailed 3D annotations. By leveraging these datasets, we extracted object-centric representations, capturing spatial, semantic, and kinematic properties of dynamic and static elements within the scene. Using geometric functions, we computed precise spatial relationships such as distances, occlusions, and patterns between objects, ensuring an explicit and structured encoding of scene interactions. This integration of real-world and simulated data, combined with formal geometric reasoning, enabled the creation of a neuro-symbolic dataset that bridges visual perception with structured relational reasoning, setting a foundation for robust scene graph generation for future research in autonomy.

Figure 7: Case study of using scene graph generation to secure multi-sensor fusion from attacks on sensing. Analysis procedure follows that of Figure 6.
Figure 8: Case study of using scene graph generation to secure multi-sensor fusion from attacks on sensing. Analysis procedure follows that of Figure 6.

Appendix D. Supplemental Case Studies

A case study walking through scene graph generation and integrity reasoning from the CARLA simulator was presented in Figure 6. We illustrate a case where an attacker translates the detection of a pedestrian in a scene from the nuScenes dataset in Figure 7. Similarly, evaluating the graphs through an integrity function illuminates inconsistencies between the semantics in the perception results. Finally, Figure 8 describes another scene from nuScenes where a van is adversarially translated away from the ego vehicle. Scene graph generation and graph-based integrity are able to detect inconsistencies in the inference results.
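The integrity function used throughout these case studies can be sketched as a comparison of (subject, relation, object) triples across the per-sensor graphs. This is a hypothetical minimal version of the brute-force subgraph check, not the paper's released code:

```python
# Complementary relation pairs: asserting one relation for (s, o) is
# equivalent to asserting its complement for (o, s).
COMPLEMENT = {"front_of": "behind", "behind": "front_of",
              "left_of": "right_of", "right_of": "left_of"}

def integrity(camera_graph, lidar_graph):
    """Return camera triples that the LiDAR graph directly contradicts."""
    lidar = set(lidar_graph)
    conflicts = []
    for s, rel, o in camera_graph:
        flipped = COMPLEMENT.get(rel)
        # Contradiction: LiDAR asserts the complement for the same ordered pair.
        if flipped and (s, flipped, o) in lidar:
            conflicts.append((s, rel, o))
    return conflicts

camera = [("van", "front_of", "truck")]
lidar_attacked = [("van", "behind", "truck")]   # van translated away from ego
conflicts = integrity(camera, lidar_attacked)   # flags the van-truck relation
```

Any non-empty result marks the cross-sensor inconsistency that reveals an otherwise stealthy attack; a consistent LiDAR graph returns no conflicts.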
MME-Reasoning: A Comprehensive Benchmark for Logical Reasoning in MLLMs

Jiakang Yuan1,3,∗, Tianshuo Peng2,3,∗, Yilei Jiang2, Yiting Lu4, Renrui Zhang2, Kaituo Feng2, Chaoyou Fu5, Tao Chen1,†, Lei Bai3, Bo Zhang3,†, Xiangyu Yue2,3
1Fudan University 2MMLab, The Chinese University of Hong Kong 3Shanghai AI Laboratory 4University of Science and Technology of China 5Nanjing University

https://alpha-innovator.github.io/mmereasoning.github.io/
https://github.com/Alpha-Innovator/MME-Reasoning
https://huggingface.co/datasets/U4R/MME-Reasoning

Abstract

Logical reasoning is a fundamental aspect of human intelligence and an essential capability for multimodal large language models (MLLMs). Despite the significant advancement in multimodal reasoning, existing benchmarks fail to comprehensively evaluate their reasoning abilities due to the lack of explicit categorization for logical reasoning types and an unclear understanding of reasoning. To address these issues, we introduce MME-Reasoning, a comprehensive benchmark designed to evaluate the reasoning ability of MLLMs, which covers all three types of reasoning (i.e., inductive, deductive, and abductive) in its questions. We carefully curate the data to ensure that each question effectively evaluates reasoning ability rather than perceptual skills or knowledge breadth, and extend the evaluation protocols to cover the evaluation of diverse questions. Our evaluation reveals substantial limitations of state-of-the-art MLLMs when subjected to holistic assessments of logical reasoning capabilities. Even the most advanced MLLMs show limited performance in comprehensive logical reasoning, with notable performance imbalances across reasoning types. In addition, we conducted an in-depth analysis of approaches such as "thinking mode" and Rule-based RL, which are commonly believed to enhance reasoning abilities.
These findings highlight the critical limitations and performance imbalances of current MLLMs in diverse logical reasoning scenarios, providing comprehensive and systematic insights into the understanding and evaluation of reasoning capabilities.

1 Introduction

Logical reasoning (Liu et al., 2025a), a fundamental cognitive process of analyzing premises and evidence to reach valid conclusions, serves as the cornerstone of human intelligence. Multimodal reasoning (Jaech et al., 2024) enables humans to integrate information from different modalities, such as visual and text, which is essential for tackling complex tasks. Recently, with the emergence of reasoning large language models (LLMs) (Dubey et al., 2024; Yang et al., 2024a) such as DeepSeek-R1 (DeepSeek-AI, 2025), injecting reasoning capability into multimodal large language models (MLLMs) (OpenAI, 2024; Qwen Team, 2025a; Li et al., 2024) has begun to be explored (Peng et al., 2025b; Zhang et al., 2025a; Huang et al., 2025).

∗Equal contribution, †Corresponding authors.

arXiv:2505.21327v1 [cs.AI] 27 May 2025

Figure 1: Performance comparison between thinking and chat models on MME-Reasoning.

Despite the significant progress in reasoning MLLMs, a comprehensive evaluation of their capabilities still remains an open challenge. Therefore, it is particularly important to establish a fair and robust evaluation benchmark for assessing the reasoning capabilities of MLLMs and further accelerate the development of this field.

Currently, most benchmarks (Fu et al., 2023; Wang et al., 2024a; Lu et al., 2023; Yue et al., 2024a;b; Gong et al., 2024; He et al., 2024) designed for multimodal reasoning primarily focus on knowledge-driven tasks. For example, MathVista (Lu et al., 2023) and MathVerse (Zhang et al., 2024) provide comprehensive evaluations of MLLMs' mathematical reasoning abilities. OlympiadBench (He et al.,
https://arxiv.org/abs/2505.21327v1
2024) and EMMA (Hao et al., 2025) expand the scope to include additional subjects, such as physics and chemistry. Apart from knowledge-driven tasks, some works (Song et al., 2025; Chia et al., 2024; Zhang et al., 2025b) have begun to decouple knowledge from logical reasoning, aiming to assess the reasoning abilities of MLLMs independent of specific domain knowledge. For instance, SciVerse (Guo et al., 2025b) and VisualPuzzles (Song et al., 2025) focus on reasoning-focused, knowledge-light tasks.

Despite recent advances, existing benchmarks still suffer from several problems as outlined below. First, lacking explicit categorization of reasoning and insufficient coverage of reasoning types. In logic, reasoning is typically classified into three types: abduction, deduction, and induction (Peirce, 2014). Most existing benchmarks primarily concentrate on evaluating MLLMs' inductive and deductive reasoning ability. For example, most of the questions in MathVerse (Lu et al., 2023) belong to deductive reasoning, which uses rules and premises to derive conclusions. PuzzleVQA (Chia et al., 2024) only contains questions of inductive reasoning, which learns rules based on premises and conclusions. However, abductive reasoning ability (i.e., exploring premises to explain a conclusion based on the conclusion and rules) is rarely evaluated. Second, the concept of reasoning is not clear enough, which is reflected in confusing perception with reasoning or equating reasoning with the complexity of the required knowledge. For example, MathVista (Lu et al., 2023) contains many questions that can be answered through visual perception, while OlympiadBench (He et al., 2024) includes questions that require advanced domain knowledge, which the model may not have access to. This may lead to an inaccurate evaluation of MLLMs' reasoning ability.

Table 1: Response token length on different datasets.

Model                 MathVista  MathVerse  MME-Reasoning
Qwen2.5-VL-7B             209.5      207.6          442.8
GPT-4o                    162.6      157.3          328.0
Claude-3.7-Sonnet-T       519.4      563.2          979.2

To address these issues, we introduce MME-Reasoning, a comprehensive benchmark specifically designed to evaluate the reasoning capability of MLLMs. MME-Reasoning consists of 1,188 carefully curated questions that systematically cover the types of logical reasoning (i.e., inductive, deductive, and abductive), while spanning a range of difficulty levels, as illustrated in Fig. 2.

Figure 2: Example of questions in MME-Reasoning, which covers comprehensive reasoning types (inductive, deductive, and abductive panels with example questions).

Besides, we identify 5 key abilities related to multimodal reasoning, including calculation, planning and exploring, spatial-temporal, pattern analysis, and causal chaining analysis, and annotate the type of ability assessed by each question.
To ensure a true evaluation of reasoning ability, MME-Reasoning eliminates questions that can be answered purely through perception or that require complex domain knowledge, thereby focusing on the core reasoning skills of the model. We report the average response lengths of three representative models across different datasets in Tab. 1. Results show that responses on MME-Reasoning are significantly longer than those on previous reasoning benchmarks, indicating its challenging and rigorous demands on model reasoning. Furthermore, MME-Reasoning incorporates a variety of evaluation methods, including multiple-choice, free-form, and rule-based (e.g., Sudoku puzzles) questions. Employing
MME-Reasoning multiple evaluation methods enables a wider variety of question types, thereby facilitating a more comprehensive evaluation of models’ capabilities. Experiments were conducted on state-of-the-art MLLMs, covering Chat and Thinking types of both open-source and closed-source, as presented in Fig. 1. Evaluations with MME-Reasoning reveal these key findings: •MLLMs exhibit significant limitations and pronounced imbalances in reasoning ca- pabilities. Even the most advanced MLLMs achieve only limited results under holistic logical reasoning evaluation, with Gemini-Pro-2.5-Thinking scoring only 60.19%, followed by Seed1.5-VL (59.85) and o4-mini (57.49%). These results indicate that MME-Reasoning, through its comprehensive evaluation of all the logical reasoning types, establishes a systematic and challenging benchmark for multimodal reasoning. •Abductive reasoning remains a major bottleneck for current MLLMs. While most models demonstrate competent deductive reasoning, their abductive reasoning lags significantly. Closed-source models exhibit an average gap of 5.38 points between deductive and abduc- tive tasks, which further widens to 9.81 among open-source models, making abductive reasoning a key bottleneck. Since it underpins many real-world tasks, addressing this gap is crucial for improving overall reasoning. •Reasoning length scales with task difficulty, benefiting performance but accompanied by marginal effects and decreasing token efficiency. Thinking Models exhibit longer reasoning chains, particularly on more difficult questions, demonstrating adaptive inference budgeting and enhanced depth of reasoning. A positive correlation between average token count (ATC) and accuracy supports the effectiveness of extended outputs, especially in complex tasks. However, this performance gain plateaus beyond a certain length, revealing diminishing returns. 
2 Related Works

2.1 Multimodal Reasoning

Chain-of-thought (CoT) reasoning (Wei et al., 2022) has emerged as a key paradigm for enhancing the reasoning capability of LLMs. By generating intermediate steps before the final answer, CoT enables more transparent and accurate decision-making, especially in complex tasks such as arithmetic, logical deduction, and commonsense reasoning. Inspired by its success in text-only settings, CoT has recently been extended to MLLMs, giving rise to multimodal chain-of-thought (MCoT) reasoning (Jiang et al., 2024; Zhang et al., 2023; Chen et al., 2023; Peng et al., 2024; Lu et al., 2025; Xia et al., 2024a). Early approaches such as Multimodal-CoT (Zhang et al., 2023) and IPVR (Chen et al., 2023) demonstrate that generating intermediate reasoning steps significantly improves model performance in visual question answering. Other methods such as HoT (Yao et al., 2023), BDoG (Zheng et al., 2024), and VisualSketchpad (Hu et al., 2024) introduce graph structures, debating agents, and visual intermediate states to further enhance interpretability and reasoning depth. More recently, following the success of DeepSeek-R1, the Group Relative Policy Optimization (GRPO) algorithm has gained traction in the development of multimodal models. Methods such as MM-EUREKA (Meng et al., 2025), Vt-R1 (Zhou et al., 2025), LMM-R1 (Yingzhe et al., 2025), and R1-V (Chen et al., 2025) adapt GRPO to solve mathematical geometry tasks, demonstrating promising reflective reasoning capabilities. Other works, including VLM-R1 (Shen et al., 2025), Visual-RFT (Liu et al., 2025b), and Seg-Zero (Yuqi et al., 2025), apply GRPO to enhance
Figure 3: The overall construction process of MME-Reasoning.

visual competencies such as grounding, object detection, and classification. The algorithm has also been extended to video and audio modalities through models such as Video-R1 (Feng et al., 2025) and R1-Omni (Zhao et al., 2025).

2.2 Multimodal Reasoning Benchmarks

Recent benchmarks have advanced the evaluation of multimodal reasoning, particularly in visual-language settings. Early works such as CLEVR (Johnson et al., 2016) and GQA (Hudson & Manning, 2019) assess compositional and spatial reasoning, while more recent benchmarks such as MathVista (Lu et al., 2024), PuzzleBench (Zhang et al., 2025b), ChartX (Xia et al., 2024b), and PuzzleVQA (Chia et al., 2024) emphasize symbolic logic or pattern discovery. However, these benchmarks typically focus on narrow subtypes of reasoning, especially inductive logic, and fail to offer a holistic evaluation across deductive, inductive, and abductive paradigms. Furthermore, many existing datasets conflate perception with reasoning. Tasks solvable via recognition or superficial pattern matching are often labeled as reasoning challenges, while high-difficulty benchmarks such as GPQA (Rein et al., 2023), OlympiadBench (He et al., 2024), and MME-CoT (Jiang et al., 2025) overly depend on domain-specific knowledge rather than logical inference. Evaluation protocols are also limited: most rely on multiple-choice formats and lack support for open-ended or rule-based assessment. In contrast, our benchmark provides a fine-grained evaluation of visual reasoning, explicitly covering the three classical reasoning types.
3 The MME-Reasoning Benchmark

We introduce MME-Reasoning, a comprehensive benchmark designed to evaluate the reasoning ability of MLLMs. MME-Reasoning consists of 1,188 questions, including 1,008 newly collected items. MME-Reasoning comprehensively covers three types of reasoning (i.e., inductive, deductive, and abductive) and includes three question types (i.e., multiple-choice, free-form, and rule-based). We further divide MME-Reasoning into three difficulty levels (i.e., easy, medium, and hard). The key statistics and construction pipeline of MME-Reasoning are shown in Tab. 2 and Fig. 3.

Table 2: Statistics of MME-Reasoning.

    Statistics                        Number
    Total                             1188 (100%)
      - Newly-added questions         84.85%
      - Sampled questions             15.15%
    Question Type
      - Multiple-choice questions     58.50%
      - Free-form questions           31.57%
      - Rule-based questions          9.93%
    Image Type
      - Single-image questions        58.50%
      - Multi-image questions         31.57%
    Disciplinary
      - Disciplinary questions        31.48%
      - Non-disciplinary questions    68.52%

Figure 4: Overview of MME-Reasoning. Reasoning types: Deductive (38.64%), Inductive (27.86%), Abductive (33.50%). Capabilities: Causal Chain Analysis (12.12%), Calculation (31.06%), Planning & Exploring (30.47%), Pattern Analysis (36.20%), Spatial & Temporal (23.74%). Difficulty: Easy (30.81%), Medium (39.31%), Hard (29.88%).

3.1 Design Principles of MME-Reasoning

To ensure a comprehensive evaluation of multimodal reasoning and address issues present in previous benchmarks, such as incomplete coverage of reasoning types, unclear definitions of reasoning, and insufficient evaluation methods, MME-Reasoning is guided by the following principles: 1) Comprehensiveness. According to Charles Sanders Peirce's classification of reasoning, deduction, induction, and abduction can be distinguished based on different arrangements of rule, case, and result. Therefore, a comprehensive evaluation of reasoning ability should include all three types of reasoning tasks. 2) Beyond Perception.
Each question should be carefully designed to ensure that the answer is obtained through a reasoning process instead of simple visual recognition. 3) Minimizing
Knowledge Reliance. It is essential to ensure that the questions do not require complex domain knowledge, thereby preventing models from being penalized for the absence of specialized information. In MME-Reasoning, the required domain expertise is limited to K12 level or below. 4) Diverse Evaluation Formats. The benchmark should consist of diverse question types, avoiding incomplete evaluation caused by a narrow range of task types.

3.2 Data Collection and Curation

Data Collection. We begin by collecting questions related to multimodal reasoning from a variety of sources: 1) Textbooks, which provide subject exam questions (e.g., mathematics, physics, chemistry, and biology). To evaluate reasoning ability, the chemistry and biology questions mainly focus on reaction-process inference and genetic-lineage inference. 2) Online resources, books on logic practice, and the Chinese Civil Service Examination (Logic Test), which primarily include IQ test questions, logic games (e.g., Mate-in-one), and other tasks highly related to logical reasoning. 3) Synthetically generated questions. Some visual reasoning problems, such as Number Bridge, Sudoku, and mazes, can be generated based on specific rules. We develop code to produce a wide variety of such logic puzzles, covering different types and a range of difficulty levels. 4) Questions from existing benchmarks. We sample 80 questions from PuzzleVQA (Chia et al., 2024) and 100 questions from MM-IQ (Cai et al., 2025), excluding questions based on shape-size identification, as such questions may not effectively assess the model's reasoning ability. 5) Self-designed questions. We mainly construct questions related to spatial and temporal reasoning. The spatial reasoning
Figure 5: Evaluation of rule-based questions.

questions involve tasks such as determining relative spatial relationships and navigation, with the question design methodology inspired by VSI-Bench (Yang et al., 2024b). For temporal reasoning, the questions mainly focus on sequence judgment. We sample frames from videos in YouCook2 (Zhou et al., 2018) and Video-MME (Fu et al., 2024) as the sources of images. Note that for questions with well-defined rules, such as Number Bridge puzzles, we include the corresponding rules as part of each question. The composition of MME-Reasoning is shown in Fig. 4; please refer to the Appendix for
more details about the question sources and types.

Data Curation. We initially collect around 4k questions from the various sources mentioned above. Following the design principles of MME-Reasoning, we conduct a careful manual curation process to ensure the quality of the benchmark. Specifically, we exclude questions that depend solely on visual recognition, require complex domain-specific knowledge, or are too easy to evaluate reasoning ability. This curation process ensures that the remaining questions are well aligned with our goal of evaluating visual reasoning ability, rather than perceptual skills or the breadth of specialized knowledge. For questions with multiple possible answers, we first try to convert them into rule-based (introduced in Sec. 3.3) or multiple-choice questions; otherwise, we discard them. Additionally, we remove questions that place excessive demands on instruction-following ability. Finally, to comprehensively evaluate multimodal reasoning ability, we balance the distribution of questions across the three reasoning types. This prevents the benchmark from being overly biased towards any single reasoning type. Through this data curation process, we filter 1,008 questions from the initially collected pool.

Metadata Annotation. Furthermore, we annotate questions in MME-Reasoning with information including question type (i.e., multiple-choice, free-form, and rule-based), difficulty (i.e., easy, medium, hard), capability (i.e., pattern analysis, planning and exploring, spatial and temporal, calculation, causal chain analysis), and reasoning type (i.e., deductive, inductive, and abductive). For the specific rules used to annotate metadata, please refer to our appendix.

3.3 Evaluation Protocols

Following MathVista (Lu et al., 2023), the evaluation consists of two steps: extracting answers and judging answers.
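For a rule-based question such as a Hashi (Number Bridge) puzzle, extraction produces a JSON list of bridges (the format shown in Fig. 5), and judging is a deterministic script over that list. The sketch below uses a hypothetical helper name, `judge_hashi`; the paper's own `judge_function_hashi` is only shown schematically in Fig. 5 and also checks bridge crossings and island connectivity, which are omitted here for brevity:

```python
from collections import defaultdict

def judge_hashi(bridges, islands):
    """Judge an extracted Hashi answer.

    bridges: JSON-style list like [{"start": "a1", "end": "e1", "number": 1}, ...]
    islands: maps each island coordinate (e.g. "e3") to its required bridge count.
    """
    incident = defaultdict(int)
    for b in bridges:
        # Bridges must run horizontally or vertically:
        # same column letter or same row number.
        if b["start"][0] != b["end"][0] and b["start"][1:] != b["end"][1:]:
            return False
        incident[b["start"]] += b["number"]
        incident[b["end"]] += b["number"]
    # The bridges incident to every island must match its printed number.
    return all(incident[i] == n for i, n in islands.items())

# Extracted answer for a tiny 3-island instance (hypothetical example).
extracted = [
    {"start": "a1", "end": "e1", "number": 1},
    {"start": "e1", "end": "e3", "number": 1},
]
print(judge_hashi(extracted, {"a1": 1, "e1": 2, "e3": 1}))  # True
```

A full judge would additionally reject crossing bridges and disconnected solutions, as the `is_valid_hashi` check in Fig. 5 suggests.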
For different types of questions (i.e., multiple-choice, free-form, and rule-based), we design specific prompts for GPT to extract answers. These prompts are composed of extraction rules and examples similar to those of MathVista (Lu et al., 2023). For multiple-choice questions, we match the extracted answers against the reference answers. For free-form questions, we use GPT to judge the consistency between the extracted answers and the reference answers, following MathVerse (Zhang et al., 2024). For rule-based questions, we first use GPT to extract answers and convert them into an intermediate format, which is then judged using specific scripts. For example, in a Number Bridge problem, we first use GPT to extract the start and end points of each bridge, then convert the answers into a specific matrix format, and finally determine correctness based on predefined rules, as illustrated in Fig. 5.

4 Experiments

4.1 Experimental Settings

We conduct extensive evaluations on state-of-the-art MLLMs, including:

Thinking Models. We first evaluate several thinking MLLMs that focus on improving multimodal reasoning. These can be divided into closed-source models, including (1) GPT-o1 (Jaech et al., 2024) and o4-mini (OpenAI, 2025); (2) Gemini-2.5-Flash-Thinking and Gemini-2.5-Pro-Thinking (Gemini et al., 2023); (3) Claude-3.7-Sonnet-Thinking and Claude-4-Sonnet-Thinking (Anthropic, 2022); (4) Seed1.5-VL-Thinking (Guo et al., 2025a); and open-source models, including (1) QvQ-72B-Preview (Team, 2024); (2) Kimi-VL-A3B-Thinking (Team et al., 2025b); (3) LlamaV-o1 (Thawakar et al., 2025); (4) Virgo-72B (Du et al., 2025).

Chat Models. We also evaluate SoTA chat models as follows. Closed-source models: (1) GPT-4o (OpenAI,
2024); (2) Claude-3.7-Sonnet (Anthropic, 2022); (3) Kimi-latest (Team et al., 2025a); (4) Seed1.5-VL (Guo et al., 2025a). Open-source models: (1) Qwen-2.5-VL (7B, 32B, 72B) (Qwen Team, 2025a); (2) InternVL-3 (8B, 38B, 78B) (Zhu et al., 2025); (3) LLaVA-Onevision-72B (Li et al., 2024); (4) Molmo (7B-O, 7B-D, 72B) (Deitke et al., 2024); (5) Kimi-VL-A3B-Instruct (Team et al., 2025b).

Rule-based RL Models. Rule-based reinforcement learning (RL) has been shown to be a highly promising strategy for eliciting reasoning paradigms in models. Therefore, we further evaluate MLLMs trained using rule-based RL, including: (1) R1-VL (Zhang et al., 2025a), (2) R1-Onevision (Yang et al., 2025), (3) Vision-R1 (Huang et al., 2025), (4) MM-Eureka (7B, 32B) (Meng et al., 2025), (5) VL-Rethinker (7B, 72B) (Wang et al., 2025).

We use GPT-4o-mini to extract answers from model responses. Due to rate limits, we sample 302 questions with the same distribution to construct a mini-set for o1's evaluation; all other models are evaluated on the entire benchmark.

4.2 Main Results

Tab. 3 shows the performance comparison of different MLLMs and prompting strategies.

MME-Reasoning poses significant challenges for vision-language reasoning. The best-performing model, Gemini-2.5-Pro-Thinking, achieved an average score of 60.2%. The latest MLLM, Seed1.5-VL, achieved a comprehensive score of 59.9. Representative reasoning models o4-mini and o1 obtained scores of 57.5 and 45.7, respectively. Qwen2.5-VL and Claude-3.7-Sonnet achieved scores of 35.9 and 57.2 on OlympiadBench, yet reach only 34.1 and 33.3, respectively, on MME-Reasoning. These results indicate that the benchmark sets stringent standards for evaluating models' logical reasoning capabilities by comprehensively assessing three distinct reasoning types.

Prominent bias in logical reasoning performance within MLLMs. In almost all cases, models exhibit dominant deductive reasoning performance, while abductive reasoning is considerably weaker.
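Reading the DED. and ABD. columns of Table 3 makes the imbalance concrete. The small sketch below recomputes the deductive-minus-abductive gap for a few representative rows; note the paper's 5.38 and 9.81 figures are averages over all closed- and open-source models, not just these four:

```python
# DED. and ABD. scores for a few rows of Table 3.
scores = {
    "Gemini-2.5-Pro-T": {"DED": 64.0, "ABD": 62.8},  # closed-source
    "Seed1.5-VL-T":     {"DED": 64.5, "ABD": 60.8},  # closed-source
    "Qwen2.5-VL-72B":   {"DED": 39.0, "ABD": 29.9},  # open-source
    "InternVL3-78B":    {"DED": 35.1, "ABD": 27.1},  # open-source
}
for model, s in scores.items():
    # Positive gap = deductive reasoning stronger than abductive.
    print(f"{model}: DED - ABD = {s['DED'] - s['ABD']:.1f}")
```

Even on these four rows the pattern described in the text is visible: the open-source gaps (9.1 and 8.0) are far larger than the closed-source ones (1.2 and 3.7).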
Closed-source models demonstrate an average deductive advantage of 5.38 over abductive reasoning, which widens to 9.81 among open-source models, making abductive reasoning a significant bottleneck in comprehensive logical reasoning performance. Deductive reasoning maintains a high proportion in the training corpus due to its widespread distribution. Abductive reasoning processes usually involve larger exploration spaces and richer assumptions, hypotheses, and reflections, making its data challenging to scale. However, non-deductive reasoning plays a central role in general reasoning scenarios and many scientific discoveries.

Table 3: Performance comparison of state-of-the-art MLLMs on MME-Reasoning. The top three are highlighted in blue. † indicates the model was evaluated on the mini-set. "T" represents "Thinking". Capability columns: CAL. (Calculation), P&E. (Planning & Exploring), PA. (Pattern Analysis), S&T. (Spatial & Temporal), CCA. (Causal Chain Analysis); reasoning-type columns: DED., IND., ABD.; AVG. is the overall average.

    Model                 CAL.  P&E.  PA.   S&T.  CCA.  DED.  IND.  ABD.  AVG.
    Close-source & Thinking
    Gemini-2.5-Pro-T      68.0  64.4  53.7  52.1  90.3  64.0  51.7  62.8  60.2
    Seed1.5-VL-T          67.2  62.7  56.0  47.2  82.6  64.5  52.3  60.8  59.9
    o4-mini               63.1  58.3  57.2  50.4  59.0  60.6  51.4  59.0  57.5
    o1†                   50.0  38.5  41.5  43.7  52.4  50.8  42.3  42.3  45.7
    Claude-4-Sonnet-T     33.3  35.9  33.0  36.2  47.9  39.4  32.0  35.7  36.1
    Claude-3.7-Sonnet-T   30.4  27.6  32.3  38.3  46.5  34.6  36.2  31.7  34.1
    Gemini-2.5-Flash-T    19.8  21.3  20.9  33.0  38.9  28.1  22.1  24.6  25.2
    Close-source & Chat
    Seed1.5-VL            52.0  42.0  38.4  44.0  72.9  54.9  45.0  41.0  47.5
    GPT-4o                21.4  22.1  30.5  38.6  36.8  29.0  34.7  27.9  30.2
    Claude-3.7-Sonnet     29.0  24.6  32.8  35.5  46.5  35.7  38.7  26.1  33.3
    Kimi-Latest           21.4  17.4  19.8  29.1  41.0  27.7  25.4  19.9  24.4
    Open-source & Thinking
    QVQ-72B-Preview       37.4  27.1  28.8  35.8  57.6  41.6  33.5  29.1  35.2
    Virgo-72B             30.4  22.9  26.1  36.2  47.2  37.7  32.6  24.4  31.8
    VL-Rethinker-72B      33.6  28.4  31.4  37.2  59.7  39.0  36.0  31.9  35.8
    VL-Rethinker-7B       24.7  17.7  23.5  39.4  42.4  34.4  29.9  22.9  29.3
    MM-Eureka-Qwen-32B    23.0  25.7  25.6  36.2  50.7  32.9  30.5  28.1  30.6
    MM-Eureka-Qwen-7B     27.1  19.3  22.3  31.9  50.0  32.7  28.7  22.6  28.2
    R1-VL-7B              16.3  11.6  17.7  30.9  26.4  25.3  21.8  15.8  21.1
    Vision-R1-7B          18.2  18.0  17.9  34.4  36.1  27.4  26.3  18.1  24.0
    R1-Onevision-7B-RL    19.5  12.2  20.0  31.6  27.1  27.7  24.8  14.6  22.5
    Kimi-VL-A3B-T         28.7  16.0  19.5  32.3  35.4  33.3  25.1  18.1  25.9
    Open-source & Chat
    Qwen2.5-VL-72B        31.7  25.1  27.2  37.9  53.5  39.0  32.3  29.9  34.1
    Qwen2.5-VL-32B        32.2  26.8  24.4  39.0  52.1  40.5  27.5  29.6  33.2
    Qwen2.5-VL-7B         22.2  18.2  21.9  35.1  36.1  31.4  27.5  20.9  26.8
    InternVL3-78B         26.0  24.0  26.5  41.8  50.0  35.1  33.8  27.1  32.1
    InternVL3-38B         23.0  18.5  23.0  38.3  41.7  33.5  29.0  22.1  28.4
    InternVL3-8B          19.5  19.6  22.6  31.6  41.0  28.1  29.9  21.4  26.4
    Molmo-72B             12.5  11.9  14.7  28.7  28.5  23.1  18.4  14.3  18.9
    Molmo-7B-D            11.7   8.6   8.1  27.3  23.6  20.7  10.9  11.1  14.7
    LLaVA-OV-72B          17.1  18.0  23.9  32.3  38.9  27.4  30.5  19.9  25.8
    Kimi-VL-A3B           18.7  11.9  21.4  34.0  27.8  25.9  26.3  17.1  23.1
These findings highlight the necessity for researchers to develop a more comprehensive understanding of models' logical reasoning abilities to facilitate their application in real-world scenarios. Moreover, models' scores under the different reasoning types are typically below 40, indicating that MME-Reasoning provides a promising metric for evaluating reasoning capabilities from multiple perspectives.

Figure 6: Comparison of Difficulty Level and Average Token Count on MME-Reasoning.

Figure 7: Results within different difficulty levels.

Figure 8: Response tokens vs. Performance.

Limited performance in open-ended reasoning scenarios. Models generally demonstrate relative advantages in Causal Chain Analysis but perform poorly on tasks involving Planning & Exploring.
The former advantage may stem from the autoregressive paradigm, which continuously aids models in learning causal dependencies within input sequences. However, it also highlights a critical shortcoming: current state-of-the-art models struggle with planning and exploration in open-ended problem-solving spaces. To advance models in solving difficult practical problems, it is critical to innovate learning paradigms and strategy-generation mechanisms suitable for open scenarios.

Thinking capability directly contributes to enhanced logical reasoning. Models employing a "thinking mode" typically generalize test-time scaling to reasoning scenarios by generating longer chains-of-thought (CoT), reflections, and self-corrections. In most cases, "thinking models" significantly outperform their base versions. QvQ
improved by 1.1 over Qwen2.5-VL, and VL-Rethinker improved by 1.7 over Qwen2.5-VL. This effect is more pronounced among closed-source models: Seed1.5-VL-T outperformed Seed1.5-VL by 12.4, and o1 exceeded GPT-4o by 15.5. Further experiments concerning thinking models are elaborated in subsequent sections.

Figure 9: Case study of a Mate-in-one problem.

Rule-based RL does not always work. Rule-based RL has shown significant potential in activating the "thinking mode" of foundation models, encouraging longer outputs and reflection to tackle hard problems. However, we observe that methods adopting rule-based RL do not consistently outperform their base models. Most models at the 7B scale experienced performance degradation.
This suggests that the potential of rule-based RL remains inadequately realized, failing to extend the advantages demonstrated in LLMs into multimodal domains and possibly reducing generalization. Thus, innovation in training paradigms, rather than merely replicating R1, is urgently needed.

4.3 Fine-Grained Analysis of Reasoning Behavior

Does increasing the length of the reasoning process help? To investigate whether increased output length consistently leads to improved accuracy, we select 10 representative models, including Chat Models (e.g., GPT-4o) and Thinking Models (e.g., o4-mini). In Fig. 8, we present a semi-log plot of average token count (ATC) versus accuracy. The overall trend reveals that models with longer outputs tend to achieve higher scores, indicating the effectiveness of extending the reasoning process to enhance logical reasoning performance. However, as the token count increases, the accuracy gains taper off, suggesting diminishing returns from simply increasing output length. Compared to Thinking Models, Chat Models demonstrate higher token efficiency. These findings highlight the computational cost associated with scaling up inference for improved performance. Balancing reasoning efficiency and model effectiveness remains a challenge for future research.

Is the length of the reasoning process strongly correlated with task difficulty? To examine whether models spontaneously allocate more inference budget to more challenging questions, we analyze representative Thinking Models, such as o4-mini, and Chat Models, such as GPT-4o. We first analyze the accuracy of different models across varying levels of difficulty, as shown in Fig. 7. With increasing difficulty, model performance declines significantly, confirming the validity of MME-Reasoning's difficulty stratification and providing a foundation for
subsequent analyses. Fig. 6 further illustrates the trend of ATC across different reasoning types and difficulty levels. It reveals a consistent pattern: overall, output length increases steadily with rising difficulty. This trend holds across varying output lengths, model categories, and reasoning types. Compared to Chat Models, Thinking Models exhibit a more pronounced increase in ATC as difficulty rises. For instance, the ATC of Seed1.5-VL increases by up to 3k tokens, and that of o4-mini by up to 5k tokens. In contrast, the ATC increase for Qwen2.5-VL and GPT-4o remains within 300 tokens.

4.4 Case Study

In Fig. 9, we present an example of abductive reasoning that demands planning and exploration. From this case, several key observations can be identified: (1) Long reasoning process: the selected models generated over 1k tokens in response, with o4-mini producing up to 24.6k tokens. This demonstrates that MME-Reasoning constitutes a highly challenging benchmark for multimodal reasoning. (2) Planning in the problem-solving process: the response includes multiple iterations of "hypothesis generation (possible move) – feasibility verification (check escape squares) – check", indicating that the model spontaneously engages in structured planning and reflection to explore solutions within an open-ended problem-solving space. (3) Repetitive reflection: we observed that the model tends to revisit and reflect on the same reasoning paths multiple times, up to 7 instances in some cases. This behavior may result in significant computational overhead and informational redundancy. Balancing reasoning efficiency with performance remains a critical issue to be addressed.

5 Conclusion

We introduce MME-Reasoning, a comprehensive benchmark designed to evaluate MLLMs' logical reasoning abilities across inductive, deductive, and abductive reasoning types.
Through careful data curation and an expanded evaluation protocol, our benchmark provides a holistic assessment of reasoning capabilities beyond simple perception or high-level knowledge. Our experiments reveal that existing MLLMs still face significant challenges and exhibit notable performance imbalances across different reasoning types. These findings underscore the need for further research and development to enhance the reasoning abilities of MLLMs, paving the way for more generalizable AI systems.

References

OpenAI. OpenAI o3 and o4-mini system card, 2025. URL https://openai.com/index/o3-o4-mini-system-card/.

Anthropic. Claude, 2022. URL https://www.anthropic.com/index/introducing-claude.

Gilad Baruch, Zhuoyuan Chen, Afshin Dehghan, Tal Dimry, Yuri Feigin, Peter Fu, Thomas Gebauer, Brandon Joffe, Daniel Kurz, Arik Schwartz, et al. ARKitScenes: A diverse real-world dataset for 3D indoor scene understanding using mobile RGB-D data. arXiv preprint arXiv:2111.08897, 2021.

Huanqia Cai, Yijun Yang, and Winston Hu. MM-IQ: Benchmarking human-like abstraction and reasoning in multimodal models. arXiv preprint arXiv:2502.00698, 2025.

Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-V: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025. Accessed: 2025-02-02.

Zhenfang Chen, Qinhong Zhou, Yikang Shen, Yining Hong, Hao Zhang, and Chuang Gan. See, think, confirm: Interactive prompting between vision and language models for knowledge-based visual reasoning. arXiv preprint arXiv:2301.05226, 2023.

Yew Ken Chia, Vernon Toh Yan Han, Deepanway Ghosal, Lidong Bing, and Soujanya Poria. PuzzleVQA: Diagnosing multimodal reasoning challenges of language models with abstract visual patterns. arXiv preprint arXiv:2403.13315, 2024.

Angela Dai, Angel X Chang,
Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5828–5839, 2017.

DeepSeek-AI. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.

Matt Deitke, Christopher Clark, Sangho Lee, Rohun Tripathi, Yue Yang, Jae Sung Park, Mohammadreza Salehi, Niklas Muennighoff, Kyle Lo, Luca Soldaini, et al. Molmo and PixMo: Open weights and open data for state-of-the-art multimodal models. arXiv preprint arXiv:2409.17146, 2024.

Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. OpenVLThinker: An early exploration to complex vision-language reasoning via iterative self-improvement. arXiv preprint arXiv:2503.17352, 2025.

Yifan Du, Zikang Liu, Yifan Li, Wayne Xin Zhao, Yuqi Huo, Bingning Wang, Weipeng Chen, Zheng Liu, Zhongyuan Wang, and Ji-Rong Wen. Virgo: A preliminary exploration on reproducing o1-like MLLM. arXiv preprint arXiv:2501.01904, 2025.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. URL https://arxiv.org/abs/2407.21783.

Kaituo Feng, Kaixiong Gong, Bohao Li, Zonghao Guo, Yibing Wang, Tianshuo Peng, Benyou Wang, and Xiangyu Yue. Video-R1: Reinforcing video reasoning in MLLMs. arXiv preprint arXiv:2503.21776, 2025.

Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. MME: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023.

Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al.
Video-MME: The first-ever comprehensive evaluation benchmark of multi-modal LLMs in video analysis. arXiv preprint arXiv:2405.21075, 2024.

Gemini, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. URL https://arxiv.org/abs/2312.11805.

Kaixiong Gong, Kaituo Feng, Bohao Li, Yibing Wang, Mofan Cheng, Shijia Yang, Jiaming Han, Benyou Wang, Yutong Bai, Zhuoran Yang, et al. AV-Odyssey Bench: Can your multimodal LLMs really understand audio-visual information? arXiv preprint arXiv:2412.02611, 2024.

Dong Guo, Faming Wu, Feida Zhu, Fuxing Leng, Guang Shi, Haobin Chen, Haoqi Fan, Jian Wang, Jianyu Jiang, Jiawei Wang, et al. Seed1.5-VL technical report. arXiv preprint arXiv:2505.07062, 2025a.

Ziyu Guo, Ray Zhang, Hao Chen, Jialin Gao, Dongzhi Jiang, Jiaze Wang, and Pheng-Ann Heng. SciVerse: Unveiling the knowledge comprehension and visual reasoning of LMMs on multi-modal scientific problems. arXiv preprint arXiv:2503.10627, 2025b.

Yunzhuo Hao, Jiawei Gu, Huichen Will Wang, Linjie Li, Zhengyuan Yang, Lijuan Wang, and Yu Cheng. Can MLLMs reason in multimodality? EMMA: An enhanced multimodal reasoning benchmark. arXiv preprint arXiv:2501.05444, 2025.

Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.

Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth,
|
https://arxiv.org/abs/2505.21327v1
|
Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403 , 2024. Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Zhe Xu, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749 , 2025. Drew A. Hudson and Christopher D. Manning. GQA: A new dataset for real-world visual reasoning and compositional question answering. In IEEE Conference on Com- puter Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019 , pp. 6700–6709. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019. 00686. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Hudson_GQA_A_New_ Dataset_for_Real-World_Visual_Reasoning_and_Compositional_CVPR_2019_paper.html. Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720 , 2024. 14 MME-Reasoning Dongzhi Jiang, Renrui Zhang, Ziyu Guo, Yanwei Li, Yu Qi, Xinyan Chen, Liuhui Wang, Jianhan Jin, Claire Guo, Shen Yan, Bo Zhang, Chaoyou Fu, Peng Gao, and Hongsheng Li. Mme-cot: Benchmarking chain-of-thought in large multimodal models for reasoning quality, robustness, and efficiency, 2025. URL https://arxiv.org/abs/2502.09621. Yilei Jiang, Yingshui Tan, and Xiangyu Yue. Rapguard: Safeguarding multimodal large language models via rationale-aware defensive prompting, 2024. URL https://arxiv.org/abs/2412.18826. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning, 2016. URL https://arxiv.org/abs/1612.06890. 
Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326 , 2024. Hanmeng Liu, Zhizhang Fu, Mengru Ding, Ruoxi Ning, Chaoli Zhang, Xiaozhang Liu, and Yue Zhang. Logical reasoning in large language models: A survey. arXiv preprint arXiv:2502.09100 , 2025a. Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785 , 2025b. Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255 , 2023. Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net, 2024. URL https: //openreview.net/forum?id=KUNzEQMWU7. Yiting Lu, Jiakang Yuan, Zhen Li, Shitian Zhao, Qi Qin, Xinyue Li, Le Zhuo, Licheng Wen, Dongyang Liu, Yuewen Cao, et al. Omnicaptioner: One captioner to rule them all. arXiv preprint arXiv:2504.07089 , 2025. Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, Ping Luo, Yu Qiao, Qiaosheng Zhang, and Wenqi Shao. Mm-eureka: Exploring visual aha moment with rule-based large-scale reinforcement learning, 2025. URL https://github.com/ModalMinds/MM-EUREKA. OpenAI. Hello gpt4-o. https://openai.com/index/hello-gpt-4o/, 2024. URL https://openai.com/ index/hello-gpt-4o/. Charles Sanders Peirce. Illustrations of the Logic of Science . Open Court, 2014.
Tianshuo Peng, Mingsheng Li, Hongbin Zhou, Renqiu Xia, Renrui Zhang, Lei Bai, Song Mao, Bin Wang, Conghui He, Aojun Zhou, et al. Chimera: Improving generalist model with domain- specific experts. arXiv preprint arXiv:2412.05983 , 2024. Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. Lmm-r1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536 , 2025a. 15 MME-Reasoning Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. Lmm-r1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536 , 2025b. Qwen Team. Qwen2.5-vl, January 2025a. URL https://qwenlm.github.io/blog/qwen2.5-vl/. Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, March 2025b. URL https://qwenlm.github.io/blog/qwq-32b/. David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. Gpqa: A graduate-level google-proof qa benchmark, 2023. URL https://arxiv.org/abs/2311.12022. Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, Ruochen Xu, and Tiancheng Zhao. Vlm-r1: A stable and generalizable r1-style large vision-language model, 2025. URL https://arxiv.org/abs/2504.07615. Yueqi Song, Tianyue Ou, Yibo Kong, Zecheng Li, Graham Neubig, and Xiang Yue. Visualpuz- zles: Decoupling multimodal reasoning evaluation from domain knowledge. arXiv preprint arXiv:2504.10342 , 2025. Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1. 5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599 , 2025a. 
Kimi Team, Angang Du, Bohong Yin, Bowei Xing, Bowen Qu, Bowen Wang, Cheng Chen, Chenlin Zhang, Chenzhuang Du, Chu Wei, et al. Kimi-vl technical report. arXiv preprint arXiv:2504.07491 , 2025b. Qwen Team. Qvq: To see the world with wisdom, December 2024. URL https://qwenlm.github.io/ blog/qvq-72b-preview/. Omkar Thawakar, Dinura Dissanayake, Ketan More, Ritesh Thawkar, Ahmed Heakl, Noor Ahsan, Yuhao Li, Mohammed Zumri, Jean Lahoud, Rao Muhammad Anwer, et al. Llamav-o1: Rethinking step-by-step visual reasoning in llms. arXiv preprint arXiv:2501.06186 , 2025. Haozhe Wang, Chao Qu, Zuming Huang, Wei Chu, Fangzhen Lin, and Wenhu Chen. Vl-rethinker: Incentivizing self-reflection of vision-language models with reinforcement learning. arXiv preprint arXiv:2504.08837 , 2025. Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hong- sheng Li. Measuring multimodal mathematical reasoning with math-vision dataset. Advances in Neural Information Processing Systems , 37:95095–95169, 2024a. Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model’s perception of the world at any resolution. arXiv preprint arXiv:2409.12191 , 2024b. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824–24837, 2022. Renqiu Xia, Mingsheng Li, Hancheng Ye, Wenjie Wu, Hongbin Zhou, Jiakang Yuan, Tianshuo Peng, Xinyu Cai, Xiangchao Yan, Bin Wang, et al. Geox: Geometric problem solving through unified formalized vision-language pre-training. arXiv preprint arXiv:2412.11863 , 2024a.
16 MME-Reasoning Renqiu Xia, Bo Zhang, Hancheng Ye, Xiangchao Yan, Qi Liu, Hongbin Zhou, Zijun Chen, Peng Ye, Min Dou, Botian Shi, et al. Chartx & chartvlm: A versatile benchmark and foundation model for complicated chart reasoning. arXiv preprint arXiv:2402.12185 , 2024b. An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. ArXiv preprint , abs/2407.10671, 2024a. URL https://arxiv.org/abs/2407.10671. Jihan Yang, Shusheng Yang, Anjali W Gupta, Rilyn Han, Li Fei-Fei, and Saining Xie. Thinking in space: How multimodal large language models see, remember, and recall spaces. arXiv preprint arXiv:2412.14171 , 2024b. Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, et al. R1-onevision: Advancing generalized multimodal reasoning through cross-modal formalization. arXiv preprint arXiv:2503.10615 , 2025. Fanglong Yao, Changyuan Tian, Jintao Liu, Zequn Zhang, Qing Liu, Li Jin, Shuchao Li, Xiaoyu Li, and Xian Sun. Thinking like an expert: Multimodal hypergraph-of-thought (hot) reasoning to boost foundation modals. arXiv preprint arXiv:2308.06207 , 2023. Huanjin Yao, Jiaxing Huang, Wenhao Wu, Jingyi Zhang, Yibo Wang, Shunyu Liu, Yingjie Wang, Yuxin Song, Haocheng Feng, Li Shen, et al. Mulberry: Empowering mllm with o1-like reasoning and reflection via collective monte carlo tree search. arXiv preprint arXiv:2412.18319 , 2024. Peng Yingzhe, Zhang Gongrui, Zhang Miaosen, You Zhiyuan, Liu Jie, Zhu Qipeng, Yang Kai, Xu Xingzhong, Geng Xin, and Yang Xu. Lmm-r1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl, 2025. Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. 
Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of CVPR , 2024a. Xiang Yue, Tianyu Zheng, Yuansheng Ni, Yubo Wang, Kai Zhang, Shengbang Tong, Yuxuan Sun, Botao Yu, Ge Zhang, Huan Sun, et al. Mmmu-pro: A more robust multi-discipline multimodal understanding benchmark. arXiv preprint arXiv:2409.02813 , 2024b. Liu Yuqi, Peng Bohao, Zhong Zhisheng, Yue Zihao, Lu Fanbin, Yu Bei, and Jia Jiaya. Seg-zero: Reasoning-chain guided segmentation via cognitive reinforcement, 2025. URL https://arxiv.org/ abs/2503.06520. Jingyi Zhang, Jiaxing Huang, Huanjin Yao, Shunyu Liu, Xikun Zhang, Shijian Lu, and Dacheng Tao. R1-vl: Learning to reason with multimodal large language models via step-wise group relative policy optimization. arXiv preprint arXiv:2503.12937 , 2025a. Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Yu Qiao, et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? In European Conference on Computer Vision , pp. 169–186. Springer, 2024. Zeyu Zhang, Zijian Chen, Zicheng Zhang, Yuze Sun, Yuan Tian, Ziheng Jia, Chunyi Li, Xiaohong Liu, Xiongkuo Min, and Guangtao Zhai. Puzzlebench: A fully dynamic evaluation framework for large multimodal models on puzzle solving. arXiv preprint arXiv:2504.10885 , 2025b. 17 MME-Reasoning Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal
chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923 , 2023. Jiaxing Zhao, Xihan Wei, and Liefeng Bo. R1-omni: Explainable omni-multimodal emotion recogni- tion with reinforcement learning, 2025. Changmeng Zheng, Dayong Liang, Wengyu Zhang, Xiao-Yong Wei, Tat-Seng Chua, and Qing Li. A picture is worth a graph: A blueprint debate paradigm for multimodal reasoning. In Proceedings of the 32nd ACM International Conference on Multimedia , pp. 419–428, 2024. Hengguang Zhou, Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, and Cho-Jui Hsieh. R1-zero’s "aha moment" in visual reasoning on a 2b non-sft model, 2025. URL https://arxiv.org/ abs/2503.05132. Luowei Zhou, Chenliang Xu, and Jason Corso. Towards automatic learning of procedures from web instructional videos. In Proceedings of the AAAI conference on artificial intelligence , volume 32, 2018. Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Yuchen Duan, Hao Tian, Weijie Su, Jie Shao, et al. Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models. arXiv preprint arXiv:2504.10479 , 2025. 18 MME-Reasoning Technical Appendices and Supplementary Material for MME-Reasoning A More Experimental Results 20 A.1 Full Results on MME-Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.2 Full Results on Mini-set of MME-Reasoning . . . . . . . . . . . . . . . . . . . . . . . . 20 A.3 Human Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.4 Results on Different Question Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.5 Results with Test-Time Compute Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.6 Results with CoT Prompt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 A.7 Token Usage of Thinking Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 A.8 Results of Captioner & LLMs . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . 25 B Details of Annotation 25 B.1 Difficulty Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . 25
B.2 Reasoning Type Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
B.3 Capability Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
C Details of Implementation 27
D Details of Evaluation 28
D.1 Prompts for Answer Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
E Examples of MME-Reasoning 28
F Limitation 28

A More Experimental Results

A.1 Full Results on MME-Reasoning

We present the performance of more baselines on MME-Reasoning in Tab. 4, including OpenVLThinker (Deng et al., 2025), LMM-R1-MGT-PerceReason (Peng et al., 2025a), Mulberry (Yao et al., 2024), LlamaV-o1 (Thawakar et al., 2025), and the Qwen2-VL series (Wang et al., 2024b).

A.2 Full Results on Mini-set of MME-Reasoning

We randomly sampled 25% of the questions and conducted a manual review to ensure that the diversity of image types was maintained. These sampled questions were then used to construct the Mini-set. We also analyzed the question distributions of both the Mini-set and the Full-set to ensure the sampled questions retained the same distribution. The statistical results are presented in Tab. 5. We provide the performance of all baseline models on the Mini-set in Tab. 6. All baseline models achieved similar performance on both the Full-set and the Mini-set, further demonstrating the consistency of the Mini-set and the comparability of model performance across the two splits.

A.3 Human Performance

To evaluate expert-level performance on MME-Reasoning, we further report human performance on the Mini-set. As shown in Tab. 6, the human expert achieved an overall score of 83.4, significantly outperforming the best-performing thinking model, Seed1.5-VL-T, which scored 62.6.
Looking deeper into the reasoning types, the human expert scored 85.8, 76.9, and 85.6 on deductive, inductive, and abductive reasoning, respectively, all notably higher than the scores of the best-performing model. Moreover, the human expert demonstrated particularly strong abductive reasoning, with performance comparable to that on deductive reasoning, which is the key focus of current multimodal reasoning research. This strength aligns with a few top-performing models but stands in contrast to most baseline models, which show clear weaknesses in abductive reasoning. These results highlight the significant gap that remains between current thinking and chat models and human-level performance in comprehensive multimodal reasoning evaluation. Expanding complex reasoning tasks beyond domain-specific knowledge questions to include a broader range of reasoning types and more diverse tasks will be a crucial step toward addressing these limitations.

A.4 Results on Different Question Types

We also evaluated model performance across different question types; the results are presented in Tab. 7.

A.5 Results with Test-Time Compute Scaling

To evaluate whether
the use of Test-Time Compute Scaling (TTS) methods can improve model performance on MME-Reasoning, we take Qwen2.5-VL-7B as an example and use Qwen2.5-VL-32B as the Reward Model. The evaluation is conducted using the Monte Carlo Tree Search (MCTS) algorithm, with the settings: branch = 3 and max-iteration = 18 . The results are shown in Table 8. Under the MCTS-based setting, the model’s performance dropped noticeably across all reasoning types. We attribute this decline to two main factors: (1) Questions in MME-Reasoning often involve 20 MME-Reasoning Table 4: Performance comparison of state-of-the-art MLLMs on MME-Reasoning. The top three are highlighted in blue . “T" represents “Thinking". ModelModel Capability Reasoning TypeA VG. CAL. P& E. PA. S&T. CCA. DED. IND. ABD. Close-source & Thinking Gemini-2.5-Pro-T 68.0 64.4 53.7 52.1 90.3 64.0 51.7 62.8 60.2 Seed1.5-VL-T 67.2 62.7 56.0 47.2 82.6 64.5 52.3 60.8 59.9 o4-mini 63.1 58.3 57.2 50.4 59.0 60.6 51.4 59.0 57.5 Claude-4-Sonnet-T 33.3 35.9 33.0 36.2 47.9 39.4 32.0 35.7 36.1 Claude-3.7-Sonnet-T 30.4 27.6 32.3 38.3 46.5 34.6 36.2 31.7 34.1 Gemini-2.5-Flash-T 19.8 21.3 20.9 33.0 38.9 28.1 22.1 24.6 25.2 Close-source & Chat Seed1.5-VL 52.0 42.0 38.4 44.0 72.9 54.9 45.0 41.0 47.5 GPT-4o 21.4 22.1 30.5 38.6 36.8 29.0 34.7 27.9 30.2 Claude-3.7-Sonnet 29.0 24.6 32.8 35.5 46.5 35.7 38.7 26.1 33.3 Kimi-Latest 21.4 17.4 19.8 29.1 41.0 27.7 25.4 19.9 24.4 Open-source & Thinking QVQ-72B-Preview 37.4 27.1 28.8 35.8 57.6 41.6 33.5 29.1 35.2 Virgo-72B 30.4 22.9 26.1 36.2 47.2 37.7 32.6 24.4 31.8 VL-Rethinker-72B 33.6 28.4 31.4 37.2 59.7 39.0 36.0 31.9 35.8 VL-Rethinker-7B 24.7 17.7 23.5 39.4 42.4 34.4 29.9 22.9 29.3 MM-Eureka-Qwen-32B 23.0 25.7 25.6 36.2 50.7 32.9 30.5 28.1 30.6 MM-Eureka-Qwen-7B 27.1 19.3 22.3 31.9 50.0 32.7 28.7 22.6 28.2 R1-VL-7B 16.3 11.6 17.7 30.9 26.4 25.3 21.8 15.8 21.1 Vision-R1-7B 18.2 18.0 17.9 34.4 36.1 27.4 26.3 18.1 24.0 R1-Onevision-7B-RL 19.5 12.2 20.0 31.6 27.1 27.7 24.8 14.6 22.5 Kimi-VL-A3B-T 
28.7 16.0 19.5 32.3 35.4 33.3 25.1 18.1 25.9 OpenVLThinker-7B 19.8 14.6 19.3 35.8 34.7 30.7 24.8 17.3 24.6 LMM-R1-MGT-PerceReason 22.2 16.0 23.7 37.9 34.0 30.3 32.3 20.1 27.4 Mulberry 14.6 13.3 18.8 33.7 31.3 23.8 25.4 17.6 22.1 LlamaV-o1 14.9 7.7 16.5 28.0 25.0 22.4 21.5 12.3 18.8 Open-source & Chat Qwen2.5-VL-72B 31.7 25.1 27.2 37.9 53.5 39.0 32.3 29.9 34.1 Qwen2.5-VL-32B 32.2 26.8 24.4 39.0 52.1 40.5 27.5 29.6 33.2 Qwen2.5-VL-7B 22.2 18.2 21.9 35.1 36.1 31.4 27.5 20.9 26.8 Qwen2.5-VL-3B 17.6 15.5 19.0 39.7 32.6 28.5 27.5 19.6 25.6 Qwen2-VL-72B 19.2 19.3 24.9 36.2 44.4 28.8 32.3 22.1 27.5 Qwen2-VL-7B 15.7 12.4 19.8 37.9 30.5 25.5 25.7 19.7 23.4 Qwen2-VL-2B 13.0 8.1 19.3 31.6 19.4 22.7 25.7 11.8 19.9 InternVL3-78B 26.0 24.0 26.5 41.8 50.0 35.1 33.8 27.1 32.1 InternVL3-38B 23.0 18.5 23.0 38.3 41.7 33.5 29.0 22.1 28.4 InternVL3-8B 19.5 19.6 22.6 31.6 41.0 28.1 29.9 21.4 26.4 Molmo-72B 12.5 11.9 14.7 28.7 28.5 23.1 18.4 14.3 18.9 Molmo-7B-D 11.7 8.6 8.1 27.3 23.6 20.7 10.9 11.1 14.7 Molmo-7B-O 8.1 5.5 11.6 22.7 15.3 16.6 16.0 7.5 13.4 LLaVA-OV-72B 17.1 18.0 23.9 32.3 38.9 27.4 30.5 19.9 25.8
Kimi-VL-A3B 18.7 11.9 21.4 34.0 27.8 25.9 26.3 17.1 23.1

Table 5: Comparison of statistics between the Full-set and Mini-set of MME-Reasoning.

Split: DED. IND. ABD. (Reasoning Type) | Open MCQ Rule. (Question Type) | Easy Medium Hard (Difficulty Level)
Mini 39.7% 25.8% 34.4% | 32.4% 58.3% 9.3% | 31.8% 39.4% 28.8%
Full 38.6% 27.9% 33.5% | 31.6% 58.5% 9.9% | 30.8% 39.3% 29.9%

complex parallel reasoning, hypothesis generation, and reflection, rather than simple linear logical progression. These characteristics may not be effectively captured by the Reward Model. (2) The limited capabilities of the Reward Model result in guidance that lacks practical utility. We leave further exploration of TTS methods for reasoning to future work and hope that MME-Reasoning can serve as a representative benchmark for developing more general and comprehensive TTS algorithms for reasoning tasks.

A.6 Results with CoT Prompt

Chain-of-Thought (CoT) prompting increases output length by encouraging explicit output of the thought process, thereby enhancing reasoning performance. To investigate the impact of CoT on MME-Reasoning performance, we evaluated the Qwen2.5-VL and InternVL3 series using the CoT prompts shown in Tab. 10. The results are presented in Tab. 9. We observed that the Qwen2.5-VL models naturally tend to generate their reasoning process, so adding a CoT prompt did not significantly increase output length. In contrast, InternVL3 models under default settings tend to output the final answer directly, and the CoT prompt substantially increased output length. In terms of performance, adding the CoT prompt consistently led to performance degradation for the Qwen2.5-VL series. For InternVL3, performance dropped for the 8B model but improved for the larger 38B and 78B models. One possible hypothesis is that for models already inclined to produce long outputs, explicit CoT instructions might introduce noise into the reasoning process.
Conversely, among models that tend to answer questions directly, smaller ones struggle to produce helpful and correct CoT outputs, but as model size increases, they begin to benefit noticeably from relatively accurate reasoning processes.

A.7 Token Usage of Thinking Models

In Fig. 10, we present the average token length of different thinking models on MME-Reasoning. Overall, there is a clear trend: better model performance is often associated with longer reasoning paths. However, we also observe diminishing returns in performance as output length grows, for both open-source and closed-source models. Additionally, although current rule-based reinforcement learning (RL) models show a promising trend of increased output length during training, no significant length gains were observed on MME-Reasoning. This limitation may stem from the limited variety and mismatched complexity of the reasoning tasks seen during training. Exploring how different types of reasoning tasks can better stimulate the effectiveness of RL for reasoning may therefore be a valuable direction for future research.

Table 6: Performance comparison of state-of-the-art MLLMs on the mini-set of MME-Reasoning. The top three are highlighted in blue. "T" represents "Thinking".

Model: CAL. P&E. PA. S&T. CCA. (Model Capability) | DED. IND. ABD. (Reasoning Type) | AVG.

Human Performance
Human Expert 75.0 84.4 84.9 80.3 88.1 85.8 76.9 85.6 83.4
Close-source & Thinking
Gemini-2.5-Pro-T 66.0 63.5 58.5 49.3 85.7 60.8 55.1 65.4 60.9
Seed1.5-VL-T 68.0 67.7
58.5 49.3 83.3 67.5 48.7 67.3 62.6 o4-mini 64.0 58.3 56.6 45.1 54.8 57.5 51.3 60.6 57.0 o1 50.0 38.5 41.5 43.7 52.4 50.8 42.3 42.3 45.7 Claude-4-Sonnet-T 33.0 30.2 35.8 39.4 50.0 42.5 37.2 33.7 38.1 Claude-3.7-Sonnet-T 30.0 17.7 36.8 38.0 38.1 31.7 42.3 27.9 33.1 Gemini-2.5-Flash-T 18.0 16.7 15.1 39.4 33.3 27.5 19.2 26.0 24.8 Close-source & Chat Seed1.5-VL 50.0 42.7 34.9 40.8 69.0 57.5 39.7 39.4 46.7 GPT-4o 20.0 24.0 24.5 40.8 33.3 31.7 28.2 27.9 29.5 Claude-3.7-Sonnet 27.0 22.9 34.0 31.0 42.9 31.7 38.5 27.9 32.1 Kimi-Latest 22.0 17.7 17.9 29.6 33.3 30.8 23.1 19.2 24.8 Open-source & Thinking QVQ-72B-Preview 36.0 24.0 34.0 33.8 47.6 38.3 37.2 29.8 35.1 Virgo-72B 28.0 18.8 27.4 43.7 38.1 37.5 41.0 21.2 32.8 VL-Rethinker-72B 23.0 25.0 29.2 39.4 42.9 34.2 32.1 31.7 32.8 VL-Rethinker-7B 23.0 16.7 21.7 47.9 40.5 35.8 28.2 26.0 30.5 MM-Eureka-Qwen-32B 23.0 20.8 26.4 38.0 38.1 32.5 34.6 25.0 30.5 MM-Eureka-Qwen-7B 28.0 17.7 21.7 32.4 50.0 32.5 32.1 22.1 28.8 R1-VL-7B 10.0 10.4 16.0 35.2 16.7 23.3 19.2 16.3 19.9 Vision-R1-7B 14.0 12.5 18.9 39.4 31.0 26.7 29.5 16.3 23.8 R1-Onevision-7B-RL 15.0 10.4 22.6 35.2 19.0 22.5 30.8 16.3 22.5 Kimi-VL-A3B-T 30.0 9.4 19.8 26.8 31.0 28.3 26.9 16.3 23.8 OpenVLThinker-7B 14.0 14.6 14.2 33.8 28.6 29.2 16.7 16.3 21.5 LMM-R1-MGT-PerceReason 27.0 14.6 23.6 38.0 33.3 35.8 33.3 18.3 29.1 Mulberry 19.0 15.6 18.9 33.8 33.3 28.3 23.1 18.3 23.5 LlamaV-o1 15.0 8.3 17.9 31.0 26.2 23.3 23.1 15.4 20.5 Open-source & Chat Qwen2.5-VL-72B 31.0 19.8 25.5 38.0 42.9 39.2 32.1 26.0 32.8 Qwen2.5-VL-32B 31.0 28.1 28.3 40.8 45.2 41.7 34.6 27.9 35.1 Qwen2.5-VL-7B 19.0 16.7 24.5 38.0 33.3 32.5 30.8 21.2 28.1 Qwen2.5-VL-3B 21.0 14.6 21.2 39.4 31.0 30.0 30.8 21.2 27.2 Qwen2-VL-72B 20.0 19.8 28.3 38.0 38.1 34.2 39.7 19.2 30.5 Qwen2-VL-7B 16.0 9.4 25.5 33.8 26.2 22.5 34.6 16.3 23.5 Qwen2-VL-2B 12.0 9.4 17.9 29.6 19.0 23.3 23.1 11.5 19.2 InternVL3-78B 25.0 22.9 33.0 42.3 40.5 36.7 43.6 24.0 34.1 InternVL3-38B 19.0 19.8 26.4 36.6 38.1 31.7 33.3 23.1 29.1 InternVL3-8B 19.0 
20.8 29.2 23.9 35.7 26.7 35.9 21.2 27.2 Molmo-72B 11.0 13.5 16.0 35.2 31.0 26.7 21.8 17.3 22.2 Molmo-7B-D 12.0 8.3 12.3 28.2 16.7 22.5 15.4 9.6 16.2 Molmo-7B-O 7.0 4.2 11.3 25.4 14.3 19.2 17.9 4.8 13.9 LLaVA-OV-72B 13.0 19.8 25.5 23.9 35.7 25.0 30.8 17.3 23.8 Kimi-VL-A3B 18.0 8.3 18.9 29.6 9.5 23.3 23.1 11.5 19.2 23 MME-Reasoning Table 7: Performance across different question types on MME-Reasoning. The top three are highlighted in green .†indicates the model was evaluated on the mini-set. “T" represents “Thinking". ModelChoice Open Rule DED. IND. ABD. ALL DED. IND. ABD. ALL ABD.&ALL Close-source & Thinking Gemini-2.5-Pro-T 58.0 49.8 63.6 55.7 75.9 61.5 60.0 66.9 66.1 Seed1.5-VL-T 57.3 54.2 60.2 56.5 78.5 44.2 59.4 65.3 63.5 o4-mini 57.3 48.7 61.9 54.7 67.1 67.3 48.5 58.9 71.3 o1†46.2 42.4 53.3 46.0 60.0 45.5 36.2 46.9 40.7 Claude-4-Sonnet-T 41.1 33.2 40.7 37.9 36.5 25.0 35.2 34.3 31.3 Claude-3.7-Sonnet-T 38.0 39.7 46.6 40.1 28.5 17.3 31.5 28.3 16.5 Gemini-2.5-Flash-T 31.7 23.8 37.3 29.5 21.5 11.5 23.0 20.8 13.9 Close-source & Chat
Seed1.5-VL 54.0 46.2 59.3 51.8 57.0 38.5 42.4 48.0 20.0 GPT-4o 36.3 38.3 48.3 39.1 15.2 17.3 27.9 21.1 7.0 Claude-3.7-Sonnet 38.7 42.2 37.3 39.9 30.4 21.2 27.9 28.0 12.2 Kimi-Latest 31.3 29.6 38.1 31.8 20.9 3.8 20.0 18.1 0.9 Open-source & Thinking QVQ-72B-Preview 43.7 35.0 45.8 40.6 38.0 26.9 31.5 33.6 8.7 Virgo-72B 39.7 36.8 47.5 39.9 34.2 11.5 22.4 25.9 3.5 VL-Rethinker-72B 43.3 38.3 53.4 43.0 31.0 25.0 29.1 29.3 13.9 VL-Rethinker-7B 41.3 33.9 51.7 40.1 21.5 9.6 16.4 17.6 2.6 MM-Eureka-Qwen-32B 36.7 33.2 49.2 37.4 25.9 17.3 26.1 24.8 9.6 MM-Eureka-Qwen-7B 36.7 32.9 41.5 36.0 25.3 7.7 21.8 21.3 4.3 R1-VL-7B 31.7 24.9 34.7 29.5 13.3 5.8 12.7 12.0 0.9 Vision-R1-7B 32.3 29.6 33.9 31.5 18.4 9.6 16.4 16.3 4.3 R1-Onevision-7B-RL 34.3 27.8 31.4 31.2 15.2 9.6 12.1 13.1 0.9 Kimi-VL-A3B-T 33.0 27.4 34.7 31.1 34.2 13.5 14.5 22.7 6.1 OpenVLThinker-7B 38.7 28.2 41.5 35.0 15.8 7.7 11.5 12.8 0.9 LMM-R1-MGT-PerceReason 36.3 36.1 44.9 37.7 19.0 13.5 15.8 16.8 0.9 Mulberry 30.0 28.9 42.4 31.7 12.0 7.7 11.5 11.2 0.9 LlamaV-o1 26.7 25.3 29.7 26.6 14.6 1.9 7.9 9.9 0.9 Open-source & Chat Qwen2.5-VL-72B 41.0 34.3 55.1 40.7 34.8 23.1 26.1 29.3 9.6 Qwen2.5-VL-32B 44.0 30.0 50.0 39.4 34.2 15.4 27.3 28.5 12.2 Qwen2.5-VL-7B 39.0 30.0 41.5 35.8 17.1 15.4 18.2 17.3 3.5 Qwen2.5-VL-3B 33.3 30.7 46.6 34.5 19.6 11.5 13.9 16.0 0.0 Qwen2-VL-72B 35.3 36.5 46.6 37.7 16.5 11.5 18.8 16.8 1.7 Qwen2-VL-7B 32.7 28.9 45.8 33.4 12.0 9.6 12.7 12.0 0.9 Qwen2-VL-2B 30.0 30.3 31.4 30.4 8.9 1.9 6.1 6.7 0.0 InternVL3-78B 40.0 37.9 54.2 41.6 25.9 13.5 23.0 22.9 5.2 InternVL3-38B 36.7 32.1 48.3 36.8 27.8 13.5 17.0 21.1 2.6 InternVL3-8B 29.7 35.7 47.5 35.1 25.3 0.0 17.0 18.1 0.9 Molmo-72B 30.0 20.2 30.5 26.2 10.1 9.6 11.5 10.7 1.7 Molmo-7B-D 27.0 12.6 23.7 20.7 8.9 1.9 9.7 8.3 0.0 Molmo-7B-O 23.0 18.4 16.9 20.1 4.4 3.8 6.1 5.1 0.0 LLaVA-OV-72B 33.7 35.4 39.0 35.3 15.8 5.8 18.8 15.7 1.7 Kimi-VL-A3B 31.3 30.0 42.4 32.7 15.8 7.7 9.1 11.7 2.6 24 MME-Reasoning Table 8: Performance comparison of chat models with or w/o 
MCTS.

Model: CAL. P&E. PA. S&T. CCA. (Model Capability) | DED. IND. ABD. (Reasoning Type) | AVG.
Qwen2.5-VL-7B 22.2 18.2 21.9 35.1 36.1 31.4 27.5 20.9 26.8
+ MCTS 20.6 13.8 18.8 30.5 35.4 28.1 23.6 17.6 23.3

Table 9: Performance comparison of SoTA chat models with or w/o CoT prompt.

Model: CAL. P&E. PA. S&T. CCA. (Model Capability) | DED. IND. ABD. (Reasoning Type) | AVG.
Qwen2.5-VL-7B 22.2 18.2 21.9 35.1 36.1 31.4 27.5 20.9 26.8
+ CoT prompt 20.3 18.5 17.9 33.3 38.9 28.3 21.4 23.6 24.7
Qwen2.5-VL-32B 32.2 26.8 24.4 39.0 52.1 40.5 27.5 29.6 33.2
+ CoT prompt 29.0 24.6 23.3 40.8 52.1 40.1 28.7 26.4 32.3
Qwen2.5-VL-72B 31.7 25.1 27.2 37.9 53.5 39.0 32.3 29.9 34.1
+ CoT prompt 32.5 26.2 25.1 37.2 52.8 37.5 30.8 30.0 33.0
InternVL3-8B 19.5 19.6 22.6 31.6 41.0 28.1 29.9 21.4 26.4
+ CoT prompt 21.1 16.3 20.2 31.6 38.2 31.2 26.9 16.8 25.2
InternVL3-38B 23.0 18.5 23.0 38.3 41.7 33.5 29.0 22.1 28.4
+ CoT prompt
28.7 24.3 28.6 38.3 48.6 37.5 32.9 26.9 32.7 InternVL3-78B 26.0 24.0 26.5 41.8 50.0 35.1 33.8 27.1 32.1 + CoT prompt 29.0 22.9 27.0 40.8 48.6 36.6 35.1 26.9 32.9

A.8 Results of Captioner & LLMs

We used GPT-4o as the captioner to generate visual descriptions for each question as a substitute for the images. Then we evaluated existing LLMs with "thinking mode," and the results are presented in Tab. 11. As shown in the results, even when only indirectly perceiving image content through textual descriptions, QwQ (Qwen Team, 2025b) and R1 (DeepSeek-AI, 2025) achieved impressive scores of 41.9 and 46.9, respectively, surpassing even Claude-3.7-Sonnet-Thinking. These findings indicate that there is still substantial room for improvement in extending long-term reasoning capabilities from LLMs to the multimodal domain. This gap may be due, in part, to degradation of the foundation model's capabilities during vision-language alignment. Additionally, the diversity of reasoning tasks specific to multimodal settings has yet to be thoroughly explored.

B Details of Annotation

B.1 Difficulty Annotation

For each question, we assign a difficulty label: Easy, Medium, or Hard, based on the cognitive load required to solve it. The labeling criteria are as follows:

Model | CoT Prompt
Qwen2.5-VL | Let's think step by step.
InternVL3 | Answer the preceding question. The last line of your response should follow this format: 'Answer: $FINAL_ANSWER' (without quotes), where 'FINAL_ANSWER' is your conclusion based on the reasoning provided. If you are uncertain or the problem is too complex, make a reasoned guess based on the information provided. Avoid repeating steps indefinitely—provide your best guess even if unsure. Think step by step logically, considering all relevant information before answering.
Table 10: Chain-of-Thought Prompts for Different Models

[Figure 10 source: an Apache ECharts bar chart of average token usage (y-axis: Tokens, 0–7,000) for 18 thinking models, ordered R1-VL-7B, LMM-R1, VL-Rethinker-7B, VL-Rethinker-72B, R1-Onevision-7B-RL, MM-Eureka-7B, MM-Eureka-32B, OpenVLThinker-7B, Vision-R1-7B, Virgo-72B, QvQ-72B-Preview, Kimi-VL-A3B-T, Claude-3.7-T, Seed1.5-VL-T, Gemini-2.5-Flash-T, o4-mini, o1, Gemini-2.5-Pro-T.]
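Operationally, applying the Table 10 prompts amounts to appending a family-specific suffix to the question text before querying the model. A minimal sketch of that step, under assumptions: the `build_query` helper and dictionary keys are illustrative (not from the paper's code), and the InternVL3 prompt is truncated here for brevity.

```python
# Sketch of how per-family CoT prompts (Table 10) are applied at
# evaluation time: the prompt is appended to the question text.
# `build_query` and the family keys are hypothetical helpers.

COT_PROMPTS = {
    # Prompt strings quoted from Table 10.
    "Qwen2.5-VL": "Let's think step by step.",
    "InternVL3": (
        "Answer the preceding question. The last line of your response "
        "should follow this format: 'Answer: $FINAL_ANSWER' (without "
        "quotes), where 'FINAL_ANSWER' is your conclusion based on the "
        "reasoning provided."
        # Remainder of the Table 10 prompt omitted for brevity.
    ),
}

def build_query(question: str, model_family: str, use_cot: bool) -> str:
    """Append the family-specific CoT prompt when CoT mode is enabled."""
    if not use_cot:
        return question
    return f"{question}\n{COT_PROMPTS[model_family]}"

q = "How many triangles are in the figure?"
print(build_query(q, "Qwen2.5-VL", use_cot=True))
```

Comparing `use_cot=True` against `use_cot=False` runs reproduces the "with or w/o CoT prompt" condition of Tab. 9.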
https://arxiv.org/abs/2505.21327v1
Figure 10: Average token usage of open- & closed-source thinking models on MME-Reasoning.

Table 11: Performance of Caption + SoTA Reasoning LLMs. We use GPT-4o to generate a caption of each image in MME-Reasoning.

Model        | Model Capability              | Reasoning Type   | AVG.
             | CAL.  P&E.  PA.   S&T.  CCA.  | DED.  IND.  ABD. |
QwQ-32B      | 48.5  32.9  39.1  37.6  53.5  | 44.4  45.6  35.9 | 41.9
DeepSeek-R1  | 56.9  40.0  41.6  41.8  58.3  | 53.8  43.8  41.5 | 46.9

• Easy: The question typically has a straightforward and quick solution that can be correctly answered by a human expert within 2 minutes.
• Medium: The question generally requires some reasoning steps and one to two rounds of trial and reflection, and can be correctly answered by a human expert within 2 to 5 minutes.
• Hard: The question usually requires more than two attempts and reflections, or involves the use of tools such as auxiliary lines or drafts to support the thought process. It may or may not be solved by a human expert within 10 minutes.

B.2 Reasoning Type Annotation

For each question, we assign a reasoning type label: Deductive, Inductive, or Abductive, based on the dominant reasoning method required in its solution. The labeling criteria are as follows:

• Deductive: Involves deriving a necessary conclusion from given premises and general rules through step-by-step inference. Examples include math problems, physics problems, and certain puzzles.
• Inductive: Involves observing specific phenomena, summarizing general patterns or rules, and extrapolating based on those patterns. Examples include figure series and analogy questions.
• Abductive: Involves forming hypotheses or explanations based on known phenomena and then verifying them. These problems typically have a large solution space. Examples include Sudoku, mate-in-one chess problems, circuit fault analysis, biological pedigree analysis, and some puzzles.
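The two per-question axes defined in B.1 and B.2 can be captured in a small validation helper; a sketch under the assumption that each question carries exactly one label per axis (names here are illustrative, not the released metadata schema):

```python
# Allowed labels for the two annotation axes defined in B.1 and B.2.
DIFFICULTY = ("Easy", "Medium", "Hard")
REASONING_TYPE = ("Deductive", "Inductive", "Abductive")

def validate_labels(difficulty: str, reasoning_type: str) -> bool:
    """Each question carries exactly one label from each axis."""
    return difficulty in DIFFICULTY and reasoning_type in REASONING_TYPE
```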
It should be noted that although the solutions to some puzzles, such as Sudoku, can theoretically be derived through deductive reasoning, in the actual process of human reasoning we often resort to assuming a certain move and then verifying its validity. This hypothesis-verification-backtracking mechanism leads us to consider these a form of abductive reasoning.

B.3 Capability Annotation

For each question, we also assign one or more capability labels based on the primary abilities being tested. The available labels are: Pattern Analysis, Planning and Exploring, Spatial and Temporal, Calculation, and Causal Chain Analysis. A question may have multiple capability labels. The labeling criteria are as follows:

• Pattern Analysis: Requires identifying patterns in shape, color, size, or other visual features within the image.
• Planning and Exploring: Requires explicit planning of the answering process, involving exploration within the solution space and iterative verification or reflection.
• Spatial and Temporal: Requires understanding spatial relationships or temporal sequences represented in the visual input.
• Calculation: Involves performing numerical calculations based on given quantitative conditions to arrive at a correct result.
• Causal Chain Analysis: Requires reasoning about causal relationships across multiple nodes based on limited information, or understanding dynamic processes in the problem and identifying key events.

C Details of Implementation

Some of the data in MME-Reasoning are sourced from ScanNet (Dai et al., 2017), ARKitScenes (Baruch et al., 2021), Video-MME (Fu et al.,
2024), MM-IQ (Cai et al., 2025), and PuzzleVQA (Chia et al., 2024). We further filter most of the data and reformulate the questions. We use gpt-4o-mini to extract the answers from all responses and to judge the answers of free-form questions. The cost fluctuates with the length of the MLLM's response; as an example, extracting and judging the responses of Qwen2.5-VL-72B costs around $0.1. We use VLMEvalKit to evaluate all the models. For models larger than 30B, we use vllm to reduce the inference time. All experiments are conducted on A100 GPUs, except those on closed-source models.

D Details of Evaluation

D.1 Prompts for Answer Extraction

We list our answer extraction prompts in Fig. 11 to Fig. 21, including:

• Fig. 11: Prompt for tasks answering in 'id : answer' format.
• Fig. 12: Prompt for tasks answering in 'coordinates' format.
• Fig. 13: Prompt for tasks answering in 'formula' format.
• Fig. 14: Prompt for multiple-choice tasks.
• Fig. 15: Prompt for points24 tasks.
• Fig. 16: Prompt for hashi puzzles.
• Fig. 17: Prompt for sudoku_4x4 puzzles.
• Fig. 18: Prompt for sudoku_6x6 puzzles.
• Fig. 19: Prompt for skyscraper puzzles.
• Fig. 20: Prompt for yinyang puzzles.
• Fig. 21: Prompt for free-form tasks.

E Examples of MME-Reasoning

We further provide additional case studies in Fig. 22 to Fig. 52, showing both correct and incorrect responses by MLLMs (e.g., selected from GPT-4o, Qwen2.5-VL-72B, o4-mini, Seed1.5-VL-Thinking, and Gemini-2.5-Pro-Thinking). In each figure, we show the original question, reasoning type, difficulty level, and model responses. Overall, we find that "thinking models" demonstrate stronger abilities in exploration, judgment, and reflection. However, they still struggle to arrive at correct answers for many reasoning problems that are simple for humans, indicating that their reasoning ability still needs further improvement. Moreover, the number of tokens consumed by reasoning models increases rapidly.
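The extraction prompts in Figs. 11-21 carry `{question}` and `{response}` placeholders, with literal JSON braces doubled as `{{ }}`; that is exactly the escaping Python's `str.format` expects. A minimal sketch of filling an abridged template of the Fig. 11 kind before sending it to the judge model; the template text is shortened and the helper is illustrative, not the paper's code:

```python
# Fill an answer-extraction template (abridged) with the question and the
# model's raw response. Doubled braces {{ }} survive str.format as literal
# JSON braces, so the example JSON in the template is left intact.
ID_ANSWER_TEMPLATE = (
    "Please read the following example. Then extract the answer from the "
    "model response and type it at the end of the prompt.\n"
    'Extracted answer (json format): {{"A":5,"B":1,"C":2}}\n'
    "Please extract the answer for the following response:\n"
    "Question: {question}\n"
    "Model Response: {response}\n"
    "You should only output the json without any other texts."
)

def fill_template(template: str, question: str, response: str) -> str:
    return template.format(question=question, response=response)

prompt = fill_template(
    ID_ANSWER_TEMPLATE,
    "What are the three numbers?",
    "The possible answer is: A:5, B:1, C:2",
)
```

The filled string is what would then be passed to gpt-4o-mini for extraction.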
Therefore, future research should also focus on balancing the reasoning ability and efficiency of such models.

F Limitation

Despite our best efforts to cover a wide range of multimodal reasoning question types, it remains challenging to comprehensively collect all possible types of reasoning problems that occur in real-world scenarios. This is primarily because gathering and curating high-quality reasoning questions is often a time-consuming and labor-intensive process. Future work is needed to further enrich the diversity of question types and optimize dataset coverage.

https://github.com/open-compass/VLMEvalKit
https://github.com/vllm-project/vllm

Please read the following example. Then extract the answer from the model response and type it at the end of the prompt.
Example:
Question: Each cycle represents a number. You need to find out what the three numbers are. Give a possible answer in the format 'cycle id:number'
Model Response: The possible answer is: A:5, B:1, C:2
Extracted answer (json format): {{"A":5,"B":1,"C":2}}
Please extract the answer for the following response:
Question: {question}
Model Response: {response}
You should only output the json without any other texts.

Figure 11: Prompt for tasks answering in 'id : answer' format.

Please read the following example. Then extract the answer from the model response and type it at the end of the prompt.
Example 1:
Question:
According to the clues, find the corresponding position. Answer in '(row id (A-C), column id (1-3))' format.
Model Response: The possible answer is: (A, 1)
Extracted answer (json format): [{{"row": "A","column": 1}}]
Example 2:
Question: According to the clues, find the two corresponding positions. Answer in '(row id (A-C), column id (1-3))' format.
Model Response: The possible answer is: (A, 1), (B, 3)
Extracted answer (json format): [{{"row": "A","column": 1}},{{"row": "B","column": 3}}]
Please extract the answer for the following response:
Question: {question}
Model Response: {response}
You should only output the json without any other texts.

Figure 12: Prompt for tasks answering in 'coordinates' format.

Please extract the mathematical formula from the following model response and type it at the end of the prompt.
Example:
Question: What is the right equation to solve the problem?
Model Response: The right equation to solve the problem is: 2 + 3 = 7
Extracted answer (json format): {{"equation": "2 + 3 = 7"}}
Please extract the answer for the following response:
Question: {question}
Model Response: {response}
You should only output the json without any other texts.

Figure 13: Prompt for tasks answering in 'formula' format.

Please read the following example. Then extract the answer from the model response and type it at the end of the prompt.
Example 1:
Question: Which answer is right?\n A.1\n B.2\n C.3\n D.4\n Please answer the question and provide the correct option letter, e.g., A, B, C, D, at the end.
Model Response: The possible answer is: A
Extracted answer: A
Example 2:
Question: Which answer is right?\n A.1\n B.2\n C.3\n D.4\n Please answer the question and provide all correct option letter, e.g., A, B, C, D, at the end.
Find all possible answers.
Model Response: The possible answer is: A, C
Extracted answer: [A, C]
Please extract the answer for the following response:
Question: {question}
Model Response: {response}
Extracted answer:
You should only output the answer without any other texts.

Figure 14: Prompt for multiple-choice tasks.

Please read the following examples. Then extract the final formula from the answer to the 24-point game, and type it at the end of the prompt. In the cards, K stands for 13, Q stands for 12, J stands for 11, and A stands for 1. Note you need to use * to represent the multiplication sign, / to represent the division sign.
Example 1:
Question: Given four playing cards (A, 8, 9, K), each with a value, use any combination of addition, subtraction, multiplication, and division to make the number 24. You must use each card exactly once. Give the final answer as a formula.
Model Response: The possible answer is (K - 9 - A) × 8 = 24
Extracted answer: (13-9-1)*8=24
Example 2:
Question: Given four playing cards (3, 8, 9, A), each with a value, use any combination of addition, subtraction, multiplication, and division to make the number 24. You must use each card exactly once. Give the final answer as a formula.
Model Response: The possible answer is 9 \\div 3 \\times A \\times 8 = 24
Extracted answer: 9/3*1*8=24
Please extract the final formula for the following response:
Question: {question}
Model Response: {response}
Extracted answer:
You should only output the final formula without any other texts.

Figure 15: Prompt for points24 tasks.

Extract all bridge connections from the Hashi puzzle solution text and format them as a structured JSON list. Follow
these rules:
1. **Input**:
- `solution`: Text describing bridges between islands using various formats (e.g., "c1 - c3", "a1到g1", "between b2 and b4").
2. **Output Requirements**:
- Return a JSON list of dictionaries in this format:
```JSON
[{{"start": "a1", "end": "b1", "number": 2}}, ...]
```
- Include ALL bridges explicitly described in `solution`.
- Use 1-based row numbers and letter-based columns (e.g., "c3" not "3c").
- Normalize coordinate formats (e.g., "(1,c)" → "c1", "d,4" → "d4").
3. **Parsing Rules**:
- Capture bridge counts (1 or 2) from phrases like:
  - "two bridges"
  - "1 bridge"
  - "double bridge"
- Handle directional phrases:
  - "from X to Y"
  - "X connects to Y"
  - "X-Y bridge"
4. **Edge Cases**:
- The bridge corresponding to the following plot will be skipped directly:
  - No bridges can be parsed
  - Ambiguous connections (unclear start/end)
  - Invalid coordinate formats
**Response Format**:
1. Return ONLY the JSON list.
2. Do not include any additional text, explanations, or formatting beyond the JSON list.
**Examples**:
Solution: "Draw two bridges from a1 to b1, and one bridge between f6-f8"
Output: [{{"start": "a1", "end": "b1", "number": 2}}, {{"start": "f6", "end": "f8", "number": 1}}]
Solution: "Connect c3 with d3 using a double bridge"
Output: [{{"start": "c3", "end": "d3", "number": 2}}]
Solution: {response}
Output:

Figure 16: Prompt for hashi puzzles.

Extract the final answer from the given solution. The solution will contain a detailed solution to a Sudoku-like puzzle, including step-by-step explanations and a final filled grid representation. Your task is to identify and extract only the final answer, which is presented as a matrix (list of lists or equivalent) at the end of the solution.
### Requirements:
1. Only return the final filled matrix, formatted as a JSON list of lists.
2. Do not include any additional text, explanations, or formatting beyond the matrix itself.
3. The final answer usually follows phrases such as 'Final Filled Grid' or 'Summary as a Single Filled Matrix'.
4.
If the content in the given solution cannot be extracted as a matrix, return 'False' directly.
### Example 1:
Solution: To solve the given Jigsaw Sudoku puzzle, we need to ensure that each row, each column, and each 2x2 sub-grid contains the digits 1 through 4 exactly once. Let's fill in the grid step by step.\n\nStep-by-Step Solution:\n1. **Identify the given numbers and their positions:**\n - Top row: 1, 3\n - Second row: 3\n - Third row: (empty)\n - Fourth row: 2, 3, 1\n\n2. **Fill in the top row:**\n - The top row already has 1 and 3. The remaining digits are 2 and 4.\n - The first column already has a 3, so the first cell in the top row cannot be 3. Therefore, it must be 2.\n - The fourth column already has a 3, so the fourth cell in the top row cannot be 3. Therefore, it must be 4.\n - Top row: [2, 1, 4, 3]\n\n### Final Filled Matrix:\n[[2, 1, 4, 3], [1, 4, 2, 3], [4, 2, 1, 3], [2, 3, 4, 1]]
Output: [[2, 1, 4, 3], [1, 4, 2, 3], [4, 2, 1, 3], [2, 3, 4, 1]]
### Example 2:
Solution: 1. **Identify the given numbers and their positions:**\n - Top row: 1, 3\n - Second row: 3\n - Third row: (empty)\n - Fourth row: 2, 3, 1\n\n2. **Fill in the top row:**\n - The top row already has 1 and 3. The remaining digits are 2 and 4.\n - The first column already has a 3, so the first cell in the top row cannot be 3. Therefore, it must be 2.\n
- The fourth column already has a 3, so the fourth cell in the top row cannot be 3. Therefore, it must be 4.\n - Top row: 2, 1, 4, 3\n\n3. **Fill in the second row:**\n - The second row already has a 3. The remaining digits are 1, 2, and 4.\n - The first column already has a 2 and a 3, so the first cell in the second row cannot be 2 or 3. Therefore, it must be 1.\n - The second column already has a 1, so the second cell in the second row cannot be 1. Therefore, it must be 4.\n - The fourth column already has a 3 and a 4, so the fourth cell in the second row cannot be 3 or 4. Therefore, it must be 2.\n - Second row: 1, 4, 2, 3\n\n4. **Fill in the third row:**\n - The third row is empty. The remaining digits are 1, 2, 3, and 4.\n - The first column already has a 2 and a 1, so the first cell in the third row cannot be 2 or 1. Therefore, it must be 4.\n - The second column already has a 1 and a 4, so the second cell in the third row cannot be 1 or 4. Therefore, it must be 2.\n - The third column already has a 4 and a 2, so the third cell in the third row cannot be 4 or 2. Therefore, it must be 1.\n - The fourth column already has a 3 and a 2, so the fourth cell in the third row cannot be 3 or 2. Therefore, it must be 3.\n - Third row: 4, 2, 1, 3\n\n5. **Fill in the fourth row:**\n - The fourth row already has 2, 3, and 1. The remaining digit is 4.\n - Fourth row: 2, 3, 4, 1\n\n### Final Filled Matrix:\n```python\n[\n [2, 1, 4, 3],\n [1, 4, 2, 3],\n [4, 2, 1, 3],\n [2, 3, 4, 1]\n]\n```\n\n### Summary:\nThe completed Jigsaw Sudoku puzzle is:\n```python\n[\n [2, 1, 4, 3],\n [1, 4, 2, 3],\n [4, 2, 1, 3],\n [2, 3, 4, 1]\n]\n```
Output: [[2, 1, 4, 3],[1, 4, 2, 3],[4, 2, 1, 3],[2, 3, 4, 1]]
Solution: {response}
Output:

Figure 17: Prompt for sudoku_4x4 puzzles.

Extract the final answer from the given solution. The solution will contain a detailed solution to a Sudoku-like puzzle, including step-by-step explanations and a final filled grid representation.
Your task is to identify and extract only the final answer, which is presented as a matrix (list of lists or equivalent) at the end of the solution.
### Requirements:
1. Only return the final filled matrix, formatted as a JSON list of lists.
2. Do not include any additional text, explanations, or formatting beyond the matrix itself.
3. The final answer usually follows phrases such as 'Final Filled Grid' or 'Summary as a Single Filled Matrix'.
4. If the content in the given solution cannot be extracted as a matrix, return 'False' directly.
### Example 1:
Solution: To solve the given Jigsaw Sudoku puzzle, we need to ensure that each row, each column, and each 2x3 sub-grid contains the digits 1 through 6 exactly once. Let's solve it step by step.\n\n### Step-by-Step Solution:\n\n1. **Identify
the sub-grids and initial constraints:**\n - The grid is divided into 2x3 sub-grids.\n - We need to fill in the empty cells while satisfying the Sudoku rules.\n\n2. **Fill in the first row:**\n - The first row has the digits 2 and 4 already placed.\n - The remaining digits are 1, 3, 5, and 6.\n - Considering the constraints of the sub-grids and columns, we can place the digits as follows:\n - First row: [5, 3, 2, 1, 6, 4]\n\n3. **Fill in the second row:**\n - The second row has the digits 6 and 1 already placed.\n - The remaining digits are 2, 3, 4, and 5.\n - Considering the constraints, we can place the digits as follows:\n - Second row: [4, 6, 1, 5, 3, 2]\n\n4. **Fill in the third row:**\n - The third row has the digits 1, 3, and 6 already placed.\n - The remaining digits are 2, 4, and 5.\n - Considering the constraints, we can place the digits as follows:\n - Third row: [1, 3, 5, 6, 2, 4]\n\n5. **Fill in the fourth row:**\n - The fourth row has the digits 6, 3, and 1 already placed.\n - The remaining digits are 2, 4, and 5.\n - Considering the constraints, we can place the digits as follows:\n - Fourth row: [2, 5, 6, 4, 3, 1]\n\n6. **Fill in the fifth row:**\n - The fifth row has the digits 4 and 2 already placed.\n - The remaining digits are 1, 3, 5, and 6.\n - Considering the constraints, we can place the digits as follows:\n - Fifth row: [3, 1, 4, 2, 5, 6]\n\n7. **Fill in the sixth row:**\n - The sixth row has the digits 6 and 3 already placed.\n - The remaining digits are 1, 2, 4, and 5.\n - Considering the constraints, we can place the digits as follows:\n - Sixth row: [6, 2, 1, 3, 4, 5]\n\n### Final Filled Matrix:\n\n```python\n[\n [5, 3, 2, 1, 6, 4],\n [4, 6, 1, 5, 3, 2],\n [1, 3, 5, 6, 2, 4],\n [2, 5, 6, 4, 3, 1],\n [3, 1, 4, 2, 5, 6],\n [6, 2, 1, 3, 4, 5]\n]\n```
Output: [[5, 3, 2, 1, 6, 4],[4, 6, 1, 5, 3, 2],[1, 3, 5, 6, 2, 4],[2, 5, 6, 4, 3, 1],[3, 1, 4, 2, 5, 6],[6, 2, 1, 3, 4, 5]]
### Example 2:
Solution: To solve this 6×6 Jigsaw Sudoku puzzle, we need to ensure that:\n\n1.
Each row contains the digits 1–6 exactly once.\n2. Each column contains the digits 1–6 exactly once.\n3. Each 2×3 sub-grid contains the digits 1–6 exactly once.\n\nHere is the completed solution:\n\n### Completed Sudoku Grid:\n```\n1 3 4 2 5 6\n6 2 5 1 3 4\n5 6 2 3 4 1\n3 4 6 5 1 2\n4 5 1 6 2 3\n2 1 3 4 6 5\n```\n\n### Summary as a Matrix:\n```python\n[\n [1, 3, 4, 2, 5, 6],\n [6, 2, 5, 1, 3, 4],\n [5, 6, 2, 3, 4, 1],\n [3, 4, 6, 5, 1, 2],\n [4, 5, 1, 6, 2, 3],\n [2, 1, 3, 4, 6, 5]\n]\n```
Output: [[1, 3, 4, 2, 5, 6],[6, 2, 5, 1, 3, 4],[5, 6, 2, 3, 4, 1],[3, 4, 6, 5, 1, 2],[4, 5, 1, 6, 2, 3],[2, 1, 3, 4, 6, 5]]
Solution: {response}
Output:

Figure 18: Prompt for sudoku_6x6 puzzles.

Extract the final
answer from the given solution. The solution will contain a detailed solution to a Sudoku-like puzzle, including step-by-step explanations and a final filled grid representation. Your task is to identify and extract only the final answer, which is presented as a matrix (list of lists or equivalent) at the end of the solution.
### Requirements:
1. Only return the final filled matrix, formatted as a JSON list of lists.
2. Do not include any additional text, explanations, or formatting beyond the matrix itself.
3. The final answer usually follows phrases such as 'Final Filled Grid' or 'Summary as a Single Filled Matrix'.
4. If the content in the given solution cannot be extracted as a matrix, return 'False' directly.
### Example 1:
Solution: To solve the 3x3 Skyscrapers puzzle, we need to place the numbers 1, 2, and 3 in each row and column while adhering to the given clues. Let's break down the solution step by step.\n\n### Step-by-Step Solution:\n\n1. **Analyze the clues:**\n - Top row: 3, 1, 2\n - Left column: 2, 2, 1\n - Right column: 2, 1, 3\n - Bottom row: 1, 2, 2\n\n2. **Start with the top row:**\n - The top row has a clue of 3 on the left, meaning all three skyscrapers are visible. This implies the order must be 1, 2, 3 from left to right.\n - The top row is: [1, 2, 3]\n\n3. **Analyze the left column:**\n - The left column has a clue of 2 at the top, meaning two skyscrapers are visible. Given the top row is [1, 2, 3], the second row must start with a 3 to block the view of the third row.\n - The left column is: [1, 3, 2]\n\n4. **Analyze the right column:**\n - The right column has a clue of 3 at the bottom, meaning all three skyscrapers are visible. This implies the order must be 1, 2, 3 from top to bottom.\n - The right column is: [3, 2, 1]\n\n5. **Fill in the remaining cells:**\n - The second row has a clue of 2 on the left, and we already have a 3 in the first column. The remaining cells must be 1 and 2 in some order.
Given the right column clue of 2, the second row must be [3, 1, 2].\n - The third row has a clue of 1 on the left, meaning the first cell must be 2 to block the view of the other cells. The remaining cells must be 3 and 1 in some order. Given the right column clue of 3, the third row must be [2, 3, 1].\n\n### Final Solution:\n\nThe completed 3x3 Skyscrapers puzzle is:\n\n```\n1 2 3\n3 1 2\n2 3 1\n```\n\n### Summary as a single filled matrix:\n\n```python\n[[1, 2, 3], [3, 1, 2], [2, 3, 1]]\n```
Output: [[1, 2, 3], [3, 1, 2], [2, 3, 1]]
### Example 2:
Solution: To solve the 4x4 Skyscrapers puzzle, we need to place the numbers 1, 2, 3, and 4 in each row and column while adhering to the given clues. Let's break down the solution step by step.\n\n### Step-by-Step Solution:\n\n1. **Analyze the clues:**\n - Top row: Clue '2' on the left means we can see 2 skyscrapers from the left.\n - Bottom row: Clue '4' on the
right means we can see 4 skyscrapers from the right.\n - Left column: Clue '2' on the top means we can see 2 skyscrapers from the top.\n - Right column: Clue '2' on the bottom means we can see 2 skyscrapers from the bottom.\n\n2. **Start with the bottom row:**\n - Since we can see 4 skyscrapers from the right, the bottom row must be in ascending order from left to right: [1, 2, 3, 4].\n\n3. **Fill the bottom row:**\n - Row 4: [1, 2, 3, 4]\n\n4. **Analyze the top row:**\n - We can see 2 skyscrapers from the left, so the tallest skyscraper (4) must be in the second position from the left to block the view of the third and fourth skyscrapers.\n - Possible configurations: [3, 4, 1, 2] or [2, 4, 1, 3]\n\n5. **Analyze the left column:**\n - We can see 2 skyscrapers from the top, so the tallest skyscraper (4) must be in the second position from the top to block the view of the third and fourth skyscrapers.\n - Possible configurations: [3, 4, 1, 2] or [2, 4, 1, 3]\n\n6. **Combine the clues:**\n - Top row: [3, 4, 1, 2]\n - Left column: [3, 4, 1, 2]\n\n7. **Fill the remaining cells:**\n - Row 1: [3, 4, 1, 2]\n - Row 2: [4, 1, 2, 3]\n - Row 3: [2, 3, 4, 1]\n - Row 4: [1, 2, 3, 4]\n\n### Final Solution:\n\n```\n[[3, 4, 1, 2],\n [4, 1, 2, 3],\n [2, 3, 4, 1],\n [1, 2, 3, 4]]\n```\n\nThis matrix satisfies all the given clues and constraints of the Skyscrapers puzzle.
Output: [[3, 4, 1, 2], [4, 1, 2, 3], [2, 3, 4, 1], [1, 2, 3, 4]]
Solution: {response}
Output:

Figure 19: Prompt for skyscraper puzzles.

Extract the final answer from the given solution. The solution will contain a detailed solution to a Sudoku-like puzzle, including step-by-step explanations and a final filled grid representation. Your task is to identify and extract only the final answer, which is presented as a matrix (list of lists or equivalent) at the end of the solution.
### Requirements:
1. Only return the final filled matrix, formatted as a JSON list of lists.
2.
Do not include any additional text, explanations, or formatting beyond the matrix itself.
3. The final answer usually follows phrases such as 'Final Filled Grid' or 'Summary as a Single Filled Matrix'.
4. If the content in the given solution cannot be extracted as a matrix, return 'False' directly.
### Example 1:
Solution: To solve the Yin-Yang puzzle, we need to place black and white stones at the intersections of the grid lines while adhering to the given rules. Let's analyze the current state of the grid and determine the placement of the remaining stones.\n\n### Step-by-Step Solution:\n\n1. **Initial Setup:**\n - The grid is a 6x6 matrix.\n - Some cells already have black (1) and white (0) stones.\n\n2. **Rule Analysis:**\n - All black stones must be orthogonally connected.\n - All white stones must be orthogonally connected.\n - No 2x2 region can be monochromatic.\n - Existing stones cannot be moved.\n\n3. **Placement Strategy:**\n - Start by ensuring the connectivity of black and white stones.\n - Avoid creating monochromatic 2x2 regions.\n\n### Detailed Placement:\n\n- **Top-left corner (a1):**\n - Place a white stone to avoid a monochromatic 2x2 region with the black
stone at (b1).\n\n- **Top-right corner (f1):**\n - Place a white stone to avoid a monochromatic 2x2 region with the black stone at (e1).\n\n- **Middle section:**\n - Ensure connectivity of black and white stones while avoiding monochromatic 2x2 regions.\n\n### Final Matrix:\n\nAfter placing the stones according to the rules, the final matrix is:\n\n```\n[\n [0, 1, 0, 1, 0, 0],\n [0, 0, 1, 1, 0, 0],\n [1, 0, 1, 0, 1, 0],\n [0, 1, 0, 1, 0, 1],\n [0, 0, 1, 0, 1, 0],\n [1, 0, 0, 1, 0, 0]\n]\n```\n\n### Summary:\n\nThe completed solution for the Yin-Yang puzzle is represented as a single filled matrix:\n\n```python\n[\n [0, 1, 0, 1, 0, 0],\n [0, 0, 1, 1, 0, 0],\n [1, 0, 1, 0, 1, 0],\n [0, 1, 0, 1, 0, 1],\n [0, 0, 1, 0, 1, 0],\n [1, 0, 0, 1, 0, 0]\n]\n```
Output: [[0, 1, 0, 1, 0, 0],[0, 0, 1, 1, 0, 0],[1, 0, 1, 0, 1, 0],[0, 1, 0, 1, 0, 1],[0, 0, 1, 0, 1, 0],[1, 0, 0, 1, 0, 0]]
### Example 2:
Solution: To solve the Yin-Yang puzzle, we need to ensure that the rules are followed:\n\n1. All black stones form a single orthogonally connected group.\n2. All white stones form a single orthogonally connected group.\n3. No 2x2 region is monochromatic.\n4. The positions of the pre-placed stones cannot be changed.\n\n### Step-by-Step Solution:\n\n#### Initial Setup:\nThe given grid is a 6x6 grid with some pre-placed black and white stones.
We'll represent the grid as follows:\n- "1" for black stones.\n- "0" for white stones.\n- Empty cells will be filled as we solve the puzzle.\n\n#### Pre-placed Stones:\nFrom the diagram:\n- Black stones ("1") are at: (a1, b1, c1, c2, d2, d3, e3).\n- White stones ("0") are at: (a3, a4, a5, b4, b5, c4, d4, e4).\n\n#### Solving the Puzzle:\nWe will fill the remaining cells while ensuring the rules are satisfied.\n\n---\n\n### Final Solution:\nAfter solving, the completed grid is as follows:\n\n```\n1 1 1 0 0 0\n0 1 0 1 0 0\n0 1 0 1 1 0\n0 0 0 1 0 1\n0 0 0 0 0 1\n1 0 0 0 1 1\n```\n\n### Summary:\nThe solution as a 6x6 matrix is:\n\n```python\n[\n [1, 1, 1, 0, 0, 0],\n [0, 1, 0, 1, 0, 0],\n [0, 1, 0, 1, 1, 0],\n [0, 0, 0, 1, 0, 1],\n [0, 0, 0, 0, 0, 1],\n [1, 0, 0, 0, 1, 1]\n]\n```
Output: [[1, 1, 1, 0, 0, 0],[0, 1, 0, 1, 0, 0],[0, 1, 0, 1, 1, 0],[0, 0, 0, 1, 0, 1],[0, 0, 0, 0, 0, 1],[1, 0, 0, 0, 1, 1]]
Solution: {response}
Output:

Figure 20: Prompt for yinyang puzzles.

Please read the following example. Then extract the answer from the model response and type it at the end of the prompt.
Example 1:
Question: What value should be? Please answer the question and provide the final value, e.g., 2, 2.10, -3, at the end.
Model Response: The possible value is -4
Extracted answer: -4
Example 2:
Question: What value should be? Please answer the question and provide the final value, e.g., 2, 2.10, -3, at the end.
Model Response: The possible value is $2\sqrt{{3}}$
Extracted answer: 2\sqrt{{3}}
Example 3:
Question: What value should be? Please answer the question and provide the final value, e.g., 2, 2.10, -3, at the end.
Model Response: The possible value is 3.28
Extracted answer: 3.28
Please extract the answer
for the following response:
Question: {question}
Model Response: {response}
Extracted answer:
You should only output the answer without any other texts.

Figure 21: Prompt for free-form tasks.

Reasoning type: Abductive
Difficulty level: Medium
Question: A sealed box with two switches $S_1$, $S_2$ and two lamps $L_1$, $L_2$ visible on its surface. To investigate how the switches and lamps are connected inside the sealed box, we perform the following operations: First, close $S_1$, only lamp $L_1$ lights up; then close $S_2$, both lamps light up; finally, open $S_1$, both lamps go out. The connection method that matches the above experimental results is ( ).
A. Diagram A  B. Diagram B  C. Diagram C  D. Diagram D

Option A: L1 is in series with S1. L2 is in series with S2. These two branches are in parallel. … * Step 3: Open S1 (S2 remains closed). The L1-S1 branch is broken, so L1 goes out. However, the L2-S2 branch is still complete, so current flows through L2, and L2 remains lit. This contradicts the third observation. Therefore, Option A is incorrect.
Option B: L1 and L2 are connected in parallel. S2 is in series with L2. S1 is in series with the entire parallel combination of L1 and the (L2-S2) branch. * Step 1: Close S1 (S2 is open). Current flows through S1, then reaches the junction. It can flow through L1. It cannot flow through the L2-S2 branch because S2 is open. So, only L1 lights up. This matches the first observation. … * Step 3: Open S1. Opening S1 breaks the main circuit path before the current can reach either L1 or the L2-S2 branch. No current flows through either lamp. Both L1 and L2 go out. This matches the third observation. Therefore, Option B is correct.
Option C: L1 is in series with S1. This branch is in parallel with L2. S2 is in series with this entire parallel combination. * Step 1: Close S1 (S2 is open). Since S2 is in the main circuit path and is open, no current can flow through the circuit. Neither L1 nor L2 lights up.
This contradicts the first observation. * Therefore, Option C is incorrect.
Option D: L1 and L2 are in parallel. S1 and S2 are in parallel. The parallel lamp combination is in series with the parallel switch combination. * Step 1: Close S1 (S2 is open). Current flows through the S1 path, then splits to flow through both L1 and L2. Both L1 and L2 light up. This contradicts the first observation. * Therefore, Option D is incorrect.
Final Answer: The final answer is $\boxed{B}$
[Gemini-2.5-Pro-Thinking]

The only network that reproduces "S1 closed, S2 open → only L₁ on; S1 & S2 closed → both on; S1 open, S2 closed → both off" is the "ladder" form in which the two lamps sit in series, with S₁ shunting (i.e. in parallel with) L₂ and S₂ shunting L₁. Among the four drawings that is diagram D.
[o4-mini]

Figure 22: An example of circuit analysis in which the model needs to infer the circuit connection based on the observed phenomena.

Reasoning type: Deductive
Difficulty level: Hard
[GPT-4o]
Question: When you are next to the TV in room A facing the door, navigate
to the sofa by completing the route: 1. Go straight out of the bedroom door, 2. [?], 3. Go straight to the dining table, 4. [?], 5. Go straight to the sofa.
A. Turn left, Turn right  B. Turn right, Turn left  C. Turn left, Turn left  D. Turn right, Turn right

1. **Start:** In room A (bedroom), next to the TV, facing the bedroom door. 2. **Go straight out of the bedroom door:** You are now in the main living area, just outside the bedroom. 3. **[?] Turn right:** To face the dining table area, you need to turn right from the direction you exited the bedroom. 4. **Go straight to the dining table:** You move towards the dining table. 5. **[?] Turn left:** From the dining table area, to face the sofa, you need to turn left. 6. **Go straight to the sofa:** You move towards the sofa. The missing steps are Turn right, Turn left. B

To navigate from the TV in room A to the sofa: 1. Go straight out of the bedroom door. 2. **Turn right** to head toward the dining table. 3. Go straight to the dining table. 4. **Turn left** to face the sofa. 5. Go straight to the sofa. The correct option is **B**.

To work it out, imagine yourself in Room A at the TV, facing the bedroom door (north). 1. You go straight out (north) of the door. 2. To head toward the dining table (which lies to the east), you turn right (now facing east). 3. You go straight to the dining table. 4. From the table you need to get to the sofa (which lies to the south), so you turn right again (now facing south). 5. You go straight to the sofa. That sequence is "Turn right, Turn right," which is option D. Answer: D
[o4-mini]  [Gemini-2.5-Pro-Thinking]

Figure 23: An example of navigation in which the model needs to infer the route from the starting point to the destination.
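Grid answers such as the Yin-Yang solutions extracted by the Fig. 20 prompt can be verified mechanically against the puzzle rules quoted in that prompt's examples: each color forms a single orthogonally connected group, and no 2×2 block is monochromatic. A minimal sketch of such a checker, assuming the grid is a list of 0/1 rows as in the prompts above (illustrative, not the benchmark's scoring code):

```python
# Check an extracted Yin-Yang grid (list of rows of 0/1) against the rules:
# each color one orthogonally connected group, no monochromatic 2x2 block.
from collections import deque

def yinyang_valid(grid):
    n, m = len(grid), len(grid[0])

    def connected(color):
        # BFS from the first cell of this color; all cells must be reached.
        cells = [(r, c) for r in range(n) for c in range(m)
                 if grid[r][c] == color]
        if not cells:
            return True
        seen, queue = {cells[0]}, deque([cells[0]])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < n and 0 <= nc < m
                        and grid[nr][nc] == color and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return len(seen) == len(cells)

    no_mono_2x2 = all(
        len({grid[r][c], grid[r][c + 1],
             grid[r + 1][c], grid[r + 1][c + 1]}) > 1
        for r in range(n - 1) for c in range(m - 1)
    )
    return connected(0) and connected(1) and no_mono_2x2
```

A checker like this also makes the 'return False if no matrix can be extracted' convention in the extraction prompts easy to honor downstream.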
Reasoning type: Abductive Difficulty level: Hard Question: Yin-Yang is a logic puzzle with simple rules and challenging solutions. The rules are simple. Yin-Yang is played on a rectangular grid with no standard size. Some cells start out filled with black or white. The rest of the cells are empty. Your task is to place black and white stones at the intersections of the grid lines such that: 1. All black stones must be orthogonally connected to form a single group. 2. All white stones must be orthogonally connected to form a single group. Below is one valid way to fill all 36 intersections so that 1. every intersection is either black (1) or white (0), 2. the given stones remain unchanged, 3. every black stone is in one orthogonally-connected group, 4. every white stone is in one orthogonally-connected group, 5. no 2×2 block is all black or all white. We use the coordinate system (column a–f, row 1–6), and then display row 1 on the first line, etc. “1” means black, “0” means white. Final 6×6 matrix (rows 1→6, columns a→f): [ [1, 1, 1, 0, 0, 0], [1, 0, 1, 1, 1, 0], [1, 0, 1, 0, 0, 0], [1, 0, 1, 1, 1, 0], [1, 0, 1, 0, 1, 0], [1, 0, 0, 0, 0, 0] ] Check against the givens: - c1=1, d2=1 - b3=0, c3=1, e3=0 - b4=0, c4=1, d4=1, e4=1 - b5=0 - a6=1, b6=0 All are respected. One quickly verifies that the black stones form one orthogonal cluster, the white stones form one orthogonal cluster, and no 2×2 square is monochromatic. 3. No 2x2 region can be monochromatic (i.e., a 2x2 region cannot consist entirely of
white stones). 4. **Do not change the positions of any stones already placed on the grid.** Please complete the solution for the Yin-Yang puzzle in the diagram. At the end of your response, summary your answer **as a single filled matrix** (list of lists or equivalent). Represent the stones as a 6x6 matrix of "0" and "1", where "0" represents a white stone and "1" represents a black stone. o4-mini Figure 24: An example of a Yinyang puzzle in which the model needs to try, judge, and reason to determine the positions of pieces of different colors. Reasoning type: Abductive Difficulty level: Hard Question: You are given a partially filled 6×6 grid. Your task is to fill in the empty cells with digits 1–6 so that: 1. Each row contains each digit 1–6 exactly once. 2. Each column contains each digit 1–6 exactly once. 3. Each 2×3 sub-grid contains each digit 1–6 exactly once. Please complete the solution for the Jigsaw Sudoku puzzle in the diagram. At the end of your response, summary your answer **as a single filled matrix** (list of lists or equivalent). Gemini-2.5-Pro-Thinking Let's denote the grid cells by (row, col) starting from (1,1) to (6,6). The initial grid is: [[ , , , , 4, ], [ , , , , 3, 1], [ , , 6, 3, , ], [ , , 1, 5, , ], [2, 6, , , , ], [ , 1, , , , ]] 1. **Fill (3,5) = 1**: * Row 3 is missing 1, 2, 4, 5. * Column 5 is missing 1, 2, 5, 6. * Subgrid 4 (rows 3-4, cols 4-6) contains (3,4)=3, (4,4)=5. It's missing 1, 2, 4, 6. * Cell (3,5) is the intersection. For Row 3, it can be 1,2,4,5. For Col 5, it can be 1,2,5,6. For Subgrid 4, it can be 1,2,4,6. The common value is 1 (also 2, but 1 is more constrained). * If (3,5) = 1: Row 3 needs 1. Col 5 needs 1. Subgrid 4 needs 1. This is a valid placement. Grid: [[ , , , , 4, ], [ , , , , 3, 1], [ , , 6, 3, 1, ], [ , , 1, 5, , ], [2, 6, , , , ], [ , 1, , , , ]] 2. **Fill Subgrid 4 (rows 3-4, cols 4-6)**: * Contains (3,4)=3, (3,5)=1, (4,4)=5. Missing 2, 4, 6 for cells (3,6), (4,5), (4,6). * Row 3: [?, ?, 6, 3, 1, ?]. Missing 2, 4, 5. So (3,6) must be 2 or 4.
(5 is not in Subgrid 4's missing {2,4,6}). * Col 5: [4, 3, 1, ?, ?, ?]. Missing 2, 5, 6. So (4,5) must be 2 or 6. * Col 6: [?, 1, ?, ?, ?, ?]. Missing 2, 3, 4, 5, 6. So (3,6) can be 2,4. (4,6) can be 2,4,6. * If (3,6)=2: Subgrid 4 needs 4,6 for (4,5),(4,6). (4,5) must be 6 (cannot be 4 due to Col 5 missing 2,5,6). Then (4,6)=4. This combination is (3,6)=2, (4,5)=6, (4,6)=4. * If (3,6)=4: Subgrid 4 needs 2,6 for (4,5),(4,6). (4,5) can be 2 or 6. * If (4,5)=2, then (4,6)=6. This is (3,6)=4, (4,5)=2, (4,6)=6. * If (4,5)=6, then (4,6)=2. This is (3,6)=4, (4,5)=6, (4,6)=2. Let's check the first option: (3,6)=2, (4,5)=6, (4,6)=4. Row 3 becomes [?, ?, 6, 3, 1, 2]. Missing 4, 5. Row 4 becomes [?, ?, 1, 5, 6, 4]. Missing 2, 3.3. **Fill Subgrid 3 (rows 3-4, cols 1-3)**: * Contains (3,3)=6, (4,3)=1. Missing 2, 3, 4, 5 for cells (3,1), (3,2), (4,1), (4,2).
* Row 3 needs {4,5} for (3,1),(3,2). * Row 4 needs {2,3} for (4,1),(4,2). * This assignment means {(3,1),(3,2)}={4,5} and {(4,1),(4,2)}={2,3}. * Consider Col 1: [?, ?, (3,1), (4,1), 2, ?]. (5,1)=2. (3,1) is 4 or 5. (4,1) is 2 or 3. (4,1) cannot be 2. So (4,1)=3. * This implies (4,2)=2. * Consider Col 2: [?, ?, (3,2), (4,2), 6, 1]. (5,2)=6, (6,2)=1. (4,2)=2. (3,2) is 4 or 5. * If (4,1)=3, then (3,1) from {4,5}. Col 1 already has 2,3. If (3,1)=4. * So: (3,1)=4, (3,2)=5, (4,1)=3, (4,2)=2. Check: Subgrid 3: [4,5,6],[3,2,1]. Valid. Grid: [[ , , , , 4, ], [ , , , , 3, 1], [4, 5, 6, 3, 1, 2], [3, 2, 1, 5, 6, 4], [2, 6, , , , ], [ , 1, , , , ]] Figure 25: An example of a Sudoku puzzle in which the model needs to try, judge, and reason to determine the positions of each number. Gemini-2.5-Pro-Thinking 4. **Deductions from filled rows/cols**: * Col 1: [?, ?, 4, 3, 2, ?]. Missing 1, 5, 6. (6,1) is in Subgrid 5 (missing 3,4,5 from {1,2,6}). So (6,1)=5. * Then (1,1), (2,1) are {1,6}. * Col 5: [4, 3, 1, 6, ?, ?]. Missing 2, 5. So {(5,5), (6,5)} = {2,5}. * Row 6: [(6,1)=5, 1, ?, ?, (6,5), ?]. Missing 2,3,4,6. (6,5) must be 2 (as 5 is taken by (6,1)). So (6,5)=2. * Then (5,5)=5. Grid: [[?, , , , 4, ], [?, , , , 3, 1], [4, 5, 6, 3, 1, 2], [3, 2, 1, 5, 6, 4], [2, 6, , , 5, ], [5, 1, , , 2, ]] 5. **Fill Subgrid 1 (rows 1-2, cols 1-3) and Subgrid 2 (rows 1-2, cols 4-6)**: * Col 2: [?, ?, 5, 2, 6, 1]. Missing 3, 4. So {(1,2),(2,2)}={3,4}. * Col 3: [?, ?, 6, 1, ?, ?]. Missing 2,3,4,5. From Subgrid 5, (5,3),(6,3) are {3,4}. So {(1,3),(2,3)}={2,5}. * Row 2: [(2,1), (2,2), (2,3), (2,4), 3, 1]. Missing 2,4,5,6. * (2,1) is 1 or 6. Must be 6 (1 is in row). So (1,1)=1. * (2,2) is 3 or 4. Must be 4 (3 is in row). So (1,2)=3. * (2,3) is 2 or 5. Can be 2 or 5. * This means {(2,3),(2,4)} = {2,5} (remaining missing for Row 2). * If (2,3)=5, then (1,3)=2. And (2,4)=2. * Subgrid 2: [(1,4),4,(1,6)], [(2,4),3,1]. Missing 2,5,6. We have (2,4)=2. So {(1,4),(1,6)}={5,6}. * Row 1: [1,3,2,(1,4),4,(1,6)].
Missing 5,6. So {(1,4),(1,6)}={5,6}. This is consistent.* Col 4: [(1,4), (2,4)=2, 3, 5, ?, ?]. Missing 1,4,6. So (1,4) must be 6. Then (1,6)=5.Grid: [[1, 3, 2, 6, 4, 5], [6, 4, 5, 2, 3, 1], [4, 5, 6, 3, 1, 2], [3, 2, 1, 5, 6, 4], [2, 6, ?, ?, 5, ?], [5, 1, ?, ?, 2, ?]]6. **Fill Subgrid 5 (rows 5-6, cols 1-3) and Subgrid 6 (rows 5-6, cols 4-6)**:* Subgrid 5: [2,6,(5,3)], [5,1,(6,3)]. Missing 3,4. So {(5,3),(6,3)}={3,4}.* Col 4: [6,2,3,5,(5,4),(6,4)]. Missing 1,4. So {(5,4),(6,4)}={1,4}.* Row 5: [2,6,(5,3),(5,4),5,(5,6)]. Missing 1,3,4 (depending on (5,3),(5,4)).* If (5,3)=3: Row 5 needs {1,4} for (5,4),(5,6). Since (5,4) is 1 or 4, this works.* If (5,4)=1, then (5,6)=4. Then (6,3)=4 (from
Subgrid 5). (6,4)=4 (from Col 4). Row 6 has two 4s. IMPOSSIBLE. * So (5,4) must be 4. Then (5,6)=1. This leads to (6,3)=4, (6,4)=1. Row 6: [5,1,4,1,2,?]. Two 1s. IMPOSSIBLE. * So (5,3) must be 4. * Then (6,3)=3. * Row 5: [2,6,4,(5,4),5,(5,6)]. Missing 1,3. So {(5,4),(5,6)}={1,3}. * (5,4) is 1 or 4 (from Col 4). So (5,4)=1. Then (5,6)=3. * Then (6,4)=4 (from Col 4). * Row 6: [5,1,3,4,2,(6,6)]. Missing 6. So (6,6)=6. Final Grid: [[1, 3, 2, 6, 4, 5], [6, 4, 5, 2, 3, 1], [4, 5, 6, 3, 1, 2], [3, 2, 1, 5, 6, 4], [2, 6, 4, 1, 5, 3], [5, 1, 3, 4, 2, 6]] Checking all constraints: * Each row/column has unique digits 1-6. * Subgrid 1: [1,3,2],[6,4,5] -> {1,2,3,4,5,6} OK. … * Subgrid 6: [1,5,3],[4,2,6] -> {1,2,3,4,5,6} OK. The solution is correct. Summary as a single filled matrix: [[1, 3, 2, 6, 4, 5], [6, 4, 5, 2, 3, 1], [4, 5, 6, 3, 1, 2], [3, 2, 1, 5, 6, 4], [2, 6, 4, 1, 5, 3], [5, 1, 3, 4, 2, 6]] Figure 26: An example of a Sudoku puzzle in which the model needs to try, judge, and reason to determine the positions of each number. o4-mini Reasoning type: Deductive Difficulty level: Easy Question: Given the universal set $U=\mathbb{R}$, the sets $A=\{x\mid0<x<9,x\in\mathbb{R}\}$ and $B=\{x\mid-4<x<4,x\in\mathbb{Z}\}$, the relationship is shown in the Venn diagram. How many elements are there in the set represented by the shaded region in the diagram? ### Step 1: Understand the sets - The universal set is \(U=\mathbb{R}\), the set of all real numbers. - \(A=\{x\mid0<x<9,x\in\mathbb{R}\}\): This is the set of all real numbers between 0 and 9 (excluding 0 and 9). - \(B=\{x\mid-4<x<4,x\in\mathbb{Z}\}\): This is the set of integers between -4 and 4 (excluding -4 and 4). Thus, \(B=\{-3,-2,-1,0,1,2,3\}\). ### Step 2: Identify the shaded region The shaded region in the Venn diagram represents \(B\), the set of integers between -4 and 4 (excluding -4 and 4). ### Step 3: Count the elements in \(B\) The elements of \(B\) are \(\{-3,-2,-1,0,1,2,3\}\). Counting these, we find that \(B\) contains **7 elements**. ### Final Answer: The number of elements in the shaded region is **7**.
### Step 1: Define Sets \( A \) and \( B \) - \( A = \{x \mid 0 < x < 9, x \in \mathbb{R}\} \): All real numbers between 0 and 9 (not including 0 and 9). - \( B = \{x \mid -4 < x < 4, x \in \mathbb{Z}\} \): All **integers** \( x \) where \( -4 < x < 4 \). Listing these integers: \( B = \{-3, -2, -1, 0, 1, 2, 3\} \). ### Step 2: Find \( A \cap B \) (the overlap of \( A \) and \( B \)) \( A \cap B \) consists of elements that are in both \( A \) and \( B \). For an element to be in \( A \), it must satisfy \( 0 < x < 9 \). From \( B \), the integers satisfying \( 0 < x < 9 \) are \( 1, 2, 3 \). Thus: \( A \cap B = \{1, 2, 3\} \). ### Step 3: Find \( B \setminus A \) (the shaded region) \( B \setminus A \) is the set of elements in \( B \) that are **not** in \( A \). Subtract \( A \cap B \) from \( B \): \( B \setminus A = B - (A \cap B) = \{-3, -2, -1, 0, 1, 2, 3\} - \{1, 2, 3\} = \{-3, -2, -1, 0\} \). ### Step 4: Count the elements in \( B \setminus A \) The set \( \{-3, -2, -1, 0\} \) has 4 elements. Thus, the number of elements in the shaded region is \(\boldsymbol{4}\). GPT-4o First note: B = { x ∈ ℤ | −4 < x < 4 } = {−3, −2, −1, 0, 1, 2, 3} (7 integers); A = { x ∈ ℝ | 0 < x < 9 }. The shaded region is the part of B lying *outside* A, i.e. B − A. In B the only points in A are 1, 2, 3, so B − A = {−3, −2, −1, 0}, which has 4 elements. Seed1.5-VL-Thinking
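The set arithmetic in this figure is easy to reproduce; the quoted answers differ only in whether the shaded region is read as all of B or as B ∖ A. A small sketch of both counts:

```python
# B is the set of integers strictly between -4 and 4;
# A is the real interval (0, 9), so membership for an integer x is 0 < x < 9.
B = set(range(-3, 4))                       # {-3, -2, -1, 0, 1, 2, 3}
B_minus_A = {x for x in B if not (0 < x < 9)}

print(len(B))                               # -> 7 (count if the shaded region is all of B)
print(sorted(B_minus_A), len(B_minus_A))    # -> [-3, -2, -1, 0] 4 (count for B \ A)
```

Which reading is correct depends on the shading in the Venn diagram image, which is not reproduced in the text.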
Figure 27: An example of a Venn diagram problem in which the model needs to calculate and reason to determine the correct answer. Reasoning type: Abductive Difficulty level: Hard Question: Hashi (Hashiwokakero), also known as Bridges, is a logic puzzle. The rules are simple. Some cells start out with numbers from 1 to 8 inclusive; these are the islands. The rest of the cells are empty. The goal is to connect all of the islands into a single connected group by drawing a series of bridges between the islands. Note: 1. They must begin and end at distinct islands, travelling a straight line in between; 2. They must not cross any other bridges or islands; 3. They may only run orthogonally, i.e., the two endpoints of the bridge must be in the same row or column; 4. At most two bridges connect a pair of islands; 5. The number of bridges connected to each island must match the number on that island. Please complete the solution for the Hashi puzzle in the diagram, and organize your final answer in a suitable structured format. Seed1.5-VL-Thinking ### Hashi Puzzle Solution Bridges are defined by their start and end coordinates, with the number of bridges between each pair (1 or 2, as a maximum of 2 bridges connect two islands). Bridge Connection: Number of Bridges (a,1) ↔ (c,1): 1; (c,1) ↔ (c,5): 2; (c,5) ↔ (a,5): 2; (c,5) ↔ (f,5): 2; (a,5) ↔ (a,7): 2 (a,7) ↔ (c,7): 1; (f,5) ↔ (h,5): 2; (f,5) ↔ (f,7): 2; (f,7) ↔ (h,7): 1; (e,1) ↔ (h,1): 1 (c,1) ↔ (e,1): 1 ### Verification of Island Bridge Counts Each island’s bridge count matches its number: - **(a,1)**: 1 bridge (to (c,1)) ✔ - **(c,1)**: 1 (to (a,1)) + 2 (to (c,5)) + 1 (to (e,1)) = 4 ❌ *Adjustment: Correctly, (c,1) connects to (a,1) (1), (c,5) (2) → total 3. (e,1) connects to (h,1) (1) and is integrated via (c,1) connection. Final validation ensures all constraints are met.* This configuration ensures all islands are connected, bridge counts match island numbers, and no bridges cross or violate constraints.
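The degree constraint of a Hashi answer can be checked mechanically by summing bridge multiplicities per island. A minimal sketch; the island labels and the candidate bridge set below are taken from solutions quoted in this figure, and connectivity and crossing checks are omitted:

```python
# Island labels as reported in the quoted solutions (the puzzle image
# itself is not reproduced here, so these values are taken on trust).
labels = {"a1": 1, "c1": 3, "e1": 1, "h1": 1,
          "a5": 4, "c5": 5, "f5": 4, "h5": 2,
          "a7": 3, "c7": 1, "f7": 2, "h7": 1}

# One candidate bridge set: (island, island, multiplicity).
bridges = [("a1", "c1", 1), ("c1", "e1", 1), ("c1", "c5", 1),
           ("a7", "c7", 1), ("f5", "f7", 1), ("f7", "h7", 1),
           ("h1", "h5", 1), ("f5", "h5", 1),
           ("a5", "c5", 2), ("a5", "a7", 2), ("c5", "f5", 2)]

# Each bridge contributes its multiplicity to both endpoints' degrees.
degree = {island: 0 for island in labels}
for u, v, mult in bridges:
    degree[u] += mult
    degree[v] += mult

print(degree == labels)  # -> True: every island's degree matches its label
```

A full verifier would additionally check that bridges run orthogonally without crossing and that the islands form a single connected group.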
o4-mini Here is one fully-connected, non-crossing placement of bridges: Bridges of multiplicity 1: a1–c1, c1–e1, c1–c5, a7–c7, f5–f7, f7–h7, h1–h5. Bridges of multiplicity 2: a5–c5, a5–a7, c5–f5. Bridges of multiplicity 1 (continued): f5–h5. Check (degree of each island = its label): a1 = 1 (to c1); c1 = 3 (to a1, e1, c5); e1 = 1 (to c1); h1 = 1 (to h5); a5 = 4 (2 to c5, 2 to a7); c5 = 5 (1 from c1, 2 from a5, 2 to f5); f5 = 4 (2 from c5, 1 to f7, 1 to h5); h5 = 2 (1 from f5, 1 from h1); a7 = 3 (2 from a5, 1 to c7); c7 = 1 (from a7); f7 = 2 (1 from f5, 1 to h7); h7 = 1 (from f7). All islands are connected into a single network, no bridges cross, and no pair has more than two bridges. Figure 28: An example of a Hashi puzzle in which the model needs to try, judge, and reason to determine the positions and numbers of bridges. Reasoning type: Abductive Difficulty level: Hard Question: Hashi (Hashiwokakero), also known as Bridges, is a logic puzzle. The rules are simple. Some cells start out with numbers from 1 to 8 inclusive; these are the islands. The rest of the cells are empty. The goal is to connect all of the islands into a single connected group by drawing a series of bridges between the islands. Note: 1. They must begin and end at distinct islands, travelling a straight line in between; 2. They must not cross any other bridges or islands; 3. They may only run orthogonally, i.e., the two endpoints of the bridge must be in the same row or column; 4. At most two bridges connect a pair of islands; 5. The number of bridges connected to each island must match the number on that island. Please complete the solution for the Hashi puzzle in the diagram, and organize your final answer in a suitable structured format. Gemini-2.5-Pro-Thinking 1. **Island a7 (3):** It can only connect vertically to a5 and horizontally to c7. To satisfy the '3', it *must* have a double bridge to a5 and a single bridge to c7. * Connect a7 = a5 (2 bridges). a7 needs 1 more, a5 needs 4-2=2 more. * Connect a7 - c7 (1 bridge). a7 is now complete (2+1=3). c7 is now complete (1 bridge). 2. **Island f7 (2):** It can only connect vertically to f5 and horizontally to h7. To satisfy the '2', it *must* have a single bridge to each.
* Connect f7 - f5 (1 bridge). f7 needs 1 more, f5 needs 4-1=3 more. * Connect f7 - h7 (1 bridge). f7 is now complete (1+1=2). h7 is now complete (1 bridge).…9. **Island c5 (5):** Its neighbours are c1 (vertical), a5 (horizontal), f5 (horizontal), and c7 (vertical). It already has a double bridge to f5. c7 is complete. It needs 3 more bridges from c1 and a5.10. **Connecting a1, c1, a5, c5:**c5 needs 3 bridges from c1 and a5; a5 needs 2 bridges from a1 and c5; c1 needs 2 bridges from a1 and c5; a1 needs 1 bridge from c1 or a5; If a1 connects to c1, then a1 is complete. c1 needs 1 more (from c5). a5 needs
2 (from c5). c5 needs 1 (from c1) + 2 (from a5) = 3. This works. So: a1-c1 (1), c1-c5 (1), a5=c5 (2). Let's check counts: a1 (1 ok), c1 (1+1=2 NO, needs 3), a5 (2 ok), c5 (1+2=3 NO, needs 5). This path fails. If a1 connects to a5, then a1 is complete. a5 needs 1 more (from c5). c1 needs 2 (from c5). c5 needs 1 (from a5) + 2 (from c1) = 3. This works. So: a1-a5 (1), a5-c5 (1), c1=c5 (2). Let's check counts: a1: 1 (to a5) – OK; c1: 1 (to e1) + 2 (to c5) = 3 – OK; e1: 1 (to c1) – OK; a5: 2 (to a7) + 1 (to a1) + 1 (to c5) = 4 – OK; c5: 2 (to f5) + 1 (to a5) + 2 (to c1) = 5 - OK 11. **Final Connectivity Check:** All islands are connected in a single group. No bridges cross. All island counts are satisfied. **Final Answer:** The bridges are connected as follows (using coordinates, '=' indicates a double bridge): a1 - a5 (1 bridge) c1 - e1 (1 bridge) c1 = c5 (2 bridges) h1 - h5 (1 bridge) a5 - c5 (1 bridge) a5 = a7 (2 bridges) c5 = f5 (2 bridges) f5 - f7 (1 bridge) f5 - h5 (1 bridge) a7 - c7 (1 bridge) f7 - h7 (1 bridge) Figure 29: An example of a Hashi puzzle in which the model needs to try, judge, and reason to determine the positions and numbers of bridges. Question: Please sort the pictures in chronological order as shown in the image. (Select from A, B, C, and D) A. 3-1-4-2 B. 2-4-1-3 C. 1-3-4-2 D. 2-1-4-3 Reasoning type: Abductive Difficulty level: Easy GPT-4o 1. Image 2: Shows a blank piece of paper. This is the starting point. 2. Image 4: Shows the initial outline of the roof and the top of the columns being drawn. 3. Image 1: Shows the columns and the base structure being drawn, building upon the outline in image 4. 4. Image 3: Shows the completed drawing of the building. Therefore, the chronological order is 2, 4, 1, 3. B To sort the pictures in chronological order: 1. The drawing starts with the outline of the book (Image 2). 2. The hand begins adding details to the book (Image 1). 3. The columns of the structure are drawn (Image 3). 4. The final details are added to complete the drawing (Image 4). The correct order is **2-1-3-4**, which corresponds to **D**. Gemini-2.5-Pro-Thinking o4-mini Let’s analyze the sequence of actions depicted in each image: 1. In image 1, the person is drawing a detailed structure on the paper. 2. In image 2, the person is drawing a simpler, more basic shape on the paper. 3. In image 3, the person is drawing a detailed structure similar to image 1, but it appears to be a continuation or a different part of the drawing. 4. In image 4, the person is drawing a detailed structure that seems to be a continuation of the drawing in image 2. Based on this analysis, the correct chronological order is: 2 (basic shape) -> 4 (continuation of basic
shape) -> 1 (detailed structure) -> 3 (continuation of detailed structure). Therefore, the correct option is B. Qwen2.5-VL-72B-Instruct The drawing clearly progresses from a blank cover (2) to outlining the pediment (4), then adding the columns (1), and finally filling in the details (3). So the chronological order is 2-4-1-3, which is option B. Answer: B Figure 30: An example of a temporal ordering problem in which the model needs to determine the sequence of images based on the logical relationships of time. Reasoning type: Deductive Difficulty level: Medium Question: As shown in the image, which graph could be the graph of function $f(x)=\frac{3x^2\cos2x}{2^{|x|}}$. Gemini-2.5-Pro-Thinking ### Step 1: Analyze the components of \(f(x)\) 1. **Numerator: \(3x^2\cos 2x\):** - \(3x^2\): This term grows quadratically as \(|x|\) increases. - \(\cos 2x\): This term oscillates between \(-1\) and \(1\) with a period of \(\pi\). - Together, \(3x^2\cos 2x\) results in oscillations with increasing amplitude as \(|x|\) increases. 2. **Denominator: \(2^{|x|}\):** - \(2^{|x|}\) grows exponentially as \(|x|\) increases. This causes f(x) to decay to \(0\) as \(|x|\to\infty\), despite the oscillations in the numerator. ### Step 2: Behavior of \(f(x)\) - Near x=0: The numerator 3x^2\cos 2x approaches 0 because of the x^2 term, so \(f(x)\to0\). - For large |x|: The exponential growth in the denominator dominates, causing \(f(x)\to0\). - Oscillations: The function oscillates due to \cos 2x, but the amplitude of oscillations decreases as |x| increases because of the exponential decay. ### Step 3: Match with the graphs - The correct graph should show: 1. Oscillations around y=0. 2. Amplitude of oscillations decreasing as \(|x|\) increases. 3. The function approaching \(0\) as \(|x|\to\infty\). ### Step 4: Identify the correct graph - **Graph D** matches these characteristics: - It shows oscillations around \(y=0\). - The amplitude of oscillations decreases as \(|x|\) increases. - The function approaches \(0\) as \(|x|\to\infty\). ### Final Answer: **D** ### Step 1: Check if the function is even or odd A function is **even** if \( f(-x) = f(x) \).
Compute \( f(-x) \): f(-x) = \frac{3(-x)^2 \cos(-2x)}{2^{|-x|}} = \frac{3x^2 \cos 2x}{2^{|x|}} = f(x). Thus, \( f(x) \) is even, so its graph is symmetric about the \( y \)-axis. This eliminates options **A** and **D**. ### Step 2: Evaluate \( f(0) \) Substitute \( x = 0 \): f(0) = \frac{3(0)^2 \cos 0}{2^0} = 0. The graph must pass through \( (0, 0) \). Both remaining options (B and C) satisfy this, so we need further analysis. ### Step 3: Analyze \( f(1) \) Substitute \( x = 1 \): f(1) = \frac{3(1)^2 \cos 2}{2^1} = \frac{3 \cos 2}{2}. Since \( \cos 2 \) (in radians, \( 2 \approx 114.6^\circ \)) is negative, \( f(1) < 0 \). Thus, at \( x = 1 \), the graph must lie **below** the \( x \)-axis. ### Step 4: Compare with options B and C - Option B: At \( x = 1 \), the graph is below the x-axis (consistent with \( f(1) < 0 \)). - Option C: At \( x = 1 \), the graph is above the x-axis (inconsistent with \( f(1) < 0 \)). Thus, the correct graph is **B**. GPT-4o Figure 31: An example of a function problem in which the model needs to infer the graph based on the function and provide the correct answer. Reasoning type: Abductive Difficulty level: Hard GPT-4o Question: Skyscrapers, also known as "Towers", is a logic puzzle with simple rules and challenging solutions. The rules are simple. The objective to place numbers representing skyscraper heights in all empty cells of the grid according to the rules. 1. The height of the skyscrapers is from 1 to the size of the grid, i.e. 1 to 4 for a 4x4 puzzle. 2. You cannot have two skyscrapers with the same height on the same row or column. ### Step-by-Step Solution: #### Initial Observations: 1. The grid is 4x4, so skyscraper heights range from 1 to 4. 2. Numbers already placed: - Row 2, Column 1: 1. - Row 2, Column 3: 2. - Row 4, Column 4: 2. 3.
Clues on the edges: - Top row: [No clue, 2, No clue, No clue] - Left column: [No clue, 1, No clue, No clue] - Right column: [No clue, No clue, No clue, 2] - Bottom row: [No clue, 2, No clue, No clue] #### Step 1: Fill in the grid systematically. We will use logical deduction to fill in the grid while ensuring all rules are followed. #### Final Solution: After filling in the grid step by step, the completed solution is: 4 3 2 1 / 1 4 2 3 / 3 1 4 2 / 2 2 3 4 3. The numbers on the sides of the grid indicate how many skyscrapers would you see if you look in the direction of the arrow. Remember, higher skyscrapers will block the skyscrapers behind them. If there is no number in a position, it means there is no requirement for the number of visible skyscrapers in that direction. 4. Place numbers in each cell to indicate the height of the skyscrapers. 5. You cannot change any numbers that already exist in the grid. Please complete the solution for the 4x4 Skyscrapers puzzle in the diagram. At the end of your
response, summary your answer **as a single filled matrix** (list of lists or equivalent). One finds that the only way to satisfy • row 2-from-the-left=1 ⇒ the first entry of row 2 must be 4 • row 4-from-the-right=2 ⇒ the 4 in row 4 must sit at column 2, and the right-most cell <4 • column 3-from-below=2 ⇒ the 4 in column 3 must be in row 3 and still have every row-and-column a permutation of (1,2,3,4), is [[1,2,3,4], [4,3,2,1], [3,1,4,2], [2,4,1,3]] o4-mini Figure 32: An example of a Skyscraper puzzle in which the model needs to try, judge, and reason to determine the number of each position. Reasoning type: Deductive Difficulty level: Medium Question: In the pyramid $P-ABCD$, $PC=3$, $\angle BCD=120^\circ$, $PA \perp \text{plane } ABCD$, $PD \perp CD$, $PB \perp CB$, and $PB=PD=2\sqrt{2}$. Find the value of $PA$. A. $1$ B. $\sqrt{2}$ C. $\frac{2\sqrt{3}}{3}$ D. $\sqrt{5}$ o4-mini Let $PA=h$. Since $PA \perp \text{plane } ABCD$, we have $PA \perp AB$ and $PA \perp AD$. Thus, $\triangle PAB$ and $\triangle PAD$ are right-angled triangles. In $\triangle PAB$, … $8=h^2+AB^2$. In $\triangle PAD$, $8=h^2+AD^2$. -> $AB^2=AD^2$. … As $CB$ is perpendicular to ($PA$ and $PB$ in the plane $PAB$, $CB$ must be perpendicular to the plane $PAB$. Therefore, $\angle ABC=90^\circ$. We are given $PD \perp CD$. Since $PA \perp \text{plane } ABCD$, $PA \perp CD$. As $CD$ is perpendicular to two intersecting lines ($PA$ and $PD$) in the plane $PAD$, $CD$ must be perpendicular to the plane $PAD$. Therefore, $\angle ADC=90^\circ$. Now consider the triangles $\triangle PBC$ and $\triangle PDC$. $PC^2=PB^2+BC^2$ -> $BC=1$. $PC^2=PD^2+CD^2$. -> $CD=1$. We have found the following properties: $AB=AD$, $BC=1$, $CD=1$, $\angle ABC=90^\circ$, $\angle ADC=90^\circ$, $\angle BCD=120^\circ$. The sum of angles in a quadrilateral is $360^\circ$. So $\angle DAB=…=60^\circ$. In $\triangle ABD$, we have $AB=AD$ and $\angle DAB=60^\circ$. … Therefore, it must be an equilateral triangle. $AB=AD=BD$. In $\triangle BCD$, we know $BC=1$, $CD=1$, and $\angle BCD=120^\circ$. We can use the Law of Cosines to find $BD$. $BD^2=1^2+1^2-2(1)(1)\cos(120^\circ)$. So $BD=\sqrt{3}$. Since $\triangle ABD$ is equilateral, $AB=AD=BD
=\sqrt{3}$. Finally, we can find $PA=h$ using the equation $8=h^2+AB^2$. So $PA=\sqrt{5}$. Final Answer: The final answer is $\boxed{\sqrt{5}}$ Gemini-2.5-Pro-Thinking To fix coordinates, put the base ABCD in the plane z=0 and take A=(0,0,0), P=(0,0,h), B=(b,0,0). Since PB⊥CB one finds that C must lie directly “above” B in the y–direction, say C=(b,1,0). Then: PB²=b²+h²=(2√2)²=8, PC²=b²+1+h²=3²=9 ⇒ b²+h²=8, and so 1+h²+b²=9 ⇒ 1+(8)=9 checks, and thus b²+h²=8. Now let D=(d₁,d₂,0). The conditions PD⊥CD ⇒ (P–D)·(D–C)=0, PD²=d₁²+d₂²+h²=8, ∠BCD=120° give a small system which one can solve (for instance by setting X=d₁–b, Y=d₂–1, etc.). The only positive solution forces b=√3, h=PA=√5. Thus PA=√5, which is choice D. Answer: D Figure 33: An example of solid geometry problems in which the model needs to perceive, calculate, and reason to arrive at the final answer. Reasoning type: Deductive Difficulty level: Easy Question: A certain home appliance company randomly selected 50 employees from its total of 200 sales employees to investigate sales performance. The sales amounts are all within the interval $[5,25]$ (unit: million yuan). The data is divided into 5 groups: $[5,9)$, $[9,13)$, $[13,17)$, $[17,21)$, $[21,25]$, and the following frequency distribution histogram was obtained. Which of the following statements is correct? A. The value of $a$ in the frequency distribution histogram is $0.06$ B. The median sales amount of all sales employees is estimated to be $15$ C. The number of sales employees with sales amounts in $[9,13)$ is estimated to be $64$ D. The 75th percentile of sales amounts for all sales employees is estimated to be $17$.
o4-mini Let the class-width be h=4. If the histogram is of relative-frequency densities, then for each class i the “density’’ f_i times h gives the class’s relative frequency, and summing over all five classes must give 1: (0.02+a+0.09+0.03+0.02)·4=1, 0.16+4a=1, 4a=0.84, a=0.21. That however does not match the picture, which shows the middle bar exactly halfway between 0.03 and 0.09 – i.e. at 0.06. In fact the two dashed lines are at 0.03 and 0.09, and the bar labeled a sits midway. Hence a=(0.03+0.09)/2=0.06, so A is the only statement that is (visually) correct. Answer: A ### Step 1: Determine the value of \(a\) The total area under the histogram represents the total frequency, which is 1. The area of each rectangle is given by the height (frequency density) multiplied by the width (4). The areas of the rectangles are: - For the interval \([5,9)\): \(0.02\times4=0.08\) … The sum of these areas is 1: \[0.08+4a+0.36+0.12+0.12=1\] -> \[a=0.08\] ### Step 2: Determine the median sales amount Since the total frequency is 1, the median corresponds to the 0.5 cumulative frequency. - For the interval \([5,9)\): \(0.08\) … The exact value can be calculated as: \[13+\frac{0.5-0.40}{0.36}\times4=13+\frac{10}{9}\approx14.44\], B is incorrect. ### Step 3: Determine the number of sales employees in the interval \([9,13)\) Since the sample size is 50, the number of employees in this interval is: \[0.32\times50=16\] Since the sample is randomly selected, we can estimate that the number of sales employees with sales amounts in the interval \([9,13)\) in the entire population of 200 employees is: \[0.32\times200=64\] So, statement C is correct. ### Step 4: Determine the 75th percentile of sales amounts We see that the 75th percentile lies in the interval \([17,21)\). \[17+\frac{0.75-0.76}{0.12}\times4=17-\frac{1}{30}\approx16.97\] Statement D is incorrect. The correct statement is C. Therefore, the answer is: \boxed{C} Qwen2.5-VL-72B-Instruct Figure 34: An example of statistical problems in which the model needs to perceive, calculate, and reason to arrive at the final answer.
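The histogram arithmetic in this figure can be recomputed from the bar heights as read off in one of the quoted solutions (heights 0.02, a, 0.09, 0.03, 0.03 over class width 4); since the histogram image is not reproduced here, those readings are an assumption. A quick sketch:

```python
# Class edges and the bar heights (frequency densities) as read off in one
# quoted solution; the bar for [9, 13) is the unknown a. Class width is 4
# and the class frequencies (height * 4) must sum to 1.
edges = [5, 9, 13, 17, 21, 25]
known = {0: 0.02, 2: 0.09, 3: 0.03, 4: 0.03}
a = (1 - 4 * sum(known.values())) / 4          # ~0.08
heights = [known.get(i, a) for i in range(5)]
freqs = [h * 4 for h in heights]               # ~[0.08, 0.32, 0.36, 0.12, 0.12]

# Statement C: estimated employees in [9, 13) out of 200.
print(round(freqs[1] * 200))                   # -> 64

# Median by linear interpolation inside the class containing cumulative 0.5.
cum, i = 0.0, 0
while cum + freqs[i] < 0.5:
    cum += freqs[i]
    i += 1
median = edges[i] + (0.5 - cum) / freqs[i] * 4
print(round(median, 2))                        # -> 14.11
```

Under these readings the median falls strictly inside [13, 17) rather than at 15, consistent with statement B being rejected.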
Reasoning type: Inductive Difficulty level: Medium Question: Select a figure from amongst the Answer Figures which will continue the same series as established by the five Problem Figures. (Choose from A, B, C, D, and E) at the end. Seed1.5-VL-Thinking To solve this, we need to identify the pattern in the **Problem Figures** and determine which **Answer Figure** continues the sequence. ### Observing the Problem Figures: 1. Each figure contains two symbols. 2. The first symbol alternates between **O, X, S, C**, and repeats. 3. The second symbol alternates between **=, △, □, ★**, and repeats. ### Pattern Analysis: - The first symbol in the sequence is **O → X → S → C**. After **C**, the next symbol should be **O**. - The second symbol in the sequence is **= → △ → □ → ★**. After **★**, the next symbol should be **=**. ### Correct Answer: The next figure should have **O** as the first symbol and **=** as the second symbol. This matches **Answer Figure A**. ### Final Answer: **A** GPT-4o ### Step 1: Analyze the Top Symbols and Bottom Rows Let the Problem Figures be \( P1, P2, P3, P4, P5 \). - \( P1 \): Top = \( O \); Bottom = \( [x, =, \Delta] \) … - \( P5 \): Top = \( C \); Bottom = \( [C, \square, S] \) ### Step 2: Identify the Pattern for the Bottom Row’s Second Element Observe that the **second element of the bottom row** in \( P(n) \) equals the **top symbol of \( P(n-2) \)**: - \( P3 \) bottom second element = \( O \) (which is \( P1 \) top). … - \( P5 \) bottom second element = \( \square \) (which is \( P3 \) top). ### Step 3: Apply the Pattern to the Answer Figure For the 6th figure (\( P6 \)), the second element of the bottom row must equal the top symbol of \( P(6-2) = P4 \). - \( P4 \) top = \( S \), so \( P6 \) bottom second element = \( S \). ### Step 4: Match with Answer Figures Check the bottom rows of the Answer Figures: - **A**:
\( [C, \uparrow, \square] \) → Second element = \( \uparrow \) (not \( S \)). - **B**: \( [C, S, \uparrow] \) → Second element = \( S \) (matches the pattern). - **C**: \( [C, S, \square] \) → Second element = \( S \), but the third element does not follow the sequence. - **D**: \( [C, \bigstar, \uparrow] \) → Second element = \( \bigstar \) (not \( S \)). - **E**: \( [C, \uparrow, \bigstar] \) → Second element = \( \uparrow \) (not \( S \)). Thus, the figure that continues the series is **B**. Answer: B Figure 35: An example of a pattern-finding problem in which the model needs to analyze and identify the pattern in order to provide the correct answer. Reasoning type: Deductive Difficulty level: Medium GPT-4o Question: When you are working on the computer, in which direction would the whiteboard not be? A. Back B. Left C. Front * **Analyze the setup:** Image 1 shows a desk with a computer setup (keyboard, screen implied/shown in other images) and a blue chair positioned for someone to work at the desk. To the right of this desk setup is a large whiteboard mounted on the wall. A window is visible to the left of the desk. * **Determine orientation:** When someone is sitting in the blue chair and "working on the computer", they are facing the desk/monitors. In this orientation: * The computer/desk is in **Front** * The window is to the **Left**. * The whiteboard is to the **Right**. * The area behind the chair is the **Back**. * **Answer the question:** The question asks in which direction the whiteboard would *not* be, relative to someone working at the computer. * Is it in the Back? No. * Is it to the Left? No. * Is it in the Front? No. * **Evaluate the options:** The whiteboard is located to the Right. Therefore, it is *not* in the Back (A), *not* to the Left (B), and *not* in the Front (C). All three options list directions where the whiteboard is not located. However, typically in multiple-choice questions, there's one best answer. The most distinct area occupied by something else during the action "working on the computer" is the Front, which is occupied by the computer itself. Therefore, the whiteboard is definitively *not* in fro The whiteboard is to the right of the person working on the computer. Therefore, it is not behind them (Back), not to their left (Left), and not directly in front of them (Front). Since the computer screen is typically considered "Front" when working, the whiteboard is not in the Front. C When working on the computer, the whiteboard would not be to the **left**. It is positioned on the wall to the **front** of the desk. **Correct option: B** Gemini-2.5-Pro-Thinking Figure 36: An example of a problem involving the determination of relative spatial relationships, in which the model needs to understand space and reason about the positional relationships within it. Qwen2.5-VL-72B-Instruct Reasoning type: Deductive Difficulty level: Easy Question: The image demonstrates the process of using wood to produce ash (containing $K_2CO_3$) and preparing potassium hydroxide. Based on the diagram, answer the following question: What is the chemical formula of the substance that can be recycled in the reaction process o4-mini 1. **Calcium Carbonate to Calcium Oxide:** - Calcium carbonate (CaCO₃) is heated at high temperature to produce calcium oxide (CaO) and carbon dioxide (CO₂). \text{CaCO}_3 \xrightarrow{\text{high temperature}} \text{CaO} + \text{CO}_2 2. **Calcium Oxide to Calcium Hydroxide:** - CaO reacts with H₂O to form Ca(OH)₂. \text{CaO} + \text{H}_2\text{O} \rightarrow \text{Ca(OH)}_2 3. **Calcium Hydroxide and Potassium Carbonate Reaction:** - Ca(OH)₂ reacts with K₂CO₃ to form KOH and CaCO₃. \text{Ca(OH)}_2 + \text{K}_2\text{CO}_3 \rightarrow 2\text{KOH} + \text{CaCO}_3 4. **Wood to Plant Ash:** - Wood is burned to produce plant ash, which contains K₂CO₃. 5. **Plant Ash to Lye:** - Plant ash is mixed with water and stirred to produce lye, which contains K₂CO₃. From the flowchart, we can see that CaCO₃ is produced in the reaction between calcium hydroxide and potassium carbonate, and it is also the starting material for the process. The chemical formula of the substance is \boxed{CaCO_3}.
To determine the recyclable substance, analyze the flow of materials in the diagram: 1. Initially, **calcium carbonate ($\ce{CaCO_3}$)** is heated to produce calcium oxide. 2. Through subsequent reactions (forming calcium hydroxide, reacting with $\ce{K_2CO_3}$ in lye), **calcium carbonate ($\ce{CaCO_3}$)** is regenerated as a product. Since calcium carbonate is a reactant at the start and a product at the end (allowing it to re-enter the reaction process), the chemical formula of the recyclable substance is $\boldsymbol{\ce{CaCO_3}}$. Seed1.5-VL-Thinking
The "spent" precipitate is calcium carbonate, CaCO₃, which can be heated (to CaO) and then slaked back to Ca(OH)₂ to run the process again.
Figure 37: An example of a chemical reaction process inference problem, in which the model needs to infer the substances involved based on chemical knowledge.
Gemini-2.5-Pro-Thinking
Question: The following is a pedigree chart of a certain monogenic genetic disease. It is known that $I_1$ carries the pathogenic gene of this hereditary disease. If $III_3$ marries a male with the same genotype as $III_2$ and they have a phenotypically normal son, then the probability that the son carries the pathogenic gene is __________.
First, let's identify the genotypes of the individuals in the pedigree chart. … - \(II_3\) is normal, so her genotype is \(Aa\). - \(II_4\) is normal, so his genotype is \(Aa\). Now, let's look at the next generation: … - \(III_3\) is normal, so her genotype is \(Aa\) (since \(II_3\) is \(Aa\) and \(II_4\) is \(Aa\)). Now, \(III_3\) (genotype \(Aa\)) marries a male with the same genotype as \(III_2\) (genotype \(Aa\)). The possible genotypes of their offspring are: - \(AA\) - \(Aa\) - \(aa\). Since the son is phenotypically normal, his genotype can be either \(AA\) or \(Aa\). The probability of each genotype is: - \(AA\) = \(\frac{1}{4}\) - \(Aa\) = \(\frac{1}{2}\) - \(aa\) = \(\frac{1}{4}\). Since the son is normal, the probability that he is \(AA\) is \(\frac{1}{3}\) and the probability that he is \(Aa\) is \(\frac{2}{3}\). Therefore, the probability that the son carries the pathogenic gene is \(\frac{2}{3}\). The answer is \(\boxed{\frac{2}{3}}\).
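As a cross-check, the conditional probability in this pedigree can be computed by direct enumeration. This is an illustrative sketch (not part of the original figure), assuming the genotype distribution derivable from the chart: III-3 is a normal daughter of an Aa × Aa cross (AA with probability 1/3, Aa with 2/3) and her husband is Aa:

```python
from fractions import Fraction

# Mother III-3 is a normal child of an Aa x Aa cross, so given she is
# unaffected she is AA with probability 1/3 and Aa with probability 2/3.
mother = {"AA": Fraction(1, 3), "Aa": Fraction(2, 3)}
father = "Aa"  # same genotype as III-2

p_normal = Fraction(0)   # P(son unaffected)
p_carrier = Fraction(0)  # P(son unaffected AND a carrier, i.e. Aa)

for m_geno, p_m in mother.items():
    for m_allele in m_geno:          # each maternal allele, prob 1/2
        for f_allele in father:      # each paternal allele, prob 1/2
            child = "".join(sorted(m_allele + f_allele))  # "AA", "Aa" or "aa"
            p = p_m * Fraction(1, 4)
            if child != "aa":        # phenotypically normal
                p_normal += p
                if child == "Aa":
                    p_carrier += p

print(p_carrier / p_normal)  # 3/5
```

The enumeration prints 3/5. Under the simpler assumption that III-3 is certainly Aa, the same loop gives 2/3, the value reached in the answer above.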
Qwen2.5-VL-72B-Instruct 1. **Determine the mode of inheritance:** … the disease must be autosomal recessive. 2. **Determine the genotypes of relevant individuals:** … * III-2 is normal, son of II-1 (A_) and II-2 (aa). … Genotype: **Aa**. * III-3 is a normal daughter of II-3 (Aa) and II-4 (Aa) …. The probability of being AA is 1/3, and the probability of being Aa is 2/3 (among the normal offspring). 3. **Set up the cross and condition:** … We need to find the probability that this normal son carries the pathogenic gene (is Aa). 4. **Calculate the probability using conditional probability:** We want P(Son is Aa | Son is Normal). * Case 1: III-3 is AA (Probability = 1/3) … * P(Son is Aa in this case) = P(Aa son) = 1/4. * Case 2: III-3 is Aa (Probability = 2/3) … * P(Son is Aa in this case) = P(Aa son) = 1/4. Now, calculate the overall probabilities: * P(Son is Normal) = P(Son is Normal | III-3 = AA) * P(III-3 = AA) + P(Son is Normal | III-3 = Aa) * P(III-3 = Aa) = 5/12. * P(Son is Aa) = P(Son is Aa | III-3 = AA) * P(III-3 = AA) + P(Son is Aa | III-3 = Aa) * P(III-3 = Aa) = 1/4. Finally, calculate the conditional probability: * P(Son is Aa | Son is Normal) = P(Son is Aa) / P(Son is Normal) = 3/5. Final Answer: The final answer is $\boxed{3/5}$. Reasoning type: Abductive. Difficulty level: Hard.
Figure 38: An example of a genetic inference problem, in which the model needs to infer probabilities based on a pedigree chart.
Reasoning type: Abductive. Difficulty level: Hard. Question: When this shape is folded to produce a cube, which is the only one of the following that cannot be produced?
First, note that in the net there are exactly two faces carrying the diagonal-triangle decoration, and they meet along one edge of the net. When you fold up the cube those two "half-square triangle" faces must end up sharing a cube edge, not forming all four triangles on one face. In choice A the front face of the completed cube is shown as one big square made of four triangles, which would require all four of those triangular pieces to be on the same face, which is impossible, since in the net they lie on two distinct faces. Choices B, C and D can all be realized by folding that net (the "hidden" circles or dots just end up on the back faces in some cases), but A demands you collapse two net-faces into a single cube-face. Hence the only nonrealizable cube is A. o4-mini
Correct Answer: C **1. Identify Opposite Faces:** * The top square with the single black dot (let's call it T) is opposite the bottom square which is divided into four rectangles with a black dot in the top-left rectangle (let's call it B). * The middle-left square divided diagonally (ML) is opposite the middle-right square divided into four rectangles with a black dot in the top-right rectangle (MR). * The middle-center square divided diagonally (MC) is opposite the far-right square divided vertically with an open circle in the right half (MFR). **Summary of Opposite Pairs:** * T (Dot) <-> B (Divided, dot top-left). * ML (Diagonal \) <-> MR (Divided, dot top-right). * MC (Diagonal /) <-> MFR (Divided, circle right). **2. Analyze Each Option:** * **Cube A:** Shows faces T (top), ML (left), MFR (front). These three faces are not opposite pairs and can potentially meet at a vertex. Checking their adjacencies and orientations based on the net confirms this cube can be formed. * **Cube B:** Shows faces T (top), MR (front), B (right). Faces T and B are identified as opposite faces in the net. Opposite faces cannot be adjacent on a folded cube. Therefore, Cube B cannot be formed. * **Cube C:** Shows faces MC (top), MFR (left), MR (front). These three faces are not opposite pairs.
Their adjacencies and orientations are consistent with the net, so this cube can be formed.* **Cube D:** Shows
Source: https://arxiv.org/abs/2505.21327v1
faces MC (top), ML (front), B (right). These three faces are not opposite pairs. Their adjacencies and orientations are consistent with the net, so this cube can be formed. **Conclusion:** Cube B is the only option that shows two opposite faces (T and B) as adjacent, which is impossible when folding the net into a cube. **B** Gemini-2.5-Pro-Thinking
Figure 39: An example of spatial reasoning, in which the model needs to infer the folded cube based on its unfolded diagram.
Reasoning type: Abductive. Difficulty level: Medium. Question: Given an image of a 4x4 maze: Cells are arranged in 4 rows (0-3 from top to bottom) and 4 columns (0-3 from left to right). Walls between cells are drawn as thick black lines. The start cell is marked 'S', the exit cell 'E'. Your task is to compute a valid path from S to E using moves Up, Down, Left, Right without crossing any wall. Prefer the shortest path if multiple exist. Output: a. A comma-separated move list (e.g. 'Right,Up,Left,Down'). Be careful not to cross any of the thick solid black lines!
o4-mini Here is a shortest legal route from S at (0,2) to E at (3,0) (never crossing any thick line): Right, Down, Down, Left, Left, Left, Down
To find a valid path from 'S' to 'E', let's analyze the maze step by step: 1. **Start at 'S' (2, 3)**: - The cell 'S' is located at row 2, column 3. 2. **Move Left**: - From 'S' (2, 3), we can move left to (2, 2). 3. **Move Left again**: - From (2, 2), we can move left to (2, 1). 4. **Move Up**: - From (2, 1), we can move up to (1, 1). 5. **Move Up again**: - From (1, 1), we can move up to (0, 1). 6. **Move Right**: - From (0, 1), we can move right to (0, 2). 7. **Move Down**: - From (0, 2), we can move down to (1, 2). 8. **Move Down again**: - From (1, 2), we can move down to (2, 2). 9. **Move Right**: - From (2, 2), we can move right to (2, 1). 10. **Move Right again**: - From (2, 1), we can move right to 'E' (2, 0). So, the valid path is: **Right, Left, Left, Up, Up, Right, Down, Down, Right, Right**. However, this path is not the shortest. Let's find the shortest path: 1.
**Start at 'S' (2, 3)**: - The cell 'S' is located at row 2, column 3. 2. **Move Left**: - From 'S' (2, 3), we can move left to (2, 2). 3. **Move Left again**: - From (2, 2), we can move left to (2, 1). 4. **Move Up**: - From (2, 1), we can move up to (1, 1). 5. **Move Up again**: - From (1, 1), we can move up to (0, 1). 6. **Move Right**: - From (0, 1), we can move right to (0, 2). 7. **Move Down**: - From (0, 2), we can move down to (1, 2). 8. **Move Down again**: - From (1, 2), we can move down to (2, 2). 9. **Move Left**: - From (2, 2), we can move left to 'E' (2, 1). So, the shortest valid path from 'S' to 'E' is: **Left, Left, Up, Up, Right, Down, Down, Left**. Therefore, the final answer is: **Left, Left, Up, Up, Right, Down, Down, Left** Qwen2.5-VL-72B-Instruct
Figure 40: An example of a maze problem, in which the model needs to try, judge, and provide the correct path.
Reasoning type: Abductive. Difficulty level: Medium. Question: Given four playing cards, each with a value as shown in the image, use any combination of addition, subtraction, multiplication, and division to make the number 24. You must use each card exactly once. Give the final answer as a formula.
Here are the values of the cards: 7 of Hearts = 7, 10 of Hearts = 10, Jack of Diamonds = 11, King of Hearts = 13. We need to use the numbers 7, 10, 11, and 13 exactly once, combined with addition, subtraction, multiplication, and division, to reach the target value of 24. Here is one
possible solution: 1. Subtract 11 from 13: (13 - 11) = 2. 2. Multiply the result by 7: 7 * 2 = 14. 3. Add 10 to the result: 14 + 10 = 24. The formula is: **7 * (13 - 11) + 10 = 24** Gemini-2.5-Pro-Thinking
(13-11)*(10-7)=24 GPT-4o
o4-mini (7-10/(11-13))*(13-11)=24
7*(13-11)+10=24 Seed1.5-VL-Thinking
Figure 41: An example of the 24-point game, in which the model needs to try different combinations and calculations to arrive at the final answer.
Qwen2.5-VL-72B-Instruct. Reasoning type: Deductive. Difficulty level: Easy.
Question: As shown in the figure, a semicircle has a radius of $3$. Points $C$ and $D$ are trisection points of the diameter $AB$, and points $E$ and $F$ are trisection points of the arc $AB$. What is the value of $\vec{CF}\cdot\vec{DE}$? Please answer the question and provide the final value, e.g., 1.23, -1.34, 1.45, at the end.
Gemini-2.5-Pro-Thinking
Let the semicircle be centered at the origin $O=(0,0)$ in the Cartesian coordinate system. The diameter $AB$ lies on the x-axis. The radius is given as $R=3$. The coordinates of the endpoints of the diameter are $A=(-3,0)$ and $B=(3,0)$. The length of the diameter $AB$ is $2R=6$, and $AC=CD=DB$, so the length of each part is $6/3=2$. The coordinate of $C$ is found by moving a distance of 2 from $A$ towards $B$: $C=A+(2,0)=(-3+2,0)=(-1,0)$. The coordinate of $D$: $D=C+(2,0)=(-1+2,0)=(1,0)$. The points $F$ and $E$ divide the arc $AB$ such that arc($BF$) = arc($FE$) = arc($EA$), so $\angle BOF=\angle FOE=\angle EOA=180^\circ/3=60^\circ$. The coordinates of $F$ are $(R\cos\theta_F, R\sin\theta_F)=(3\cos(60^\circ), 3\sin(60^\circ))=(3/2, 3\sqrt{3}/2)$. The coordinates of $E$ are $(R\cos\theta_E, R\sin\theta_E)=(3\cos(120^\circ), 3\sin(120^\circ))=(-3/2, 3\sqrt{3}/2)$. The vector $\vec{CF}$ is calculated as $F-C$: $\vec{CF}=(3/2-(-1), 3\sqrt{3}/2-0)=(5/2, 3\sqrt{3}/2)$. The vector $\vec{DE}$ is calculated as $E-D$: $\vec{DE}=(-3/2-1, 3\sqrt{3}/2-0)=(-5/2, 3\sqrt{3}/2)$. $\vec{CF}\cdot\vec{DE}=(5/2)(-5/2)+(3\sqrt{3}/2)(3\sqrt{3}/2)=-25/4+(9\times 3)/4=1/2$. The value of $\vec{CF}\cdot\vec{DE}$ is $1/2$. Final Answer: The final answer is $\boxed{0.5}$
1.
**Identify the coordinates of the points:** - The center of the semicircle is at the origin \((0, 0)\). - The radius of the semicircle is 3, so the diameter \(AB\) is 6 units long. - \(C\) is at \((-2, 0)\) and \(D\) is at \((2, 0)\). - \(E\) is at \((3\cos 60^\circ, 3\sin 60^\circ)=\left(\frac{3}{2},\frac{3\sqrt{3}}{2}\right)\); \(F\) is at \((3\cos 120^\circ, 3\sin 120^\circ)=\left(-\frac{3}{2},\frac{3\sqrt{3}}{2}\right)\). 2. **Calculate the distance \(CF\):** \(CF = \sqrt{\left(-2 - \left(-\frac{3}{2}\right)\right)^2 + \left(0 - \frac{3\sqrt{3}}{2}\right)^2} = \sqrt{7}\). 3. **Calculate the distance \(DE\):** \(DE = \sqrt{\left(2 - \frac{3}{2}\right)^2 + \left(0 - \frac{3\sqrt{3}}{2}\right)^2} = \sqrt{7}\). 4. **Calculate \(CF \cdot DE\):** \(CF \cdot DE = \sqrt{7} \cdot \sqrt{7} = 7\). Therefore, the value of \(CF \cdot DE\) is \(\boxed{7}\).
Figure 42: An example of a geometry problem, in which the model needs to calculate and reason to arrive at the final answer.
Reasoning type: Abductive. Difficulty level: Hard. Question: Can you place the integers from 1 through 12 in the circles (id from A to L) of the 6-pointed star so that the sum of the numbers in each of the six rows (e.g., A-D-G-K) is 26? Please answer in the form of 'circle id:number'.
o4-mini The problem asks us to place integers from 1 to 12 in the 12 circles (labeled A to L) of a 6-pointed star such that the sum of the numbers in each of the six rows is 26. Let's identify the six rows based on the structure of a hexagram and the example A-D-G-K. The points A, B, E, H, K, L are the outer vertices (tips of the star), and C, D, F, G, I, J are the inner vertices (forming a hexagon). A row consists of two outer points and two inner points. The six rows are: R1: A-D-G-K. R2: E-G-J-L. R3: K-J-I-H. R4: L-I-F-B. R5: H-F-C-A. R6: B-C-D-E. Let $v_X$ be the integer value in circle X. Each $v_X$ must be a unique integer from 1 to 12. The sum of all integers is $1+2+...+12=\frac{12\times13}{2}=78$. Each of the 6 rows must sum to 26. The sum of numbers in all rows is $6\times26=156$. If we sum the values in all 6 rows, each circle's value $v_X$ is counted as many times as it appears in a row. Observing the row definitions, each letter (A through L) appears in exactly two rows. For example, A appears in R1 (A-D-G-K) and R5 (H-F-C-A). This is consistent: $2\times\sum v_X=2\times78=156$. Let $P_o=v_A+v_B+v_E+v_H+v_K+v_L$ be the sum of outer points. Let $P_i=v_C+v_D+v_F+v_G+v_I+v_J$ be the sum of inner points. $P_o+P_i=78$. Consider the sum of rows R1, R3, R5: $(v_A+v_D+v_G+v_K)+(v_K+v_J+v_I+v_H)+(v_H+v_F+v_C+v_A)=3\times26=78$. This sum can be rewritten as $2(v_A+v_H+v_K)+(v_C+v_D+v_F+v_G+v_I+v_J)=78$. So, $2(v_A+v_H+v_K)+P_i=78$. Similarly, for rows R2, R4, R6: $(v_E+v_G+v_J+v_L)+(v_L+v_I+v_F+v_B)+(v_B+v_C+v_D+v_E)=78$. This sum is $2(v_B+v_E+v_L)+P_i=78$. Comparing these, we must have $v_A+v_H+v_K=v_B+v_E+v_L$. Let this sum be $X$. Then $P_o=(v_A+v_H+v_K)+(v_B+v_E+v_L)=X+X=2X$. So $P_o$ must be an even number. And $2X+P_i=78$, which is $P_o+P_i=78$. We need to find a set of 6 distinct integers for $P_o$ from $\{1,...,12\}$ whose sum is even, and which can be partitioned into two sets of 3 ($A,H,K$ and $B,E,L$), each summing to $X=P_o/2$. The remaining 6 integers form $P_i$.
Gemini-2.5-Pro-Thinking One convenient way to see that it works is to check all six line-sums of four: A–C–F–H = 7+2+6+11 = 26; A–D–G–K = 7+4+10+5 = 26; B–C–D–E = 8+2+4+12 = 26; H–I–J–K = 11+9+1+5 = 26; B–F–I–L = 8+6+9+3 = 26; E–G–J–L = 12+10+1+3 = 26; and you see each of 1…12 is used exactly once.
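That line-sum check can be scripted; the rows and circle values below mirror the six sums just listed (a verification sketch, not part of the original figure):

```python
# Verify one proposed star assignment: every row of four circles sums to 26
# and the numbers 1..12 are each used exactly once.
values = {"A": 7, "B": 8, "C": 2, "D": 4, "E": 12, "F": 6,
          "G": 10, "H": 11, "I": 9, "J": 1, "K": 5, "L": 3}
rows = ["ACFH", "ADGK", "BCDE", "HIJK", "BFIL", "EGJL"]

assert sorted(values.values()) == list(range(1, 13))
for row in rows:
    print(row, "->", sum(values[c] for c in row))  # each row prints 26
```

The same loop works for any other candidate assignment, e.g. the A:1 … L:3 solution derived later in this figure.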
In the asked format: A:7 B:8 C:2 D:4 E:12 F:6 G:10 H:11 I:9 J:1 K:5 L:3 Seed1.5-VL-Thinking A:1, B:3, C:5, D:6, E:2, F:9, G:7, H:11, I:4, J:8, K:12, L:10
Figure 43: An example of a number-filling problem, in which the model needs to try, judge, reflect, and reason to find an answer that meets the requirements.
Let's try $P_o=\{1,2,3,4,5,9\}$. Sum $P_o=24$. So $X=12$. Partition $P_o$ into two sets of 3, each summing to 12: Set 1: $\{v_A,v_H,v_K\}$. Set 2: $\{v_B,v_E,v_L\}$. $\{1,2,9\}$ sums to 12. $\{3,4,5\}$ sums to 12. This is a valid partition. So, let $\{v_A,v_H,v_K\}=\{1,2,9\}$ and $\{v_B,v_E,v_L\}=\{3,4,5\}$. The inner points $P_i$ are the remaining integers: $\{6,7,8,10,11,12\}$. Sum $P_i=54$. Check: $P_o+P_i=24+54=78$. This is correct. The sums of pairs of outer points in lines R1, R3, R5 determine sums of pairs of inner points: $v_D+v_G=26-(v_A+v_K)$. $v_J+v_I=26-(v_H+v_K)$. $v_F+v_C=26-(v_A+v_H)$. These three pairs $\{v_D,v_G\}$, $\{v_J,v_I\}$, $\{v_F,v_C\}$ must be disjoint and use all numbers in $P_i$. The sums of pairs from $P_i=\{6,7,8,10,11,12\}$ must be in range $[6+7,11+12]=[13,23]$. $v_A+v_K$, $v_H+v_K$, $v_A+v_H$ must be in range $[26-23,26-13]=[3,13]$. Let's assign $v_A,v_H,v_K$ from $\{1,2,9\}$. Try $v_A=1,v_H=9,v_K=2$. (This is one of $3!=6$ permutations for this set.) $v_A+v_K=1+2=3$. So $v_D+v_G=26-3=23$. $v_H+v_K=9+2=11$. So $v_J+v_I=26-11=15$. $v_A+v_H=1+9=10$. So $v_F+v_C=26-10=16$. The sums for inner pairs are $23,15,16$. These are all in the range $[13,23]$. Inner points $P_i=\{6,7,8,10,11,12\}$. Pair for 23: $(11,12)$. This uses values 11 and 12. Pair for 15: Remaining values are $\{6,7,8,10\}$. $(7,8)$. This uses values 7 and 8. Pair for 16: Remaining values are $\{6,10\}$. $(6,10)$. This uses values 6 and 10. This is a consistent set of inner pairs: $\{v_D,v_G\}=\{11,12\}$, $\{v_J,v_I\}=\{7,8\}$, $\{v_F,v_C\}=\{6,10\}$. So far: $v_A=1,v_H=9,v_K=2$. Values for $\{v_B,v_E,v_L\}$ are a permutation of $\{3,4,5\}$. Values for $\{v_C,v_F\}$ are a permutation of $\{6,10\}$. Values for $\{v_D,v_G\}$ are a permutation of $\{11,12\}$. Values for $\{v_J,v_I\}$ are a permutation of $\{7,8\}$. Let's assign specific values for inner points and check the remaining three line sums: $v_D=11,v_G=12$ (arbitrary choice, could be swapped). $v_J=7,v_I=8$ (arbitrary choice). $v_C=6,v_F=10$ (arbitrary choice). Current specific assignments: A=1, C=6, D=11, F=10, G=12, H=9, I=8, J=7, K=2. Now check R2, R4, R6 using $v_B,v_E,v_L$ as a permutation of $\{3,4,5\}$. R2: $v_E+v_G+v_J+v_L=26 \implies v_E+12+7+v_L=26 \implies v_E+v_L=7$. From $\{3,4,5\}$, the pair summing to 7 is $(3,4)$. So $\{v_E,v_L\}=\{3,4\}$. This means $v_B=5$. R4: $v_L+v_I+v_F+v_B=26 \implies v_L+8+10+v_B=26 \implies v_L+v_B=8$. Since $v_B=5$, $v_L+5=8 \implies v_L=3$. This assignment $v_L=3$ is consistent with $\{v_E,v_L\}=\{3,4\}$. So, $v_L=3$, which implies $v_E=4$. And $v_B=5$. We have a complete assignment: A=1, B=5, C=6, D=11, E=4, F=10, G=12, H=9, I=8, J=7, K=2, L=3. Let's check that all numbers from 1 to 12 are used exactly once: Yes, all are used. Final check for all 6 row sums: … The solution is: A:1, B:5, C:6, D:11, E:4, F:10, G:12, H:9, I:8, J:7, K:2, L:3. Gemini-2.5-Pro-Thinking
Figure 44: An example of a number-filling problem, in which the model needs to try, judge, reflect, and reason to find an answer that meets the requirements.
Reasoning type: Abductive. Difficulty level: Medium. Question: Select a figure from the given four alternatives which fits exactly into Figure-X to form a complete square.
The only piece whose outer contour can form the missing half of the square (with a straight outer edge on the right and exactly complementary "jagged" inner edge to X) is option C. Answer: C o4-mini Seed1.5-VL-Thinking
1. **Figure-X** has specific indentations and angles. The correct complement must "fill in" these gaps to form a square's straight sides and closed shape. 2. **Analyzing options**: **A**: The shape does not align with Figure-X's notches; its angles and cutouts do not match. **B**: The contours of B precisely match the missing sections of Figure-X. When combined, B fills the indentations and completes the square's edges. **C**: The shape does not align with Figure-X's unique notches; it fails to close the square. **D**: The top-centered indentation does not match Figure-X's side-based notches and cannot form a square. Thus, the correct option is **B**.
1. **Figure-X Analysis**: It has a vertical line on the left, a horizontal line at the top, and a diagonal line connecting the top right corner to the bottom left corner. 2. **Option A Analysis**: It has a vertical line on the right, a horizontal line at the bottom, and two diagonal lines connecting the bottom left corner to the top right corner. - When placed next to Figure-X, it does not form a complete square. 3. **Option B Analysis**: It has a vertical line on the right, a horizontal line at the bottom, and a diagonal line connecting the bottom left corner to the top right corner. - When placed next to Figure-X, it forms a complete square. 4. **Option C Analysis**: It has a vertical line on the right, a horizontal line at the bottom, and a diagonal line connecting the bottom left corner to the top right corner. - When placed next to Figure-X, it does not form a complete square. 5. **Option D Analysis**: It has a vertical line on the right, a horizontal line at the bottom, and two diagonal lines connecting the bottom left corner to the top right corner. - When placed next to Figure-X, it does not form a complete square. Based on the analysis, the correct option that fits exactly into Figure-X to form a complete square is: \boxed{B} Qwen2.5-VL-72B-Instruct
Figure 45: An example of a construction problem, in which the model needs to understand spatial relationships and reason to arrive at the correct answer.
Reasoning type: Abductive. Difficulty level: Medium. Question: "Mate in One": You are given a chess position where it is **Black's turn to move**. Your task is to make a move that results in **checkmate** in one move. ## Allowed Moves - Only legal moves are allowed, meaning moves that do not violate the standard rules of chess. - You must only provide a single move that results in checkmate. - The move can involve any piece (king, queen, rook, bishop, knight, or pawn). ## Notes - The move you provide must **immediately checkmate** the opponent's king. - The move should not just check, but must guarantee that
the opponent has no legal moves to escape the check. Please provide your move in the format of **standard algebraic notation**: In chess, moves are written using standard algebraic notation, which includes the following components: **notation of piece moved – destination square** - **Piece**: The piece is represented by a capital letter: - **K** for King - **Q** for Queen - **R** for Rook - **B** for Bishop - **N** for Knight - **P** for Pawn (No letter initial is used for pawns, so e4 means "pawn moves to e4".) - **Capture**: If the move involves a capture, an **"x"** is placed before the destination square. For example, "Qxh7" means the queen captures on h7. - **Checkmate**: checkmate is indicated by a **"#"** (e.g., "Qh7#").
o4-mini The one-and-only winning move is Rc1# The black rook simply drops down to c1, delivering immediate mate. Seed1.5-VL-Thinking
Qwen2.5-VL-72B-Instruct The move that results in checkmate in one move is: **Rxc2#** This move involves the white rook on c3 capturing the black pawn on c2, delivering checkmate to the black king. The black king has no legal moves to escape the check, as it is trapped by the white rook on c2 and the white queen on a5. Ra7#
Figure 46: An example of a Mate-on-one problem, in which the model needs to try, judge, reflect, and reason to find the right answer.
Reasoning type: Deductive. Difficulty level: Medium. Question: Fold and subtract parts along the dotted lines shown in the figure, and choose the shape that is most similar to the unfolded shape of Figure Z among A, B, C, and D.
o4-mini Qwen2.5-VL-72B-Instruct To determine the correct answer, we need to follow the folding and subtraction steps as indicated in the figures X and Y, and then compare the resulting shape to the options A, B, C, and D. 1. **Figure X:** - The square is folded along the diagonal dotted line. - The part of the square that is outside the folded triangle is subtracted. - After unfolding, the resulting shape will have a triangular cutout along the diagonal. 2.
**Figure Y:** - The square is folded along the vertical dotted line. - The part of the square that is outside the folded rectangle is subtracted. - After unfolding, the resulting shape will have a rectangular cutout along the vertical line. 3. **Combining the results from X and Y:** - The final shape will have both a triangular cutout along the diagonal and a rectangular cutout along the vertical line. 4. **Comparing with the options:** - Option A: This shape has a triangular cutout along the diagonal but no rectangular cutout along the vertical line. - Option B: This shape has a triangular cutout along the diagonal and a rectangular cutout along the vertical line, matching the combined result from X and Y. - Option C: This shape has a triangular cutout along the diagonal but no rectangular cutout along the vertical line. - Option D: This shape has a triangular cutout along the diagonal but no rectangular cutout along the vertical line. The shape that is most similar to the unfolded shape of Figure Z is Option B. The correct option is \boxed{B}.
The correct unfolding is choice A. D. Seed1.5-VL-Thinking
Figure 47: An example of a paper-cut problem, in which the model needs to understand spatial relationships and reason to find the correct answer.
Reasoning type: Inductive. Difficulty level: Hard. Question: What number should replace the question mark? To solve this, we need to identify
the pattern in the given circles. ### First Circle: The numbers are 6, 12, and 9. The relationship is: \( 6 + 12 = 18 \), and \( 18 ÷ 2 = 9 \). This matches the middle number. ### Second Circle: The numbers are 16, 4, and 8. The relationship is: \( 16 + 4 = 20 \), and \( 20 ÷ 2 = 8 \). This matches the middle number. ### Third Circle: The numbers are 2, 28, and ?. The relationship should follow the same pattern: \( 2 + 28 = 30 \), and \( 30 ÷ 2 = 15 \). Thus, the missing number is **15**. ### Final Answer: **15** GPT-4o
One convenient way is to note that in both of the given circles: right-hand value = (top × left) ÷ 8. Check: • First circle: (6 × 12) ÷ 8 = 72 ÷ 8 = 9. • Second circle: (16 × 4) ÷ 8 = 64 ÷ 8 = 8. Hence in the third circle ? = (top × left) ÷ 8 = (2 × 28) ÷ 8 = 56 ÷ 8 = 7. Answer: 7 o4-mini
Seed1.5-VL-Thinking To determine the number that replaces the question mark, we analyze the pattern in the circles. Each circle has three numbers, and we check the relationship between them. For the first circle (numbers 6, 12, 9): \(\frac{6 \times 12}{9} = \frac{72}{9} = 8\). For the second circle (numbers 16, 4, 8): \(\frac{16 \times 4}{8} = \frac{64}{8} = 8\). Both circles follow the pattern: \(\frac{\text{top number} \times \text{left number}}{\text{right number}} = 8\). Applying this pattern to the third circle (numbers 2, 28, ?): Let the missing number be \(x\). Then: \(\frac{2 \times 28}{x} = 8\). Solving for \(x\): \(\frac{56}{x} = 8 \implies x = \frac{56}{8} = 7\). 7
Figure 48: An example of a number pattern game, in which the model needs to calculate and find the pattern of the numbers, and finally reason out the correct answer.
Reasoning type: Abductive. Difficulty level: Medium. Question: Remove 2 matches leaving 2 squares of different sizes. Please answer the removed match id and give one possible answer.
Okay, let's analyze the structure. Initially, there are 5 squares: 1. Small top-left (A, C, F, D). 2. Small top-right (B, D, G, E). 3. Small bottom-left (F, H, K, I). 4. Small bottom-right (G, I, L, J). 5. Large outer square (A, B, E, J, L, K, H, C). We need to remove 2 matches to leave exactly 2 squares of different sizes. This means we need to keep the large outer square and one of the small squares. To keep the large square, we must *not* remove any of the outer matches (A, B, C, E, H, J, K, L). Therefore, the matches we remove must be from the inner set: D, F, G, I. If we remove two adjacent inner matches (like D and F, or D and G, or F and I, or G and I), we break three small squares but leave one small square and the large square intact. Let's try removing **D and F**: * Removing D breaks the top-left and top-right small squares. * Removing F breaks the top-left and bottom-left small squares. * The top-left square is broken twice. The top-right and bottom-left are broken once. * The bottom-right square (G, I, L, J) remains intact. * The large outer square (A, B, E, J, L, K, H, C) remains intact. * We are left with the large square and the small bottom-right square. These are 2 squares of different sizes. **Answer:** Removed match IDs: **D and F** Gemini-2.5-Pro-Thinking
o4-mini One way to see it is to notice that if you keep the entire outer "frame" (A, B, C, E, H, J, K, L) you get a big 2×2 square, and if you keep the bottom-right little cell (G, I, J, L) you get a 1×1 square. All the interior edges are D (the top-half of the middle vertical) and F (the left-half of the middle horizontal). So Remove: D and F. Remaining squares: Large 2×2 square → A, B, C, E, H, J, K, L. Small 1×1 square → G, I, J, L.
GPT-4o To leave two squares of different sizes by removing two matches, one possible solution is: ### Remove matches: - **B** - **I** ### Result: - A large square remains in the top-left corner (formed by matches A, C, F, H, and K). - A smaller square remains in the bottom-right corner (formed by matches G, J, L, and E).
Figure 49: An example of a moving-match problem, in which the model needs to try, judge, reflect, and reason to find the right answer.
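The "remove D and F" claim is easy to verify mechanically, treating each square as the set of matches on its boundary (a sketch; the square-to-match mapping follows the labeling used in the answers above):

```python
# Each square is the set of matches on its boundary (labels as above).
squares = {
    "top-left":     {"A", "C", "F", "D"},
    "top-right":    {"B", "D", "G", "E"},
    "bottom-left":  {"F", "H", "K", "I"},
    "bottom-right": {"G", "I", "L", "J"},
    "outer":        {"A", "B", "E", "J", "L", "K", "H", "C"},
}
removed = {"D", "F"}

# A square survives only if none of its boundary matches was removed.
surviving = [name for name, edges in squares.items() if not (edges & removed)]
print(surviving)  # ['bottom-right', 'outer'] - two squares of different sizes
```

Swapping in `removed = {"D", "G"}` or the other adjacent inner pairs confirms the alternative solutions the same way.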
Reasoning type: Deductive. Difficulty level: Hard. Question: How many triangles are there in the given figure? A. 36 B. 40 C. 44 D. 48
GPT-4o o4-mini To see why the total is 48, it helps to break the triangles into two classes: 1. "Right-triangles" whose hypotenuse runs along one
of the two big diagonals, and whose legs are horizontal & vertical grid-lines. Along each diagonal you can choose a segment of length 1, 2, 3 or 4 (in small-square units). For a segment of length k there are exactly 2 such right-triangles (one on each "side" of the diagonal). Since each diagonal has: k=1: 4 segments → 4·2 = 8 triangles; k=2: 3 segments → 3·2 = 6; k=3: 2 segments → 2·2 = 4; k=4: 1 segment → 1·2 = 2; that's 8+6+4+2 = 20 triangles on each diagonal, or 40 total so far. 2. "Isosceles" triangles whose two equal sides both lie on the diagonals and meet at the center (2,2), with a horizontal or vertical base. There are two sizes in each of the four compass-pointing directions (up, down, left, right), for 2·4 = 8 more. Adding 40 + 8 gives 48. Answer: D.
To determine the total number of triangles in the given figure, we need to count all possible triangles of different sizes. ### Step-by-step counting: 1. **Small triangles**: - The figure is divided into 16 small squares, and each square is further divided into 4 small triangles. - Total small triangles = \( 16 \times 4 = 64 \). 2. **Larger triangles formed by combining two small triangles**: - Each square contains 2 larger triangles formed by combining two small triangles. - Total larger triangles = \( 16 \times 2 = 32 \). 3. **Triangles formed by combining four small triangles**: - Each square contains 1 triangle formed by combining four small triangles. - Total triangles of this type = \( 16 \times 1 = 16 \). 4. **Triangles formed by combining multiple squares**: - Additional triangles are formed by combining triangles across multiple squares. These include triangles spanning rows, columns, and diagonals. - After careful analysis, the total number of such triangles is **12**. ### Total triangles: Adding all the triangles together: 64 + 32 + 16 + 12 = 124. However, the options provided do not match this total. Please verify the figure or options.
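For reference, the arithmetic of the 48-triangle argument can be reproduced directly. This sketch only re-adds the counts stated in that argument; it is not an independent geometric enumeration:

```python
# Right-triangles along one big diagonal: a segment of length k (k = 1..4)
# contributes 2 triangles, and there are (5 - k) such segments.
per_diagonal = sum(2 * (5 - k) for k in range(1, 5))
print(per_diagonal)            # 20 per diagonal

# Two diagonals, plus 2 isosceles triangles in each of 4 directions.
total = 2 * per_diagonal + 2 * 4
print(total)                   # 48, i.e. option D
```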
Figure 50: An example of a counting problem, in which the model needs to logically arrange the reasoning steps and find the correct answer.
Question: What is the minimum number of different colours required to paint the given figure such that no two adjacent regions have the same colour? A. 3 B. 4 C. 5 D. 6
The key is to note that all the "wedges" (the radial slices) come in an even-length cycle at each of the three rings, plus the central disc. Label the colors C₀, C₁, C₂. One can do the following: 1. Color the very center C₀. 2. On the innermost ring of 6 (or 8) wedges you then cannot use C₀, but a cycle of even length can be 2-coloured, say alternating C₁ and C₂. 3. The next ring up: each wedge forbids just the color of the wedge immediately below, so each has two choices among {C₀, C₁, C₂}; again arranging these choices you can 2-colour that ring (using only C₀ plus one of {C₁, C₂}). 4. The outermost ring is handled the same way against the ring just inside it. In this way only three colors are ever used, and one checks easily that no two adjacent regions clash. Hence 3 colours suffice, and obviously you need at least 3 (the inner cycle by itself already needs 2 plus the center needs a third). Answer: A
The figure can be visualized as a graph where each region is a vertex and edges connect adjacent regions. The goal is to find the chromatic number of this graph, which is the smallest number of colors needed to color the vertices such that no two adjacent vertices share the same color. Let's break down the figure: 1. The figure consists of concentric circles divided into segments. 2. The innermost circle is a single region. 3. The next circle is divided into 4 segments. 4. The outermost circle is divided into 8 segments. We can label the regions as follows: - The innermost circle is region A. - The next circle is divided into regions B1, B2, B3, and B4.
- The outermost circle is divided into regions C1, C2, C3, C4, C5, C6, C7, and C8. Now, let's analyze the adjacency: - Region A is adjacent to all 4 regions B1, B2, B3, and B4. - Each region Bi (i=1,2,3,4) is adjacent to two regions Cj (j=1,2,...,8). To ensure no two adjacent regions have the same color, we can use the following coloring strategy: - Color region
A with color 1. - Color regions B1, B2, B3, and B4 with colors 2, 3, 4, and 2 respectively (since B1 and B3 are not adjacent, and B2 and B4 are not adjacent). - Color regions C1, C2, C3, C4, C5, C6, C7, and C8 with colors 3, 4, 2, 3, 4, 2, 3, and 4 respectively (alternating colors to ensure no two adjacent regions have the same color). Therefore, the minimum number of different colors required is \boxed{4}. The correct option is B. Qwen2.5-VL-72B-Instruct Reasoning type: Deductive. Difficulty level: Medium. o4-mini
Figure 51: An example of a coloring game, in which the model needs to plan reasonably and find the minimum number of colors needed.
Question: There are a total of 20 cards, divided into two types: black and white. Each type has 10 cards, numbered from 0 to 9. The rules of the game are as follows: (1) The player who gets the cards arranges them in front of themselves in ascending order from left to right. The cards must be placed face down. (2) If there are cards with the same number in both black and white, the black card is placed on the left, and the white card is placed on the right. This game involves guessing the numbers on the face-down cards placed in front of other players. As shown in the image, some card numbers are visible. The question is: what are the numbers on the cards marked by star? Reasoning type: Abductive. Difficulty level: Hard
### Rules Recap:
1. Cards are arranged in ascending order from left to right.
2. If there are cards with the same number in both black and white, the black card is placed on the left, and the white card is placed on the right.

### Observations:
1. The player with the star card has three visible cards: **6 (black), 6 (white), and 9 (black)**.
2. The star card is located to the left of the black 6, meaning it must be a number smaller than 6.
3. The cards are arranged in ascending order, so the star card must be the smallest number among the remaining cards.

### Remaining Cards:
- The total cards are numbered from 0 to 9, with one black and one white card for each number.
- The visible cards are:
  - Black: 6, 9
  - White: 6
- The remaining black cards are: 0, 1, 2, 3, 4, 5, 7, 8.
- The remaining white cards are: 0, 1, 2, 3, 4, 5, 7, 8, 9.

### Conclusion:
The star card is the **black 4**, as it is the largest number smaller than 6 and must be placed to the left of the black 6.

### Final Answer: **4**

GPT-4o

Seed1.5-VL-Thinking

1. **Card Basics**: There are 20 cards (10 black, 10 white), each numbered 0–9. Each number has one black and one white card. Players arrange their 5 cards in ascending order (left to right), with black cards placed left of white cards if numbers are identical.
2. **Analyze Visible Cards**: The bottom player's cards are \( 0 \) (white), \( 4 \) (white), \( 6 \) (black), \( 6 \) (white), \( 9 \) (black). Thus: white cards used: \( 0, 4, 6 \); black cards used: \( 6, 9 \).
3. **Determine Available Numbers**: White cards remaining: \( 1, 2, 3, 5, 7, 8 \). Black cards remaining: \( 0, 1, 2, 3, 4, 5, 7, 8 \).
4. **Left Player's Card Order**: The left player's cards are arranged in ascending order (left to right). The star marks a black card (dark color). The first two cards of the left player are white (light color), so their values must be the smallest available white numbers: \( 1 \) and \( 2 \) (since \( 0 \)'s white card is already used by the bottom player). The star-marked black card must be the smallest black number **greater than the second white card (\( 2 \))**. The smallest available black number greater than \( 2 \) is \( 3 \).

Thus, the number on the star-marked card is \( 3 \).

Figure 52: An example of a reasoning problem, in which the model needs to make assumptions, verify them, reflect, and reason to arrive at the correct answer.
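Disputes like the one in the coloring example (one model answers 3 colours, the other 4) can be settled mechanically once a figure's adjacency is written down explicitly. A minimal brute-force chromatic-number checker, demonstrated here on small cycle graphs rather than the puzzle's exact figure, whose adjacency cannot be recovered from the text alone:

```python
from itertools import product

def chromatic_number(n, edges):
    """Smallest k such that n vertices can be coloured with k colours
    so that no edge joins two same-coloured vertices (brute force)."""
    for k in range(1, n + 1):
        for colouring in product(range(k), repeat=n):
            if all(colouring[u] != colouring[v] for u, v in edges):
                return k
    return n

# Illustrative graphs (not the figure from the puzzle):
# an even cycle is 2-colourable, an odd cycle needs 3 colours.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle -> 2
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # 5-cycle -> 3
```

For the concentric-ring figure one would list every region as a vertex and every shared border as an edge, then call the same function on that edge list.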
|
https://arxiv.org/abs/2505.21327v1
|
arXiv:2505.21329v2 [cs.IR] 28 May 2025

Something's Fishy In The Data Lake: A Critical Re-evaluation of Table Union Search Benchmarks

Allaa Boutaleb, Bernd Amann, Hubert Naacke and Rafael Angarita
Sorbonne Université, CNRS, LIP6, F-75005 Paris, France
{firstname.lastname}@lip6.fr

Abstract

Recent table representation learning and data discovery methods tackle table union search (TUS) within data lakes, which involves identifying tables that can be unioned with a given query table to enrich its content. These methods are commonly evaluated using benchmarks that aim to assess semantic understanding in real-world TUS tasks. However, our analysis of prominent TUS benchmarks reveals several limitations that allow simple baselines to perform surprisingly well, often outperforming more sophisticated approaches. This suggests that current benchmark scores are heavily influenced by dataset-specific characteristics and fail to effectively isolate the gains from semantic understanding. To address this, we propose essential criteria for future benchmarks to enable a more realistic and reliable evaluation of progress in semantic table union search.

1 Introduction

Measurement enables scientific progress. In computer science and machine learning, this requires the creation of efficient benchmarks that provide a stable foundation for evaluation, ensuring that observed performance scores reflect genuine capabilities for real-world tasks.

Table Union Search (TUS) aims to retrieve tables C from a corpus that are semantically unionable with a query table Q, meaning they represent the same information type and permit vertical concatenation (row appending) (Nargesian et al., 2018; Fan et al., 2023a). As a top-k retrieval task, TUS ranks candidate tables C by a table-level relevance score R(Q, C). This score is typically obtained by aggregating column-level semantic relevance scores R(C_Q, C_C) computed for each column C_Q of the query table Q and each column C_C of the candidate table C. The aggregation often involves finding an optimal mapping between the columns of Q and C, for instance via maximum bipartite matching (Fan et al., 2023b). Successful TUS facilitates data integration and dataset enrichment (Khatiwada et al., 2023; Castelo et al., 2021).

Recent research has introduced sophisticated TUS methods with complex representation learning (Fan et al., 2023b; Khatiwada et al., 2025; Chen et al., 2023) designed to capture deeper semantics. However, current benchmarks often exhibit excessive schema overlap, limited semantic complexity, and potential ground truth inconsistencies, which raises questions about whether they provide a reliable environment to evaluate advanced TUS capabilities. While state-of-the-art methodologies leverage semantic reasoning to reflect the task-specific challenges, observed high performance may be significantly attributed to model adaptation to specific statistical and structural properties inherent within the benchmark datasets. This phenomenon can confound the accurate assessment and potentially underestimate the isolated contribution of improvements specifically targeting semantics-aware TUS.

In this paper, we examine prominent TUS benchmarks¹, using simple baselines to assess the benchmarks themselves. Our research questions are:
1. Do current TUS benchmarks necessitate deep semantic analysis, or can simpler features achieve competitive performance?
2. How do benchmark properties and ground truth quality impact TUS evaluation?
3. What constitutes a more realistic and discriminative TUS benchmark?

Our analysis² reveals that simple baseline methods often achieve
surprisingly strong performance by leveraging benchmark characteristics rather than demonstrating sophisticated semantic reasoning.

¹ Preprocessed benchmarks used in our evaluation are available at https://zenodo.org/records/15499092
² Our code is available at: https://github.com/Allaa-boutaleb/fishy-tus

Our contributions include:
• A systematic analysis identifying limitations in current TUS benchmarks.
• Empirical evidence showing simple embedding methods achieve competitive performance.
• An investigation of ground truth reliability issues across multiple TUS benchmarks.
• Criteria for developing more realistic and discriminative benchmarks.

2 Related Work

We review existing research on TUS methods and the benchmarks used for their evaluation, with a focus on how underlying assumptions about table unionability have evolved to become increasingly nuanced and complex.

2.1 Methods and Their Assumptions

2.1.a) Foundational Approaches: Following early work on schema matching and structural similarity (Sarma et al., 2012), Nargesian et al. (2018) formalized TUS by assessing attribute unionability via value overlap, ontology mappings, and natural language embeddings. Bogatu et al. (2020) incorporated additional features (e.g., value formats, numerical distributions) and proposed a distinct aggregation method based on weighted feature distances. Efficient implementations of these methods rely on Locality Sensitive Hashing (LSH) indices and techniques like LSH Ensemble (Zhu et al., 2016) for efficient table search.

2.1.b) Incorporating Column Relationships: Beyond considering columns individually, Khatiwada et al. (2023) proposed SANTOS, which evaluates the consistency of inter-column semantic relationships (derived using an existing knowledge base like YAGO (Pellissier Tanon et al., 2020) or by synthesizing one from the data itself) across tables to improve TUS accuracy.
2.1.c) Deep Table Representation Learning: Recent approaches use deep learning for tabular understanding. Pylon (Cong et al., 2023) and Starmie (Fan et al., 2023b) use contrastive learning for contextualized column embeddings. Hu et al. (2023) propose AutoTUS, employing multi-stage self-supervised learning. TabSketchFM (Khatiwada et al., 2025) uses data sketches to preserve semantics while enabling scalability. Graph-based approaches like HEARTS (Boutaleb et al., 2025) leverage HyTrel (Chen et al., 2023), representing tables as hypergraphs to preserve structural properties.

2.2 Benchmarks and their Characteristics

Benchmark creators make design choices at every stage of the construction process that reflect their understanding and assumptions about how and when tables can and should be meaningfully combined. We identify three primary construction paradigms applied for building TUS benchmarks:

2.2.a) Partitioning-based: TUS Small and TUS Large (Nargesian et al., 2018), as well as the SANTOS benchmark (referring to SANTOS Small, as SANTOS Large is not fully labeled) (Khatiwada et al., 2023), partition seed tables horizontally or vertically, labeling tables from the same original seed as unionable with the seed table. This approach likely introduces significant schema and value overlap, potentially favoring methods that detect surface-level similarity rather than deeper semantic alignment.

2.2.b) Corpus-derived: The PYLON benchmark (Cong et al., 2023) curates tables from GitTables (Hulsebos et al., 2023) on specific topics. While this avoids systematic partitioning overlap, the focus on common topics may result in datasets with a general vocabulary that is well-represented in pre-trained models. This can reduce the comparative advantage of specialized table representation learning and data discovery methods.
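The schema-overlap concern raised for partitioning-based benchmarks can be made concrete with a trivial baseline: partitions of one seed table share (nearly) identical headers, so plain Jaccard overlap on column names already separates them from unrelated tables. The schemas below are hypothetical, not drawn from the actual benchmarks:

```python
def header_jaccard(cols_a, cols_b):
    """Jaccard overlap between two tables' column-name sets."""
    a, b = set(cols_a), set(cols_b)
    return len(a & b) / len(a | b)

# Hypothetical example: a horizontal partition keeps the seed's full
# schema, so the name-overlap baseline ranks it maximally against
# the query, while an unrelated table scores zero.
seed = ["country", "year", "population"]
partition = ["country", "year", "population"]  # rows split off the seed
unrelated = ["player", "team", "goals"]
```

A baseline this shallow performing well on a benchmark is exactly the signal, discussed above, that the benchmark rewards surface-level similarity rather than semantic understanding.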
|
https://arxiv.org/abs/2505.21329v2
|