Augmentation Recipe for Improving Transcriptomic Representations Figure 2. Performance comparison of the distillation and augmentation components of our approach compared to existing distillation methods (a) and biological data augmentation techniques (b) across five training seeds. Higher is better for all metrics. Se...
https://arxiv.org/abs/2505.21317v1
methods and the unimodal baseline in relationship recall while closely matching the best unsupervised multimodal methods. On the transcriptomics A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomic Representations Figure 3. Ablation study on the known relationship recall scor...
https://arxiv.org/abs/2505.21317v1
ability to introduce meaningful variation to the distillation process. Furthermore, integrating PEA with all other augmentations further enhances performance beyond using PEA alone, demonstrating its complementarity to existing transcriptomics augmentation techniques. This combined approach yields the highest overall...
https://arxiv.org/abs/2505.21317v1
augmentations. Each step in the ablation builds upon the previous one: (1) Fixed biological augmentation: applying a predefined set of batch correction techniques. (2) Inference on TVN-corrected embeddings: applying Typical Variation Normalization (TVN) (Ando et al., 2017) correction to zSat inference before pass...
https://arxiv.org/abs/2505.21317v1
we perform Gene-Set Enrichment Analysis (GSEA) (Subramanian et al., 2005) to identify enriched biological pathways within this set compared to other distinct relationships retrieved by each distillation approach, filtering for gene sets with p-values <0.01. Surprisingly, KD, SHAKE, and VICReg fail to significantly ...
https://arxiv.org/abs/2505.21317v1
sciences. There are many potential societal consequences of our work, especially relating to the discovery of new biological relationships and potential drug treatments. Utmost care should be taken to valida...
https://arxiv.org/abs/2505.21317v1
Mohamed, H., Monteverde, T., Mouchet, E., Nicke, B., Ogier, A., Ong, A.-L., Osterland, M., Otrocka, M., Peeters, P. J., Pilling, J., Prechtl, S., Qian, C., Rataj, K., Root, D. E., Sakata, S. K., Scrace, S., Shimizu, H., Simon, D., Sommer, P., Spruiell, C., Sumia, I., Swalley, S. E., Terauchi, H., Thibaudeau, A., Unruh,...
https://arxiv.org/abs/2505.21317v1
unimodal features to multimodal features, 2024. Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. In NeurIPS Deep Learning Workshop, 2015. Huang, T., You, S., Wang, F., Qian, C., and Xu, C. Knowledge distillation from a stronger teacher. In NeurIPS, 2022. Huo, F., Xu, W., Guo, J....
https://arxiv.org/abs/2505.21317v1
bioRxiv, September 2023. Lopez, R., Regier, J., Cole, M. B., Jordan, M. I., and Yosef, N. Deep generative modeling for single-cell transcriptomics. Nature Methods, 15(12):1053–1058, 2018. Lu, S., Fürth,...
https://arxiv.org/abs/2505.21317v1
Golub, T. R., Lander, E. S., and Mesirov, J. P. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl. Acad. Sci. U. S. A., 102(43):15545–15550, October 2005. Subramanian, A., Narayan, R., Corsello, S. M., Peck, D. D., Natoli, T. E., Lu, X., Gould, J., Da...
https://arxiv.org/abs/2505.21317v1
In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=eT1tMdAUoc. Xue, Z., Ren, S., Gao, Z., and Zhao, H. Multimodal knowledge expansion. In ICCV, 2021. doi: 10.1109/ICCV48922.2021.00089. Yang, C., An, Z., Huang, L., Bi, J., Yu, X., Yang, H., Diao, B., and...
https://arxiv.org/abs/2505.21317v1
layer of size 768. The image adapter follows a similar design, with an input size of 768, two hidden layers of size 1024, and an output layer of size 768. ReLU activations are applied to all hidden layers, while the output layer uses a linear activation. For VICReg, learning rates for the Tx and image adapters were 0.1...
https://arxiv.org/abs/2505.21317v1
bottom relationships according to a percentage threshold (usually 5%) of the distribution of all pairwise similarities. High similarity scores indicate cooperative relationships, while low scores suggest functional opposition. 3. Validation Against Biological Databases: The predicted relationships are validated using e...
https://arxiv.org/abs/2505.21317v1
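The thresholding step described above (keeping the top and bottom relationships at a percentage cutoff of the distribution of all pairwise similarities) can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the embedding matrix is random, cosine similarity is an assumed metric, and `pct` plays the role of the usual 5% threshold.

```python
import numpy as np

def extreme_pairs(emb: np.ndarray, pct: float = 5.0):
    """Return index pairs whose cosine similarity falls in the top/bottom pct%."""
    # L2-normalize rows so the dot product equals cosine similarity
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T
    iu = np.triu_indices_from(sim, k=1)            # unique unordered pairs
    vals = sim[iu]
    hi, lo = np.percentile(vals, [100 - pct, pct]) # distribution-based cutoffs
    top = [(i, j) for i, j, v in zip(*iu, vals) if v >= hi]       # cooperative
    bottom = [(i, j) for i, j, v in zip(*iu, vals) if v <= lo]    # opposing
    return top, bottom

rng = np.random.default_rng(0)
top, bottom = extreme_pairs(rng.normal(size=(50, 16)))
```

High-similarity pairs would then be validated against external biological databases, as the text describes.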
involves adjusting the dataset such that each feature has a mean of zero. This is achieved by subtracting the mean of each feature from the data. Given a feature matrix X ∈ R^{n×m}, where n is the number of samples and m is the number of features, the centered matrix X̃ is computed as: X̃_{ij} = X_{ij} − (1/n) Σ_{k=1}^{n} X_{kj}, ∀i = 1, . . . , n, ∀...
https://arxiv.org/abs/2505.21317v1
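The mean-centering formula above is a one-liner in NumPy; this small sketch (example matrix chosen for illustration) subtracts each column's mean so every feature averages to zero.

```python
import numpy as np

def center(X: np.ndarray) -> np.ndarray:
    """Mean-center: subtract each feature's (column's) mean, per the formula above."""
    return X - X.mean(axis=0, keepdims=True)

X = np.array([[1.0, 2.0],
              [3.0, 6.0]])
Xc = center(X)
# Column means are (2, 4), so Xc = [[-1, -2], [1, 2]]; each column now sums to zero.
```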
Figure 7. Literature-known biological relationships retrieved by the transcriptomics and microscopy imaging unimodal encoders, alongside our proposed Semi-Clipped approach (first row), without data augmentatio...
https://arxiv.org/abs/2505.21317v1
Beyond Chemical QA: Evaluating LLM’s Chemical Reasoning with Modular Chemical Operations Hao Li1∗, He Cao2∗, Bin Feng2, Yanjun Shao3, Xiangru Tang3, Zhiyuan Yan1, Li Yuan1‡, Yonghong Tian1‡, Yu Li2‡, 1Peking University, 2International Digital Economy Academy, 3Yale University lihao1984@pku.edu.cn, caohe@idea.edu.cn, liyu@i...
https://arxiv.org/abs/2505.21318v1
on factual recall with domain knowledge, while our ChemCoTBench focuses on the evaluation of step-wise reasoning for complex chemical problems by defining a set of modular chemical operations. decompose challenges. For instance, they don’t capture the process of iteratively refining a molecule’s substructure to optimiz...
https://arxiv.org/abs/2505.21318v1
Claude [53] have achieved notable results on mathematical benchmarks like MATH [19] and GSM8K [6], while also excelling at programming. Recent studies have begun exploring LLMs for chemical tasks, such as synthesis planning [4] and computational chemistry [26,45,51]. However, these efforts lack a systematic eval...
https://arxiv.org/abs/2505.21318v1
with prompt templates iteratively refined to meet subtask-specific requirements. 3.1 Task Construction To evaluate the capabilities of LLMs in chemistry, we constructed a comprehensive suite of tasks. Foundation Task: Molecule-Understanding. We begin with the recognition and counting of two fundamental elements of mo...
https://arxiv.org/abs/2505.21318v1
and their impact on yield and selectivity. (4) Reaction Mechanism Understanding: Includes Next Elementary-Step Product Prediction (predicting intermediates stepwise, testing electron flow modeling) and Mechanism Route Selection (choosing the most plausible pathway from alternatives, assessing mechanistic reasoning)....
https://arxiv.org/abs/2505.21318v1
[Figure 3 panel labels: Molecule Understanding, Molecule Editing, Molecule Optimization, Reaction Prediction] Figure 3: The dataset construction pipeline of ChemCoTBench contains four steps, including raw data collection, molecule filtering and sampling, chain-of-thought annotation, and chemical expert ...
https://arxiv.org/abs/2505.21318v1
0.40 80.0 84 85 80 83.4 DeepSeek-R1 0.27 1.55 0.34 45.0 65 70 70 68.3 o3-mini@20250103 0.13 0.60 0.39 75.0 78 65 55 80.0 o1-mini@20240912 0.21 1.25 0.25 61.7 66 55 80 58.3 Qwen3-235B-A22B-think 0.42 1.00 0.38 82.5 72 40 75 71.7 Qwen3-32B-think 0.25 0.95 0.21 75.0 68 20 55 20.0 Llama-Nemo-49B-think 0.80 1.90 0.09 86.8 4...
https://arxiv.org/abs/2505.21318v1
∆ SR% W/ Thinking Gemini-2.5-pro-think -0.28 81 1.91 92 0.21 84 0.35 74 -0.04 35 0.04 68 Claude3.7-sonnet-think 0.41 81 0.59 77 0.09 73 0.18 66 -0.01 49 0.01 57 DeepSeek-R1 0.36 74 1.48 97 0.05 72 0.10 62 -0.06 29 -0.02 41 o3-mini@20250103 0.29 68 1.15 85 0.17 86 0.18 69 -0.08 23 -0.03 45 o1-mini@20240912 -0.42 52 1.78...
https://arxiv.org/abs/2505.21318v1
that RL-honed "slow thinking" capabilities [40,48,62], when combined with sufficient domain knowledge, enable superior abstraction and problem-solving beyond mere knowledge retrieval. Table 3: The chemical reaction task contains forward prediction (Fwd major: major-product prediction, and Fwd by: by-product predict...
https://arxiv.org/abs/2505.21318v1
Distilling CoT capabilities from advanced LLMs (e.g., using DeepSeek-R1-generated samples [13,71]) is a common strategy to enhance reasoning in smaller models. However, this approach proves significantly limited for specialized chemical reasoning. Our experiments (Fig. 4) show that Qwen2.5-Instruct models distilled for...
https://arxiv.org/abs/2505.21318v1
complex chemical reasoning, while also validating the boosting effect of our large chemical CoT dataset on chemical reasoning capabilities. ChemCoTBench bridges the gap between LLM reasoning capabilities and real-world chemical problem-solving needs, offering researchers a standardized evaluation platform for complex...
https://arxiv.org/abs/2505.21318v1
reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. [14] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y. Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. Deepseek-coder: When the large language model meets programming – the rise of code intelligenc...
https://arxiv.org/abs/2505.21318v1
Zhengkai Tu, John Bradshaw, and Connor W Coley. Reproducing reaction mechanisms with machine-learning models trained on a large-scale mechanistic dataset. Angewandte Chemie International Edition, 63(43):e202411296, 2024. [29] Daniel Kahneman. Thinking, fast and slow. Macmillan, 2011. [30] Sunghwan Kim, Paul A Thiesse...
https://arxiv.org/abs/2505.21318v1
furious, 2025. [45] Siru Ouyang, Zhuosheng Zhang, Bing Yan, Xuan Liu, Yejin Choi, Jiawei Han, and Lianhui Qin. Structured chemistry reasoning with large language models. arXiv preprint arXiv:2311.09656, 2023. [46] Nadine Schneider, Roger A. Sayle, and Gregory A. Landrum. Get your atoms in order—an open-source impleme...
https://arxiv.org/abs/2505.21318v1
Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635 , 2023. [64] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma,...
https://arxiv.org/abs/2505.21318v1
. . . . . . . . . . . . . 3 C.2 Evaluation Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 C.3 Count Distribution Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 D Case Study for Tasks in ChemCoTBench 5 D.1 Case Study for Molecule Understanding . . . . . . . . . . . . ...
https://arxiv.org/abs/2505.21318v1
rationale for dataset construction. In Table 4, we also visualize the data distribution of subtasks in ChemCoTBench. B.1 Data Collection The raw molecular structures used for understanding, editing, and optimization are obtained from several published datasets, including PubChem [30], ChEMBL [...
https://arxiv.org/abs/2505.21318v1
evaluate LLMs’ capabilities in this multifaceted domain rigorously. •Forward Reaction Prediction : This task, pivotal for academic discovery and industrial applications like drug development, evaluates an LLM’s ability to predict both major products and, uniquely in our benchmark, byproducts from given reactants and re...
https://arxiv.org/abs/2505.21318v1
the A6000/3090 pair efficiently managed concurrent API requests and lighter workloads. Storage requirements remained modest at approximately 1GB, encompassing benchmark datasets (SMILES strings and annotations), quantized model checkpoints, and evaluation logs, all hosted on an NVMe-backed filesystem for rapid data acc...
https://arxiv.org/abs/2505.21318v1
can provide detailed information compared to number prediction tasks and correction distinguishing tasks.
Source Molecule | GT-Scaffold | Gemini-2.5-pro | Llama3.3-70B
100% | 41.8% | 27.8%
100% | 38.6% | 0.0%
100% | 56.8% | 15.4%
100% | 33.3% | 13.3%
D.1 Case Study for Molecule Understanding The molecule understanding task in ChemCoTBench c...
https://arxiv.org/abs/2505.21318v1
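To make the counting subtasks concrete: one of the quantities models are asked to count, ring systems, can be approximated directly from a SMILES string, since every ring-closure label appears exactly twice. The sketch below is a rough heuristic of my own, not the benchmark's evaluator, and is no substitute for a real cheminformatics toolkit such as RDKit.

```python
def count_smiles_rings(smiles: str) -> int:
    """Rough ring count: each ring-closure label occurs exactly twice in SMILES.

    Heuristic sketch only: skips bracket-atom contents (isotopes, charges) and
    handles two-digit %nn labels; complex corner cases are out of scope.
    """
    closures = 0
    open_labels = set()
    i = 0
    while i < len(smiles):
        ch = smiles[i]
        if ch == '[':                       # skip bracket atoms like [13C] or [NH3+]
            i = smiles.index(']', i) + 1
            continue
        if ch == '%':                       # two-digit ring label, e.g. %12
            label = smiles[i + 1:i + 3]
            i += 3
        elif ch.isdigit():
            label = ch
            i += 1
        else:
            i += 1
            continue
        if label in open_labels:            # second occurrence closes the ring
            open_labels.remove(label)
            closures += 1
        else:
            open_labels.add(label)
    return closures

# count_smiles_rings("c1ccccc1") -> 1 (benzene)
# count_smiles_rings("c1ccc2ccccc2c1") -> 2 (naphthalene)
```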
addressed through targeted training. Commercial LLMs demonstrate bolder optimization strategies compared to open-source models. For instance, Gemini-2.5-pro frequently performs skeleton-level modifications (e.g., additions or deletions), whereas Qwen3-235B and Llama3.3 tend toward conservative insertions with minimal ...
https://arxiv.org/abs/2505.21318v1
subtask: Ring System Counting Task. Question example for Molecule Editing You are a chemical assistant. Given the SMILES structural formula of a molecule, help me add a specified functional group and output the improved SMILES sequence of the molecule. Input: Molecule SMILES string, Functional Group Name. Output: Mod...
https://arxiv.org/abs/2505.21318v1
correct elementary reaction stages description, considering the mechanism of this type of reaction? Choices: A: Carboxylic acid deprotonation → Reaction of carboxylic acid and HATU/HBTU → Addition of HOBt (1-hydroxybenzotriazole) into carboxylic acid-HATU/HBTU → Amine attacks HOBt-carboxylic acid complex → Proton exchange be...
https://arxiv.org/abs/2505.21318v1
arXiv:2505.21322v1 [cs.AI] 27 May 2025 Proceedings of Machine Learning Research 288:1–19, 2025 Assured Autonomy with Neuro-Symbolic Perception R. Spencer Hallyburton SPENCER.HALLYBURTON@DUKE.EDU Miroslav Pajic MIROSLAV.PAJIC@DUKE.EDU Duke University Keywords: Perception. Autonomy. Cyber-physical system security. A...
https://arxiv.org/abs/2505.21322v1
allows for incorporating logical constraints and commonsense knowledge – an approach that promises to enhance robustness in safety-critical applications such as autonomous driving (AD) and unmanned aerial vehicles (UAVs). Our neuro-symbolic approach to sensor fusion commences with a joint detection and graph generatio...
https://arxiv.org/abs/2505.21322v1
used. Recent transformer-based detectors, such as DETR Carion et al. (2020), offer an end-to-end approach minimizing the use of hand-crafted heuristics. Multi-sensor fusion. Fusing data improves observability, robustness, and attack resilience. Multi-sensor fusion typically occurs at the semantic level Durrant-Whyte ...
https://arxiv.org/abs/2505.21322v1
boxes) from ego maintains consistency with 2D frustum in image plane. Attacker runs optimization to move object as far back as possible while retaining at least a minimum IoU (overlap) when projected into 2D image. 4.1. Overview of Approach Unlike DNNs, human perception seamlessly integrates low-level feature recogniti...
https://arxiv.org/abs/2505.21322v1
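The attacker's optimization described above (push the object as far back along the frustum as possible while its 2D projection keeps at least a minimum IoU with the original detection) can be sketched with a toy pinhole model. This is my own simplification, not the paper's attack: projected box width scales as 1/depth, and for concentric similar boxes the 2D IoU reduces to (d0/d)^2.

```python
def optimal_frustum_depth(d0: float, min_iou: float = 0.5, step: float = 0.01) -> float:
    """Push an object along the camera ray (increasing depth) as far as possible
    while its 2D projection stays consistent with the original detection.

    Toy sketch: under a pinhole model the projected width scales as 1/depth, so
    the IoU of the concentric projected boxes is (d0 / d) ** 2.
    """
    d = d0
    while (d0 / (d + step)) ** 2 >= min_iou:   # stop just before IoU drops below budget
        d += step
    return d

# Starting at 10 m with a 0.5 IoU budget, the object can be pushed back
# to roughly d0 / sqrt(0.5) ~ 14.1 m before the frustum check fails.
depth = optimal_frustum_depth(10.0)
```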
relationships and global contexts, making them ideal for SGG. Recent specialized models, such as EGTR Im et al. (2024) and SGTR Li et al. (2022), integrate object proposals with relationship prediction heads to generate comprehensive scene graphs. Fig. 5 illustrates the pipeline and output of a specialized SGG model. S...
https://arxiv.org/abs/2505.21322v1
In contrast, our neuro-symbolic cross-sensor integrity function takes into account the full graph of detections and their relationships with other detections. This approach to cross-sensor integrity is particularly effective in securing perception against attacks that alter the semantic understanding of the scene, such as...
https://arxiv.org/abs/2505.21322v1
relationships for this work is described in Appendix C. An example is illustrated in Fig. 6(a). For LiDAR data, scene graphs are generated by first detecting 3D bounding boxes using classical detectors and then passing boxes to rule-based geometric relationship functions. This study focuses on proximal relationships...
https://arxiv.org/abs/2505.21322v1
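A rule-based geometric relationship function of the kind described above can be sketched in a few lines. This is an assumed minimal form, not the paper's implementation: detections are reduced to 3D centroids, and an undirected "near" edge is emitted whenever two centroids lie within a hypothetical distance threshold.

```python
import math
from itertools import combinations

def proximal_edges(boxes: dict, near_thresh: float = 5.0):
    """Rule-based scene-graph edges from 3D detections.

    `boxes` maps a node id to its (x, y, z) centroid; an undirected "near"
    edge is added whenever two centroids lie within `near_thresh` meters.
    """
    edges = []
    for (a, pa), (b, pb) in combinations(boxes.items(), 2):
        if math.dist(pa, pb) <= near_thresh:
            edges.append((a, "near", b))
    return edges

boxes = {"car_0": (0.0, 0.0, 0.0),
         "ped_1": (2.0, 1.0, 0.0),
         "car_2": (20.0, 0.0, 0.0)}
edges = proximal_edges(boxes)   # only car_0 and ped_1 are within 5 m
```

The resulting edge list is what a downstream integrity check would compare against the camera-derived scene graph.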
on autonomy-specific datasets for efficient, real-time SGG. Dataset construction. Constructing high-quality datasets while handling edge cases (e.g., zero- /few-shot) and real-world complexity is a key challenge for neuro-symbolic algorithms Gilpin and Ilievski (2021). We employed a dataset generation pipeline using A ...
https://arxiv.org/abs/2505.21322v1
2021. Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11621–11631, 2020....
https://arxiv.org/abs/2505.21322v1
Week 2023) , pages 209–220, 2023b. Jinbae Im, JeongYeon Nam, Nokyung Park, Hyungmin Lee, and Seunghyun Park. Egtr: Extracting graph from transformer for scene graph generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 24229–24238, 2024. Jinyuan Jia, Xiaoyu Cao, Binghu...
https://arxiv.org/abs/2505.21322v1
Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, and Raquel Urtasun. Physically realizable adversarial examples for lidar object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 13716–13725, 2020. Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi....
https://arxiv.org/abs/2505.21322v1
vulnerabilities including Trojans in Hallyburton et al. (2023a); Petit and Shladover (2014). For example, frustum-type attacks lead to translations of existing objects as illustrated in Hallyburton et al. (2022). B.2. Attacker Goal: Optimal Frustum Attack One particular attack...
https://arxiv.org/abs/2505.21322v1
contextual dependencies Zellers et al. (2018); Tang et al. (2020). GNNs represent objects as graph nodes and relationships as edges, enabling message passing that propagates semantic and spatial information throughout the graph for enhanced relational reasoning. Transformers utilize self-attention mechanisms to dynam...
https://arxiv.org/abs/2505.21322v1
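The message-passing idea mentioned above can be illustrated with one round of mean aggregation on a toy graph. This is a bare-bones sketch (no learned weights, no nonlinearity) meant only to show how node features absorb neighbor information.

```python
import numpy as np

def message_pass(feats: np.ndarray, edges: list) -> np.ndarray:
    """One round of mean-aggregation message passing (self + neighbors)."""
    n = feats.shape[0]
    out = np.zeros_like(feats)
    neighbors = {i: [i] for i in range(n)}      # include a self-loop
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    for i in range(n):
        out[i] = feats[neighbors[i]].mean(axis=0)
    return out

feats = np.array([[1.0], [3.0], [5.0]])
# Path graph 0-1-2: node 1 averages over {0, 1, 2} and moves toward 3.0
updated = message_pass(feats, [(0, 1), (1, 2)])
```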
(b) DNN yields detections on LiDAR data, rules construct scene graph. (c) Adversary manipulates pedestrian, translating it away from ego. When projected to front-view, pedestrian is still consistent with camera. (d) Reasoning on subgraphs illuminates inconsistencies in semantics between image and attacked LiDAR gra...
https://arxiv.org/abs/2505.21322v1
MME-Reasoning MME-Reasoning: A Comprehensive Benchmark for Logical Reasoning in MLLMs Jiakang Yuan1,3,∗, Tianshuo Peng2,3,∗, Yilei Jiang2, Yiting Lu4, Renrui Zhang2, Kaituo Feng2, Chaoyou Fu5, Tao Chen1,†, Lei Bai3, Bo Zhang3,†, Xiangyu Yue2,3 1Fudan University2MMLab, The Chinese University of Hong Kong 3Shanghai AI La...
https://arxiv.org/abs/2505.21327v1
2024) and EMMA (Hao et al., 2025) expand the scope to include additional subjects, such as physics and chemistry. Apart from knowledge-driven tasks, some works (Song et al., 2025; Chia et al., 2024; Zhang et al., 2025b) have begun to decouple knowledge from logical reasoning, aiming to assess the reasoning abilities of...
https://arxiv.org/abs/2505.21327v1
MME-Reasoning multiple evaluation methods enables a wider variety of question types, thereby facilitating a more comprehensive evaluation of models’ capabilities. Experiments were conducted on state-of-the-art MLLMs, covering Chat and Thinking types of both open-source and closed-source models, as presented in Fig. 1. Evaluat...
https://arxiv.org/abs/2505.21327v1
…If the particle entering directly toward point O can just reach the Earth's surface, what's the particle's velocity? Black’s turn. Mate in one. [Figure labels: Reasoning Type (Deductive, Inductive, Abductive); Capability (Pattern Analysis, Planning & Exploring, Spatial & Temporal, Calculation, Causal Chain Analysis); Evaluation (Choice, Free-form, Rul...); Diverse Data Source]
https://arxiv.org/abs/2505.21327v1
Knowledge Reliance. It is essential to ensure that the questions do not require complex domain knowledge, thereby preventing models from being penalized for the absence of specialized information. In MME-Reasoning, the domain expertise is limited to K12 or below. 4) Diverse evaluation formats. The benchmark should cons...
https://arxiv.org/abs/2505.21327v1
more details about the question source and type. Data Curation. We initially collect around 4k questions from various sources mentioned above. Following the design principles of MME-Reasoning, we conduct a careful manual curation process to ensure the quality of the benchmark. Specifically, we exclude questions that de...
https://arxiv.org/abs/2505.21327v1
2024); (2) Claude-3.7-Sonnet (Anthropic, 2022) (3) Kimi-latest (Team et al., 2025a); (4) Seed1.5-VL (Guo et al., 2025a). Open-source models : (1) Qwen-2.5-VL (7B, 32B, 72B) (Qwen Team, 2025a); (2) InternVL-3 (8B, 38B, 78B) (Zhu et al., 2025); (3) LLaVA-Onevision-72B (Li et al., 2024); (4) Molmo (7B-O, 7B-D, 72B) (Deitk...
https://arxiv.org/abs/2505.21327v1
30.5 28.1 30.6 MM-Eureka-Qwen-7B 27.1 19.3 22.3 31.9 50.0 32.7 28.7 22.6 28.2 R1-VL-7B 16.3 11.6 17.7 30.9 26.4 25.3 21.8 15.8 21.1 Vision-R1-7B 18.2 18.0 17.9 34.4 36.1 27.4 26.3 18.1 24.0 R1-Onevision-7B-RL 19.5 12.2 20.0 31.6 27.1 27.7 24.8 14.6 22.5 Kimi-VL-A3B-T 28.7 16.0 19.5 32.3 35.4 33.3 25.1 18.1 25.9 Open-so...
https://arxiv.org/abs/2505.21327v1
improved by 1.1 compared to Qwen2.5-VL, and VL-Rethinker improved by 1.7 compared to Qwen2.5-VL. This effect is more pronounced among closed-source models: Seed1.5-VL-T outperformed Seed1.5-VL by 12.4, and o1 exceeded GPT-4o by 15.5. Further experiments concerning thinking models will be elaborated in subsequent sectio...
https://arxiv.org/abs/2505.21327v1
subsequent analyses. Besides, Fig. 6 illustrates the trend of ATC across different reasoning types and difficulty levels. It reveals a consistent pattern: overall, output length increases steadily with rising difficulty. This trend holds across varying output lengths, model categories, and reasoning types. Compared ...
https://arxiv.org/abs/2505.21327v1
Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition , pp. 5828–5839, 2017. DeepSeek-AI. Deepseek-r1: Incentivizing reasoning capability in llms via reinforce...
https://arxiv.org/abs/2505.21327v1
Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403 , 2024. Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Zhe Xu, Yao Hu, and Shaohui Lin. Vision-r1: Incent...
https://arxiv.org/abs/2505.21327v1
Tianshuo Peng, Mingsheng Li, Hongbin Zhou, Renqiu Xia, Renrui Zhang, Lei Bai, Song Mao, Bin Wang, Conghui He, Aojun Zhou, et al. Chimera: Improving generalist model with domain-specific experts. arXiv preprint arXiv:2412.05983, 2024. Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Ya...
https://arxiv.org/abs/2505.21327v1
Renqiu Xia, Bo Zhang, Hancheng Ye, Xiangchao Yan, Qi Liu, Hongbin Zhou, Zijun Chen, Peng Ye, Min Dou, Botian Shi, et al. Chartx & chartvlm: A versatile benchmark and foundation model for complicated chart reasoning. arXiv preprint arXiv:2402.12185, 2024b. An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, ...
https://arxiv.org/abs/2505.21327v1
chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023. Jiaxing Zhao, Xihan Wei, and Liefeng Bo. R1-omni: Explainable omni-multimodal emotion recognition with reinforcement learning, 2025. Changmeng Zheng, Dayong Liang, Wengyu Zhang, Xiao-Yong Wei, Tat-Seng Chua, and Qing Li. A picture ...
https://arxiv.org/abs/2505.21327v1
. . 25 B.2 Reasoning Type Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 B.3 Capability Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 C Details of Implementation 27 D Details of Evaluation 28 D.1 Prompts for Answer Extraction . . . . . . . . . . ....
https://arxiv.org/abs/2505.21327v1
the use of Test-Time Compute Scaling (TTS) methods can improve model performance on MME-Reasoning, we take Qwen2.5-VL-7B as an example and use Qwen2.5-VL-32B as the Reward Model. The evaluation is conducted using the Monte Carlo Tree Search (MCTS) algorithm, with the settings: branch = 3 and max-iteration = 18. The re...
https://arxiv.org/abs/2505.21327v1
Kimi-VL-A3B 18.7 11.9 21.4 34.0 27.8 25.9 26.3 17.1 23.1
Table 5: Comparison of statistics between full and mini-set of MME-Reasoning.
Split | Reasoning Type (DED. IND. ABD.) | Question Type (Open MCQ Rule.) | Difficulty Level (Easy Medium Hard)
Mini | 39.7% 25.8% 34.4% | 32.4% 58.3% 9.3% | 31.8% 39.4% 28.8%
Full | 38.6% 2...
https://arxiv.org/abs/2505.21327v1
58.5 49.3 83.3 67.5 48.7 67.3 62.6 o4-mini 64.0 58.3 56.6 45.1 54.8 57.5 51.3 60.6 57.0 o1 50.0 38.5 41.5 43.7 52.4 50.8 42.3 42.3 45.7 Claude-4-Sonnet-T 33.0 30.2 35.8 39.4 50.0 42.5 37.2 33.7 38.1 Claude-3.7-Sonnet-T 30.0 17.7 36.8 38.0 38.1 31.7 42.3 27.9 33.1 Gemini-2.5-Flash-T 18.0 16.7 15.1 39.4 33.3 27.5 19.2 26...
https://arxiv.org/abs/2505.21327v1
Seed1.5-VL 54.0 46.2 59.3 51.8 57.0 38.5 42.4 48.0 20.0 GPT-4o 36.3 38.3 48.3 39.1 15.2 17.3 27.9 21.1 7.0 Claude-3.7-Sonnet 38.7 42.2 37.3 39.9 30.4 21.2 27.9 28.0 12.2 Kimi-Latest 31.3 29.6 38.1 31.8 20.9 3.8 20.0 18.1 0.9 Open-source & Thinking QVQ-72B-Preview 43.7 35.0 45.8 40.6 38.0 26.9 31.5 33.6 8.7 Virgo-72B 39...
https://arxiv.org/abs/2505.21327v1
28.7 24.3 28.6 38.3 48.6 37.5 32.9 26.9 32.7 InternVL3-78B 26.0 24.0 26.5 41.8 50.0 35.1 33.8 27.1 32.1 + CoT prompt 29.0 22.9 27.0 40.8 48.6 36.6 35.1 26.9 32.9 A.8 Results of Captioner & LLMs We used GPT-4o as the captioner to generate visual descriptions for each question as a substitute for the images. Then we eval...
https://arxiv.org/abs/2505.21327v1
Figure 10: Average token usage of open & closed-source thinking models on MME-Reasoning. Table 11: Performance of Caption + SoTA Reasoning LLMs. We use GPT-4o to generate caption of each image in MME-Reasoning. Model | Capability | Reasoning Type | AVG....
https://arxiv.org/abs/2505.21327v1
2024), MM-IQ (Cai et al., 2025), PuzzleVQA (Chia et al., 2024). We further filter most of the data and reformulate the questions. We use gpt-4o-mini to extract the answer of all responses and judge the answer of free-form questions. The cost fluctuates with the length of the MLLM’s response. As an exam...
https://arxiv.org/abs/2505.21327v1
According to the clues, find the corresponding position. Answer in '(row id (A-C), column id (1-3))' format. Model Response: The possible answer is: (A, 1) Extracted answer (JSON format): [{{"row": "A","column": 1}}] Example 2: Question: According to the clues, find the two corresponding position. Answer in '(row id (A-C), c...
https://arxiv.org/abs/2505.21327v1
these rules: 1. **Input**: `solution`: Text describing bridges between islands using various formats (e.g., "c1 -c3", "a1到g1", "between b2 and b4"). 2. **Output Requirements**: Return a JSON list of dictionaries in this format: ```JSON [{{"start": "a1", "end": "b1", "number": 2}}, ...] ``` Include ALL bridges explicitly des...
https://arxiv.org/abs/2505.21327v1
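The extraction the prompt describes can be sketched with a small regex parser. This is an illustrative assumption, not the benchmark's actual extractor (which uses an LLM): cell ids are taken to be a letter followed by digits, the quoted formats ("c1 -c3", "a1到g1", "between b2 and b4") are handled, and the bridge count defaults to 1 when unstated.

```python
import json
import re

CELL = re.compile(r"[a-z]\d+")   # cell ids like a1, c3 (format assumed)

def extract_bridges(solution: str) -> str:
    """Pull (start, end) cell pairs out of free-form bridge descriptions
    and emit the JSON list-of-dicts shape the prompt requires."""
    bridges = []
    for line in solution.splitlines():
        cells = CELL.findall(line)
        num = re.search(r"number[:=\s]+(\d+)", line)   # hypothetical count syntax
        for start, end in zip(cells[::2], cells[1::2]):
            bridges.append({"start": start, "end": end,
                            "number": int(num.group(1)) if num else 1})
    return json.dumps(bridges)

result = extract_bridges("c1 - c3\na1到g1\nbetween b2 and b4")
```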
- The fourth column already has a 3, so the fourth cell in the top row cannot be 3. Therefore, it must be 4.
- Top row: 2, 1, 4, 3
3. **Fill in the second row:**
- The second row already has a 3. The remaining digits are 1, 2, and 4.
- The first column already has a 2 and a 3, so the first cell in the second row c...
https://arxiv.org/abs/2505.21327v1
the sub-grids and initial constraints:**
- The grid is divided into 2x3 sub-grids.
- We need to fill in the empty cells while satisfying the Sudoku rules.
2. **Fill in the first row:**
- The first row has the digits 2 and 4 already placed.
- The remaining digits are 1, 3, 5, and 6.
- Considering the constraints...
https://arxiv.org/abs/2505.21327v1
answer from the given solution. The solution will contain a detailed solution to a Sudoku-like puzzle, including step-by-step explanations and a final filled grid representation. Your task is to identify and extract only the final answer, which is presented as a matrix (list of lists or equivalent) at the end of the so...
https://arxiv.org/abs/2505.21327v1
right means we can see 4 skyscrapers from the right.
- Left column: Clue '2' on the top means we can see 2 skyscrapers from the top.
- Right column: Clue '2' on the bottom means we can see 2 skyscrapers from the bottom.
2. **Start with the bottom row:**
- Since we can see 4 skyscrapers from the right, the bottom ...
https://arxiv.org/abs/2505.21327v1
stone at (b1).
- **Top-right corner (f1):** Place a white stone to avoid a monochromatic 2x2 region with the black stone at (e1).
- **Middle section:** Ensure connectivity of black and white stones while avoiding monochromatic 2x2 regions.
### Final Matrix:
After placing the stones according to the rules...
https://arxiv.org/abs/2505.21327v1
for the following response: Question: {question} Model Response: {response} Extracted answer: You should only output the answer without any other texts. Figure 21: Prompt for free-form tasks. Reasoning type: Abductive Difficulty level: Medium Question: A sealed box with two switches $S_1$, $S_2$ and two lamp...
https://arxiv.org/abs/2505.21327v1
to the sofa by completing the route: 1. Go straight out of the bedroom door, 2. [?], 3. Go straight to the dining table, 4. [?], 5. Go straight to the sofa. A. Turn left, Turn right B. Turn right, Turn left C. Turn left, Turn left D. Turn right, Turn right 1. **Start:** In room A (bedroom), next to the TV, facing the b...
https://arxiv.org/abs/2505.21327v1
white stones). 4. **Do not change the positions of any stones already placed on the grid.** Please complete the solution for the Yin-Yang puzzle in the diagram. At the end of your response, summary your answer **as a single filled matrix** (list of lists or equivalent). Represent the stones as a 6x6 matrix of "0" and "1"...
https://arxiv.org/abs/2505.21327v1
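One of the Yin-Yang constraints quoted above, no monochromatic 2x2 region, is easy to verify mechanically. The sketch below checks only that constraint on a 0/1 matrix answer; the puzzle's connectivity requirement is deliberately omitted, and the small test grids are my own examples.

```python
def has_monochrome_2x2(grid: list) -> bool:
    """True if any 2x2 block of the 0/1 grid is a single color
    (the condition the Yin-Yang puzzle forbids)."""
    for r in range(len(grid) - 1):
        for c in range(len(grid[0]) - 1):
            block = {grid[r][c], grid[r][c + 1],
                     grid[r + 1][c], grid[r + 1][c + 1]}
            if len(block) == 1:          # all four cells share one value
                return True
    return False

ok = [[0, 1],
      [1, 0]]       # mixed 2x2 block: allowed
bad = [[0, 0],
       [0, 0]]      # monochromatic 2x2 block: forbidden
```

A full grader would combine this with a connectivity check on each color.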
* Row 3 needs {4,5} for (3,1),(3,2). * Row 4 needs {2,3} for (4,1),(4,2). * This assignment means {(3,1),(3,2)}={4,5} and {(4,1),(4,2)}={2,3}. * Consider Col 1: [?, ?, (3,1), (4,1), 2, ?]. (5,1)=2. (3,1) is 4 or 5. (4,1) is 2 or 3. (4,1) cannot be 2. So (4,1)=3. * This implies (4,2)=2. * Consider Col 2: [?, ?, (3,2), (...
https://arxiv.org/abs/2505.21327v1
Subgrid 5). (6,4)=4 (from Col 4). Row 6 has two 4s. IMPOSSIBLE. * So (5,4) must be 4. Then (5,6)=1. This leads to (6,3)=4, (6,4)=1. Row 6: [5,1,4,1,2,?]. Two 1s. IMPOSSIBLE. * So (5,3) must be 4. * Then (6,3)=3. * Row 5: [2,6,4,(5,4),5,(5,6)]. Missing 1,3. So {(5,4),(5,6)}={1,3}. * (5,4) is 1 or 4 (from Col 4). So (5,4)=1. ...
https://arxiv.org/abs/2505.21327v1
Figure 27: An example of a Venn diagram problem in which the model needs to calculate and reason to determine the correct answer. Reasoning type: Abductive Difficulty level: Hard Question: Hashi (Hashiwokakero), also known as Bridges, is a logic puzzle. The rules are simple. Some cells start out with numbers from 1 to 8 inclusive; ...
https://arxiv.org/abs/2505.21327v1
2 (from c5). c5 needs 1 (from c1) + 2 (from a5) = 3. This works. So: a1-c1 (1), c1-c5 (1), a5-c5 (2). Let's check counts: a1 (1 ok), c1 (1+1=2 NO, needs 3), a5 (2 ok), c5 (1+2=3 NO, needs 5). This path fails. If a1 connects to a5, then a1 is complete. a5 needs 1 more (from c5). c1 needs 2 (from c5). c5 needs 1 (from a5) + 2...
https://arxiv.org/abs/2505.21327v1
shape) -> 1 (detailed structure) -> 3 (continuation of detailed structure). Therefore, the correct option is B. Qwen2.5-VL-72B-Instruct: The drawing clearly progresses from a blank cover (2) to outlining the pediment (4), then adding the columns (1), and finally filling in the details (3). So the chronological order is 2-4-1-3, which is option B. Answer: B...
https://arxiv.org/abs/2505.21327v1
response, summary your answer **as a single filled matrix** (list of lists or equivalent). One finds that the only way to satisfy: • row 2-from-the-left = 1 ⇒ the first entry of row 2 must be 4 • row 4-from-the-right = 2 ⇒ the 4 in row 4 must sit at column 2, and the right-most cell <4 • column 3-from-below = 2 ⇒ the 4 in colu...
https://arxiv.org/abs/2505.21327v1
\( [C, \uparrow, \square] \) → Second element = \( \uparrow \) (not \( S \)). - **B**: \( [C, S, \uparrow] \) → Second element = \( S \) (matches the pattern). - **C**: \( [C, S, \square] \) → Second element = \( S \), but the third element does not follow the sequence. - **D**: \( [C, \bigstar, \uparrow] \) → Second element ...
https://arxiv.org/abs/2505.21327v1
faces MC (top), ML (front), B (right). These three faces are not opposite pairs. Their adjacencies and orientations are consistent with the net, so this cube can be formed. **Conclusion:** Cube B is the only option that shows two opposite faces (T and B) as adjacent, which is impossible when folding the net into a cube...
https://arxiv.org/abs/2505.21327v1
possible solution: 1. Subtract 11 from 13: (13 - 11) = 2. 2. Multiply the result by 7: 7 * 2 = 14. 3. Add 10 to the result: 14 + 10 = 24. The formula is: **7 * (13 - 11) + 10 = 24** Gemini-2.5-Pro-Thinking: (13-11)*(10-7)=24 GPT-4o o4-mini: (7-10/(11-13))*(13-11)=24 7*(13-11)+10=24 Seed1.5-VL-Thinking Figure 41: An example of the ...
https://arxiv.org/abs/2505.21327v1
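The game-of-24 instance quoted above (reach 24 from 7, 10, 11, 13) can be solved exhaustively; a brute-force search makes it easy to verify answers like 7*(13-11)+10 = 24. This is a generic sketch of such a checker, not part of the benchmark, using exact rational arithmetic to avoid float pitfalls with division.

```python
from fractions import Fraction
from itertools import permutations

def solve_24(nums, target=24):
    """Brute-force the 24 game: repeatedly combine any ordered pair of values
    with +, -, *, / until one value remains; return a solving expression or None."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b if b != 0 else None}

    def combine(vals, exprs):
        if len(vals) == 1:
            return exprs[0] if vals[0] == target else None
        for i, j in permutations(range(len(vals)), 2):
            for sym, fn in ops.items():
                res = fn(vals[i], vals[j])
                if res is None:                      # skip division by zero
                    continue
                rest_v = [v for k, v in enumerate(vals) if k not in (i, j)]
                rest_e = [e for k, e in enumerate(exprs) if k not in (i, j)]
                found = combine(rest_v + [res],
                                rest_e + [f"({exprs[i]}{sym}{exprs[j]})"])
                if found:
                    return found
        return None

    return combine([Fraction(n) for n in nums], [str(n) for n in nums])

expr = solve_24([7, 10, 11, 13])   # finds a valid expression equal to 24
```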
the opponent has no legal moves to escape the check. Please provide your move in the format of **standard algebraic notation**: In chess, moves are written using standard algebraic notation, which includes the following components: **notation of piece moved – destination square** - **Piece**: The piece is represented by ...
https://arxiv.org/abs/2505.21327v1
the pattern in the given circles. ### First Circle: The numbers are 6, 12, and 9. The relationship is: \( 6 + 12 = 18 \), and \( 18 ÷ 2 = 9 \). This matches the middle number. ### Second Circle: The numbers are 16, 4, and 8. The relationship is: \( 16 + 4 = 20 \), and \( 20 ÷ 2 = 8 \). This matches the middle number. ### Thi...
https://arxiv.org/abs/2505.21327v1
of the two big diagonals, and whose legs are horizontal & vertical grid-lines. Along each diagonal you can choose a segment of length 1, 2, 3 or 4 (in small-square-units). For a segment of length k there are exactly 2 such right-triangles (one on each “side” of the diagonal). Since each diagonal has k=1: 4 segments →4·...
https://arxiv.org/abs/2505.21327v1
A with color 1. - Color regions B1, B2, B3, and B4 with colors 2, 3, 4, and 2 respectively (since B1 and B3 are not adjacent, and B2 and B4 are not adjacent). - Color regions C1, C2, C3, C4, C5, C6, C7, and C8 with colors 3, 4, 2, 3, 4, 2, 3, and 4 respectively (alternating colors to ensure no two adjacent regions have th...
https://arxiv.org/abs/2505.21327v1
arXiv:2505.21329v2 [cs.IR] 28 May 2025Something’s Fishy In The Data Lake: A Critical Re-evaluation of Table Union Search Benchmarks Allaa Boutaleb, Bernd Amann, Hubert Naacke and Rafael Angarita Sorbonne Université, CNRS, LIP6, F-75005 Paris, France {firstname.lastname}@lip6.fr Abstract Recent table representation lear...
https://arxiv.org/abs/2505.21329v2
surprisingly strong performance by leveraging benchmark characteristics rather than demonstrating sophisticated semantic reasoning. 1Preprocessed benchmarks used in our evaluation are available at https://zenodo.org/records/15499092 2Our code is available at: https://github.com/Allaa-boutaleb/fishy-tus Our contribut...
https://arxiv.org/abs/2505.21329v2