long-context tasks first improves and then degrades, with the best performance observed at n = 10. We hypothesize that this trend arises because an excessive number of documents increases the task difficulty beyond the model's learning capacity, thereby leading to a drop in performance.

4.2 Impact of Different Mixing Strategies

To investigate the effectiveness of the p-Mix strategy, we compare p-Mix with three alternative mixing approaches: (1) No Mix: training solely on LongMagpie data without short-context SFT datasets; (2) Sequential Mix: first training on short-context data (UltraChat) and then fine-tuning on long-context data (LongMagpie), similar to Ding et al. (2023); (3) Simple Mix: directly combining and shuffling long and short data in a single training stage, similar to the approach used with LongAlign (Bai et al., 2024a); and (4) p-Mix (Ours): our proposed strategy from Algorithm 1, which pre-pends short instructions and probabilistically mixes segments.

As Table 3 demonstrates, the alternative strategies struggle to balance long-context and short-context performance. In contrast, our p-Mix strategy achieves a superior balance: a competitive LongAVG of 61.60 (notably better than sequential and simple mixing, and only a slight trade-off compared to no mixing) together with the best ShortAVG score of 64.10. This highlights the efficacy of the p-Mix approach in maintaining strong long-context reasoning abilities while significantly bolstering performance on short, non-contextual tasks. More details can be found in Appendix A.3.

4.3 Impact of Different Data Sizes

To investigate the impact of data volume on model performance, we train our models using two different sizes of LongMagpie-generated data: 190k and 450k samples. As shown in Table 4, scaling up the training data from 190k to 450k samples leads to consistent improvements across all long-context evaluation benchmarks.
Specifically, we observe gains of +0.81 on HELMET, +0.52 on RULER, and +1.8 on Longbench v2, resulting in a +1.05 improvement in the overall LongAVG metric. This demonstrates that increasing the volume of high-quality long-context instruction data significantly enhances the model's ability to comprehend and reason over extended contexts.

4.4 Impact of Different Source Model Sizes

To assess the impact of the source model on data synthesis, we use LongMagpie to generate two sets of 450k long-context instructions, one with the Qwen-2.5-7B model and one with the Qwen-2.5-70B model. As shown in Table 5, using the larger 70B model improves LongAVG performance (59.61 → 62.56) while showing similar performance on ShortAVG. This superior performance likely stems from larger models' enhanced ability to model long-context capabilities (Xiong et al., 2023), which translates to better results when applied to the LongMagpie method.

Table 4: Increasing the volume of training data improves performance on long-context benchmarks.

Source Model    Data Volume    HELMET    RULER    Longbench v2    LongAVG    ShortAVG
Qwen-2.5-70B    190k           61.29     90.65    32.6            61.51      62.30
Qwen-2.5-70B    450k           62.10     91.17    34.4            62.56      62.37

Table 5: Using the larger source model improves performance on long-context benchmarks.

Source Model    Data Volume    HELMET    RULER    Longbench v2    LongAVG    ShortAVG
Qwen-2.5-7B     450k           59.28     86.95    32.6            59.61      62.18
Qwen-2.5-70B    450k           62.10     91.17    34.4            62.56      62.37

Figure 2: Analysis of LongMagpie-generated data quality and diversity. (a) Higher reward model scores indicate higher quality. (b) Lower pairwise query similarity indicates better diversity.

Figure 3: Visualizations of LongMagpie characteristics: (a, b) t-SNE visualizations of query embeddings from different datasets, showing LongMagpie's dispersed distribution and hence its diversity; (c) long-context performance vs. token consumption, showing LongMagpie's superior performance.

https://arxiv.org/abs/2505.17134v1

4.5 Analysis of LongMagpie Queries

Higher Quality of LongMagpie Queries. We use the reward model FsfairX-Llama3-RM-v0.1 (Dong et al., 2024) to score three long-context fine-tuning datasets. As shown in Figure 2a, the x-axis represents the scores given by the reward model, and the y-axis represents the proportion of data within each dataset at each score. The overall data quality of LongMagpie is significantly higher than that of ChatQA and LongAlign.

Better Diversity of LongMagpie Queries. To investigate the diversity of different datasets, we sampled 300 queries from each dataset, inferred their embeddings using the jina-embeddings-v3 model (Sturua et al., 2024), and visualized their distribution using t-SNE (Van der Maaten and Hinton, 2008), as shown in Figure 3. LongMagpie's distribution is more dispersed, reflecting its better diversity. Furthermore, we repeated the following experiment 30 times: sample queries from each dataset, compute the pairwise similarity between the sampled queries within each dataset, and aggregate the distributions of all similarities, as shown in Figure 2b. LongMagpie queries generally exhibit lower similarity among themselves, which also reflects their good diversity.
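The pairwise-similarity diversity check described above can be sketched in plain Python. The random toy vectors below merely stand in for jina-embeddings-v3 query embeddings, and the function names are our own illustration, not the paper's released code:

```python
import math
import random
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pairwise_similarities(embeddings, sample_size, rng):
    """Sample queries and return all pairwise cosine similarities.

    A lower mean similarity across pairs indicates a more diverse query set.
    """
    sample = rng.sample(embeddings, sample_size)
    return [cosine(u, v) for u, v in combinations(sample, 2)]

rng = random.Random(0)
# Toy stand-ins for query embeddings: 50 vectors of dimension 8.
toy_embeddings = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(50)]
sims = pairwise_similarities(toy_embeddings, sample_size=10, rng=rng)
mean_sim = sum(sims) / len(sims)
```

Repeating this sampling 30 times, as the paper does, simply means calling `pairwise_similarities` in a loop and aggregating all returned similarities before plotting their distribution.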
4.6 Sample Efficiency of LongMagpie

We analyze the sample efficiency of various long-context instruction synthesis methods by quantifying the average number of tokens processed during instruction synthesis. As illustrated in Figure 3c, LongMagpie exhibits exceptional sample efficiency, achieving superior long-context performance while processing substantially fewer tokens per instruction (averaging 1.6K tokens) [1]. This efficiency stands in stark contrast to methods like ChatQA and LongAlign, which consume 10-13x more tokens per instruction during synthesis yet produce inferior performance outcomes. LongMagpie's remarkable sample efficiency facilitates greater scalability and diversity.

5 Related Work

5.1 Long-Context Data Synthesis

Existing approaches to synthesizing long-context data can be divided into two categories.

Continuation-Oriented Methods. Approaches in this category generate long-context data by concatenating shorter documents. Early methods (Roziere et al., 2023; Chen et al., 2023b) used random sampling and concatenation but failed to maintain meaningful long-range dependencies. Later approaches preserved semantic coherence through document clustering (Guu et al., 2020) or nearest-neighbor retrieval (Shi et al., 2023). Quest (Gao et al., 2024a) balances relevance and diversity using keyword matching. NExtLong (Gao et al., 2025) decomposes a document into multiple meta-chunks and extends the context by interleaving hard negative distractors retrieved from pretraining corpora. However, these methods focus on pre-training rather than instruction tuning. In contrast, LongMagpie directly generates instruction-following data with the model's auto-regressive capabilities.

Instruction-Oriented Methods. Many approaches exist for generating long-context instruction data (Zhu et al., 2025b; Wang et al., 2024; Köksal et al., 2024; Sun et al., 2025). Representative works include WildLong (Li et al., 2025), which uses templates and seed questions; LongAlign
(Bai et al., 2024a), which employs Self-Instruct with packing strategies but requires curated examples; ChatQA (Liu et al., 2024b), which blends QA datasets with conversational QA; ChatQA 2 (Xu et al., 2024b), which packs documents into 32-128K token contexts; LOGO (Tang et al., 2024), which adapts self-synthesis for long-context alignment; and GATEAU (Si et al., 2024), which focuses on selecting valuable instructions. These methods obtain high-quality data through complex pipelines. In contrast, LongMagpie eliminates seed questions and complex pipelines by leveraging aligned LLMs' ability to generate contextually relevant queries when provided only with documents.

5.2 Synthesis Methods for Short-Context Instruction Data

Recent studies scale synthesis across various dimensions: Unnatural Instructions (Honovich et al., 2022) yields diverse instructions through paraphrasing; WizardLM (Xu et al., 2023a) uses evolutionary strategies to produce challenging variants; GLAN (Li et al., 2024a) eliminates templates by generating tasks from taxonomies; BARE (Zhu et al., 2025a) improves factual correctness; and Humpback (Li et al., 2024b) performs instruction back-translation. Domain-specific approaches like MetaMath (Yu et al., 2024) generate specialized content. Magpie (Xu et al., 2024c) demonstrates that aligned LLMs can autoregressively generate diverse instructions without human annotation or seed examples. Motivated by Magpie, LongMagpie extends this paradigm to long-context settings by leveraging document-query relationship patterns from instruction tuning, enabling diverse long-context instruction data without specialized prompting.

[1] Our multi-document extension approach enables arbitrary context length extension without incurring additional computational overhead.

6 Conclusion

This paper introduces LongMagpie, a self-synthesis method that automatically generates large-scale long-context instruction data without human annotation or seed examples.
Extensive experiments on HELMET, RULER, and Longbench v2 demonstrate that models trained on LongMagpie data achieve leading performance on long-context tasks while maintaining competitive short-context capabilities when combined with our proposed p-Mix strategy. This work establishes LongMagpie as an effective approach for democratizing long-context capabilities.

7 Limitations

First, LongMagpie unavoidably inherits biases from the source instruction-tuned LLMs, which future work should detect and mitigate. Second, the current implementation of LongMagpie inadequately covers tasks requiring long-form outputs, as it primarily focuses on document-query relationships rather than extended reasoning or generation. Future research should expand support for diverse output formats and complex analytical tasks.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

Mohammad Gheshlaghi Azar, Kevin Swersky, Daniel Tarlow, Satinder Sundararajan, Mohammad Hashemi, Danilo Jimenez Rezende, Stephan Gouws, Wenjie Zhao, Yunhao Li, Edward Gibson, et al. A general theoretical paradigm to understand learning from human preferences. arXiv preprint arXiv:2310.12036, 2023.

Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, et al. LongBench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508, 2023.

Yushi Bai, Xin Lv, Jiajie Zhang, Yuze He, Ji Qi, Lei Hou, Jie Tang, Yuxiao Dong, and Juanzi Li. LongAlign: A recipe for long context alignment of large language models. arXiv preprint arXiv:2401.18058, 2024a.

Yushi Bai, Shangqing Tu, Jiajie Zhang, Hao Peng, Xiaozhi Wang, Xin Lv, Shulin Cao, Jiazheng Xu, Lei Hou, Yuxiao
Dong, Jie Tang, and Juanzi Li. LongBench v2: Towards deeper understanding and reasoning on realistic long-context multitasks. arXiv preprint arXiv:2412.15204, 2024b.

Ramakrishna Bairi, Atharv Sonwane, Aditya Kanade, Vageesh D C, Arun Iyer, Suresh Parthasarathy, Sriram Rajamani, B. Ashok, and Shashank Shet. CodePlan: Repository-level coding using LLMs and planning. Sep 2023.

Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432-7439, 2020.

Avi Caciularu, Matthew E. Peters, Jacob Goldberger, Ido Dagan, and Arman Cohan. Peek across: Improving multi-document modeling via cross-document question-answering. May 2023.

Chi Han Chen, Sang Michael Liu, Tianyi Xiao, Zhilin Xu, Rameswar Panda, Xin Eric Xu, Yikang Ma, Kevin Xie Lin, Soheil Feizi, and Pradip Mittal. Extending context window of large language models via positional interpolation. In Proceedings of the 40th International Conference on Machine Learning, 2023a.

Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. LongLoRA: Efficient fine-tuning of long-context large language models. arXiv preprint arXiv:2309.12307, 2023b.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. arXiv preprint arXiv:1803.05457, 2018.

Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. arXiv preprint arXiv:2307.08691, 2023.

Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Sheng Si, Yun Liu, Zhiyuan Zhang, Yu Wu, Chao Li, et al. UltraChat: A large-scale auto-generated dataset for diverse conversations with large language models. arXiv preprint arXiv:2305.14233, 2023.
Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, and Tong Zhang. RLHF workflow: From reward modeling to online RLHF. arXiv preprint arXiv:2405.07863, 2024.

Yury Fu, Peter Levin, Nikos Casas, Orhan Firat, and Rohan Anil. Data engineering for scaling language models to 128k context. arXiv preprint arXiv:2402.10171, 2024.

Chaochen Gao, Dongfu Li, Liantao Si, Yuanhang Zhao, Xing Wu, Debing Zhang, and Songlin Hu. Quest for long context with L2 norm enhanced position embeddings. arXiv preprint arXiv:2402.17320, 2024a.

Chaochen Gao, Xing Wu, Zijia Lin, Debing Zhang, and Songlin Hu. NExtLong: Toward effective long-context training without long documents. arXiv preprint arXiv:2501.12766, 2025.

Tianyu Gao, Alexander Wettig, Howard Yen, and Danqi Chen. How to train long-context language models (effectively). 2024b. URL https://api.semanticscholar.org/CorpusID:273098476.

Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yixin Dai, Jiawei Sun, Haofen Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997, 2:1, 2023.

Google DeepMind. Gemini model updates: March 2025, March 2025. URL https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/. Accessed on May 8, 2025.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. Retrieval augmented language model pre-training. arXiv preprint arXiv:2002.08909, 2020.

Chi Han, Qifan
Wang, Wenhan Wang, Yun Bai, Eric P Xing, Xiang Ren, and Fan Yang. LM-Infinite: Simple on-the-fly length generalization for large language models. arXiv preprint arXiv:2308.16137, 2023.

Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural Instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689, 2022.

Tianyu Hsieh, Gupta Otkrist, Jeff Wu, Devamanyu Lin, Yuntian Li, Yue Tian, Yann LeCun, and Wenhan Xiong. RULER: Discrimination-aware long-context benchmarking. arXiv preprint arXiv:2405.17781, 2024.

Cheng Jin, Wenli Zhang, Aniruddha Li, Yang Zhang, Xiang Yuan, Wenjie Liu, Bin Zhao, Shiyu Chang, Yongfeng Zhang, and Yueting Zhuang. LLM self-extension: Teaching large language models to use tools by generating canonical API calls. arXiv preprint arXiv:2403.08233, 2024.

Abdullatif Köksal, Timo Schick, Anna Korhonen, and Hinrich Schütze. LongForm: Optimizing instruction tuning for long text generation with corpus extraction. In Findings of EMNLP 2024, 2024.

Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, et al. Tülu 3: Pushing frontiers in open language model post-training. arXiv preprint arXiv:2411.15124, 2024.

Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang, Shaohan Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng, Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui, Benyou Wang, Wai Lam, and Furu Wei. Synthetic data (almost) from scratch: Generalized instruction tuning for language models. arXiv preprint arXiv:2402.13064, 2024a.

Jiaxi Li, Xingxing Zhang, Xun Wang, Xiaolong Huang, Li Dong, Liang Wang, Si-Qing Chen, Wei Lu, and Furu Wei. WildLong: Synthesizing realistic long-context instruction data at scale. arXiv preprint arXiv:2502.16684, 2025.

Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, and Mike Lewis.
Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259, 2023.

Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, and Mike Lewis. Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259, 2024b.

Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". OpenOrca: An open dataset of GPT augmented FLAN reasoning traces. https://huggingface.co/datasets/Open-Orca/OpenOrca, 2023.

Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024a.

Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. LogiQA: A challenge dataset for machine reading comprehension with logical reasoning. arXiv preprint arXiv:2007.08124, 2020.

Zihan Liu, Wei Ping, Rajarshi Roy, Peng Xu, Chankyu Lee, Mohammad Shoeybi, and Bryan Catanzaro. ChatQA: Building GPT-4 level conversational QA models. CoRR, 2024b.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.

Anton Lozhkov, Loubna Ben Allal, Leandro von Werra, and Thomas Wolf. FineWeb-Edu, May 2024. URL https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu.

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022.

Denis Paperno, Germán
Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031, 2016.

Bowen Peng, Ye Zheng, Tie Xiao, Guangtao Liu, Siyuan Wang, Yutao Yang, and Aston Zhang. YaRN: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071, 2023.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023.

Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. ZeRO: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1-16. IEEE, 2020.

Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Liu, Denis Lebowitz, Piero Molino Ferrer, Tom Cochrane, et al. Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.

Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande: An adversarial Winograd schema challenge at scale. Communications of the ACM, 64(9):99-106, 2021.

Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021.

Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Gergely Szilvasy, Rich James, Xi Victoria Lin, Noah A Smith, Luke Zettlemoyer, et al. In-context pretraining: Language modeling beyond document boundaries. arXiv preprint arXiv:2310.10638, 2023.
Shuzheng Si, Haozhe Zhao, Gang Chen, Yunshui Li, Kangyang Luo, Chuancheng Lv, Kaikai An, Fanchao Qi, Baobao Chang, and Maosong Sun. GATEAU: Selecting influential samples for long context alignment. arXiv preprint arXiv:2410.15633, 2024.

Saba Sturua, Isabelle Mohr, Mohammad Kalim Akram, Michael Günther, Bo Wang, Markus Krimmel, Feng Wang, Georgios Mastrapas, Andreas Koukounas, Andreas Koukounas, Nan Wang, and Han Xiao. jina-embeddings-v3: Multilingual embeddings with task LoRA, 2024. URL https://arxiv.org/abs/2409.10173.

Rui Sun, Zhiwei Sun, Yang Li, Yi Ren, and Wei Bi. Efficient training of ultra-long context large language models. arXiv preprint arXiv:2504.06214, 2025.

Zecheng Tang, Zechen Sun, Juntao Li, Qiaoming Zhu, and Min Zhang. LOGO: Long context alignment via efficient preference optimization. arXiv preprint arXiv:2410.18533, 2024.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpaca: A strong, replicable instruction-following model. arXiv preprint arXiv:2303.16199, 2023.

Llama team at Meta. Llama 3: Third-generation open foundation language models. arXiv preprint arXiv:2405.08456, 2024.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2008.

Liang Wang, Nan Yang, Xingxing Zhang, Xiaolong Huang, and Furu Wei. Bootstrap your own context length. arXiv preprint arXiv:2412.18860, 2024.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560, 2022a.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning language models with self-generated
instructions. arXiv preprint arXiv:2212.10560, 2022b. URL https://arxiv.org/abs/2212.10560.

Yunshu Wu, Hayate Iso, Pouya Pezeshkpour, Nikita Bhutani, and Estevam Hruschka. Less is more for long document summary evaluation by LLMs. arXiv preprint arXiv:2309.07382, 2023.

Guangxuan Xiao, Han Tian, Ji Lin, Xiang Zhang, Hanling Cheng, Kai Huang, Chuang Gan, Luowei Zhou, Jordan Boyd-Graber, and Ser-Nam Wang. Efficient streaming language models with attention sinks. In Proceedings of the 41st International Conference on Machine Learning, 2023.

Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, et al. Effective long-context scaling of foundation models. arXiv preprint arXiv:2309.16039, 2023.

Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. WizardLM: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023a.

Peng Xu, Wei Ping, Xianchao Wu, Chejian Xu, Zihan Liu, Mohammad Shoeybi, and Bryan Catanzaro. ChatQA 2: Bridging the gap to proprietary LLMs in long context and RAG capabilities. arXiv preprint arXiv:2407.14482, 2024a.

Peng Xu, Wei Ping, Xianchao Wu, Chejian Xu, Zihan Liu, Mohammad Shoeybi, and Bryan Catanzaro. ChatQA 2: Bridging the gap to proprietary LLMs in long context and RAG capabilities. arXiv preprint arXiv:2407.14482, 2024b.

Zhangchen Xu, Fengqing Jiang, Luyao Niu, Yuntian Deng, Radha Poovendran, Yejin Choi, and Bill Yuchen Lin. Magpie: Alignment data synthesis from scratch by prompting aligned LLMs with nothing. In International Conference on Learning Representations, 2023b.

Zhangchen Xu, Fengqing Jiang, Luyao Niu, Yuntian Deng, Radha Poovendran, Yejin Choi, and Bill Yuchen Lin. Magpie: Alignment data synthesis from scratch by prompting aligned LLMs with nothing. arXiv preprint arXiv:2406.08464, 2024c.
An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

Tse-Yu Yen, Tsu-Jui Cheng, Paul Pu Liang, Xiang Dong, Tianyu Zhao, Wenhan Liu, Wenhao Wang, Min Peng, Oleksiy Shliazhko, Li Zhang, et al. HELMET: A hierarchical efficient benchmark for long context evaluation with modular units and comprehensive taxonomy. arXiv preprint arXiv:2405.18696, 2024.

Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. In ICLR 2024, 2024.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.

Alan Zhu, Parth Asawa, Jared Q. Davis, Lingjiao Chen, Boris Hanin, Ion Stoica, Joseph E. Gonzalez, and Matei Zaharia. BARE: Combining base and instruction-tuned language models for better synthetic data generation. arXiv preprint arXiv:2502.01697, 2025a.

Dawei Zhu, Zhihong Xia, Fan Ye, Yuchao Qin, Haoran Zhou, Haoyu Chen, Rongpeng Ren, Yankai Li, Zheng Zhang, Rui Fu, et al. PoSE: Efficient context window extension of LLMs via positional skip-wise training. arXiv preprint arXiv:2309.10400, 2023.

Wenhao Zhu, Pinzhen Chen, Hanxu Hu, Shujian Huang, Fei Yuan, Jiajun Chen, and Alexandra Birch. Effective data synthesis for long-context instruction tuning. arXiv preprint arXiv:2502.15592,
2025b.

A Detailed Experimental Results

A.1 Training Config

We employ the AdamW optimizer (Loshchilov and Hutter, 2017) with parameters β1 = 0.9 and β2 = 0.95. Following ProLong (Gao et al., 2024b), we concatenate samples up to a 64K sequence length and apply the document masking technique to prevent interactions between independent sequences. Additionally, we utilize FlashAttention-2 (Dao, 2023) and ZeRO (Rajbhandari et al., 2020) to optimize memory usage and accelerate training. The detailed training config is shown in Table 6.

Table 6: Model Training Configuration.

Setting              Value
Initial Model        Llama-3-8B-NExtLong-512K-Base
rotary-emb-base      128,000,000
β1                   0.9
β2                   0.95
lr                   2e-5
precision            bfloat16
gradient-clipping    1.0
weight-decay         0.1
lr-decay-style       cosine
train-iters          250
seq-length           65536
GPU-type             H100
GPU-numbers          8
training-time        10h

A.2 Impact of Multi-Document Setting

Figure 4 illustrates the performance variation under different multi-document configurations.

Figure 4: Impact of the multi-document setting on model performance. As the number of documents increases, the performance on long-context tasks first improves and then decreases.

A.3 Ablation Study on p-Mix Strategy Parameters

To further understand the behavior of the p-Mix strategy, we conducted an ablation study on its key parameters: the number of initial short-context samples pre-pended (NS) and the probability (PL) of selecting a long-context sample during the probabilistic mixing phase (see Algorithm 1). The results, presented in Table 7, show how different configurations impact performance on both long- and short-task evaluation benchmarks. These experiments were conducted with n = 10 for the multi-document context length parameter.

Table 7: Detailed ablation results for different parameter settings of the p-Mix strategy. NS is the number of pre-pended short tasks. PL is the long-context selection probability.
NS    PL    HELMET   RULER   Longbench v2   LongAVG   ShortAVG
0     0.2   61.38    88.52   29.60          59.83     64.17
0     0.4   61.84    89.65   31.20          60.90     64.04
0     0.6   61.64    90.51   31.00          61.05     63.92
0     0.8   61.48    90.54   30.40          60.81     63.41
1     0.2   61.62    88.05   31.60          60.42     64.39
1     0.4   62.11    89.70   33.00          61.60     64.10
1     0.6   61.74    90.58   29.80          60.71     63.71
1     0.8   61.45    90.66   28.80          60.30     63.33
5     0.2   61.41    88.12   29.80          59.78     64.16
5     0.4   61.70    88.67   31.20          60.52     64.13
5     0.6   61.90    90.07   30.00          60.66     63.97
5     0.8   61.34    90.53   31.00          60.96     63.68
30    0.2   61.17    85.67   31.80          59.55     64.41
30    0.4   60.77    85.30   30.00          58.69     64.25
30    0.6   60.67    86.09   30.80          59.19     64.39
30    0.8   60.60    84.42   30.00          58.34     64.21

Algorithm 1 Hybrid SFT Data Construction with Short-Context Pre-pending and Probabilistic Mixing

1:  procedure ConstructHybridSample(DS, DL, PL, Lmax, sep)
2:      Initialize Sconcat ← empty sequence
        ▷ DS: set of short-context SFT samples, DL: set of long-context SFT samples
        ▷ PL: probability of selecting a long-context sample, Lmax: max sequence length
        ▷ sep: separator token/sequence between samples
3:      s0 ← RandomSample(DS)
4:      Sconcat ← FormatSample(s0)
5:      current_length ← Length(Sconcat)
6:      while current_length < Lmax do
7:          rand ← RandomReal(0, 1)
8:          if rand < PL then            ▷ select long-context sample with probability PL
9:              l_next ← RandomSample(DL)
10:             formatted_l_next ← FormatSample(l_next)
11:             if current_length + Length(sep) + Length(formatted_l_next) ≤ Lmax then
12:                 Sconcat ← Sconcat ⊕ sep ⊕ formatted_l_next
13:                 current_length ← Length(Sconcat)
14:             else
15:                 break                ▷ next sample would exceed Lmax
16:             end if
17:         else                         ▷ select short-context sample with probability 1 − PL
18:             s_next ← RandomSample(DS)
19:             formatted_s_next ← FormatSample
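Since the listing of Algorithm 1 is cut off in this excerpt, here is a minimal Python sketch of the same construction. The handling of the truncated short-context branch (mirroring the long-context branch) and the character-based length measure are our assumptions; the paper operates on token sequences:

```python
import random

def construct_hybrid_sample(short_set, long_set, p_long, l_max, sep="\n", rng=None):
    """Sketch of Algorithm 1: start from one pre-pended short-context sample,
    then repeatedly append a long-context sample (with probability p_long) or
    a short-context sample (with probability 1 - p_long) until the next
    sample would exceed l_max. Lengths are counted in characters here for
    simplicity instead of tokens."""
    rng = rng or random.Random()
    s_concat = rng.choice(short_set)  # pre-pended short-context sample (s0)
    while len(s_concat) < l_max:
        pool = long_set if rng.random() < p_long else short_set
        nxt = rng.choice(pool)
        if len(s_concat) + len(sep) + len(nxt) > l_max:
            break  # next sample would exceed l_max
        s_concat = s_concat + sep + nxt
    return s_concat

rng = random.Random(0)
sample = construct_hybrid_sample(
    short_set=["short instruction " + str(i) for i in range(5)],
    long_set=["long document " * 20 + str(i) for i in range(5)],
    p_long=0.4,
    l_max=2000,
    rng=rng,
)
```

Note that, as in the source listing, construction stops as soon as one drawn sample does not fit, rather than retrying with a smaller sample.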
When can isotropy help adapt LLMs' next word prediction to numerical domains?

Rashed Shelim, Department of Electrical and Computer Engineering & Department of Computer Science, Virginia Tech, rasheds@vt.edu
Shengzhe Xu, Department of Computer Science, Virginia Tech, shengzx@vt.edu
Walid Saad, Department of Electrical and Computer Engineering, Virginia Tech, walids@vt.edu
Naren Ramakrishnan, Department of Computer Science, Virginia Tech, naren@cs.vt.edu

Abstract

Recent studies have shown that vector representations of contextual embeddings learned by pre-trained large language models (LLMs) are effective in various downstream tasks in numerical domains. Despite their significant benefits, the tendency of LLMs to hallucinate in such domains can have severe consequences in applications such as energy, nature, finance, healthcare, retail, and transportation, among others. To guarantee prediction reliability and accuracy in numerical domains, it is necessary to open the black box and provide performance guarantees through explanation. However, there is little theoretical understanding of when pre-trained language models help solve numeric downstream tasks. This paper seeks to bridge this gap by understanding when the next-word prediction capability of LLMs can be adapted to numerical domains through a novel analysis based on the concept of isotropy in the contextual embedding space. Specifically, we consider a log-linear model for LLMs in which numeric data can be predicted from its context through a network with softmax in the output layer of LLMs (i.e., the language model head in self-attention). We demonstrate that, in order to achieve state-of-the-art performance in numerical domains, the hidden representations of the LLM embeddings must possess a structure that accounts for the shift-invariance of the softmax function.
By formulating a gradient structure of self-attention in pre-trained models, we show how the isotropic property of LLM embeddings in the contextual embedding space preserves the underlying structure of representations, thereby resolving the shift-invariance problem and providing a performance guarantee. Experiments show that different characteristics of numeric data and model architecture can have different impacts on isotropy.

1 Introduction

Large language models have demonstrated impressive success in adapting to various downstream tasks in numerical domains, such as finance Garza and Mergenthaler-Canseco [2023], Yu et al. [2023], energy Gao et al. [2024], climate science Jin et al. [2024], healthcare Wang and Zhang [2024], wireless communications Xu et al. [2024], and synthetic tabular generation Dinh et al. [2022], Borisov et al. [2023], Xu et al. [2024], among others. Inspired by the success of pre-trained LLMs, several methods have been developed recently in Gruver et al. [2024], Dooley et al. [2023], Nie et al. [2023], Rasul et al. [2024], Woo et al. [2024], Jin et al. [2024], Ansari et al. [2024] by adapting LLMs to numerical domains. For many of these numeric downstream tasks, training a linear classifier on top of the hidden-layer representations generated by pre-trained LLMs has already shown near state-of-the-art performance Jin et al. [2024], Ansari et al. [2024]. However, these models in Gruver et al. [2024], Dooley et al. [2023], Nie et al. [2023], Rasul et al. [2024], Woo et al. [2024], Jin et al. [2024], Ansari et al. [2024] are treated as 'black boxes' where numeric forecasts are controlled by complex
https://arxiv.org/abs/2505.17135v2
nonlinear interactions between many parameters. This makes it difficult to understand how models arrive at their predictions and challenging for users to trust the model outputs. LLMs’ tendency to hallucinate can have serious consequences in critical numeric applications. For example, prediction errors in fraud detection in finance can lead to huge financial losses, and errors in predicting the onset of sepsis or cardiac arrest in healthcare can result in patient deaths. Thus, to guarantee prediction reliability and accuracy in numerical domains, it is necessary to open the black box and provide performance guarantees through explanation. Although recent empirical studies Jin et al. [2024], Nie et al. [2023], Liu et al. [2024] demonstrate the benefits of vector representations of embeddings learned by LLMs in various numeric downstream tasks, there is little theoretical understanding of their empirical success. Thus, a fundamental question arises: “when (or how) can the next-word prediction capability of LLMs be effectively adapted to numerical domains?” The main contribution of this paper is to provide an approach to answer this question by exploiting the isotropic property of LLM hidden representations in the contextual embedding space. Isotropy refers to the geometric property wherein vector representations in the embedding space are uniformly distributed in all directions, a characteristic critical for maintaining the expressiveness of the embedding space Arora et al. [2016], Mu and Viswanath [2018]. To achieve state-of-the-art performance in numerical domains, we show that the hidden representations of LLMs must exhibit a structured form in the contextual embedding space that accounts for the shift-invariance of the softmax function (i.e., the softmax output remains unchanged when all logits are shifted by a constant).
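Shift-invariance of the softmax function is easy to verify numerically. The sketch below (plain NumPy, not code from the paper) shifts every logit by the same constant and checks that the output distribution is unchanged:

```python
import numpy as np

def softmax(z):
    # Subtracting the max is the standard numerically stable form; the
    # subtraction is itself an application of shift-invariance.
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5, 3.0])
shifted = logits + 7.3        # shift every logit by the same constant

# The output distribution is identical, so a pre-training loss defined on
# probabilities cannot pin down the absolute level of the logits.
assert np.allclose(softmax(logits), softmax(shifted))
```

This is exactly the degree of freedom the paper argues must be removed by structure in the hidden representations.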
Without such structure, the model can shift the logits while keeping the training loss unchanged, thereby leaving the logits ineffective for numeric downstream tasks. By formulating a gradient structure of self-attention in pre-trained models, we show how the isotropic property of LLM embeddings in the contextual embedding space preserves the underlying structure of representations, thereby resolving the shift-invariance problem of the softmax function. Our key contributions include: (i) We consider a log-linear model for LLMs and demonstrate theoretically why hidden representations must exhibit structure to address the shift-invariance problem of the softmax function. (ii) We take a deeper look into the hidden representations of pre-trained models and show how isotropy preserves the structural integrity of representations. In particular, we derive an upper bound for the Jacobian matrix, which collects all first-order partial derivatives of self-attention with respect to the input pattern, and show that the m largest eigenvectors of the LLM hidden representations minimize the gradient norm of self-attention. Then, by projecting the representations into lower dimensions using these m largest eigenvectors, we find isotropy within the clusters in the contextual embedding space. (iii) Finally, we provide a comprehensive evaluation across 12 real and 10 synthetic time series datasets over 6 different LLMs.

Figure 1: Time series tokenization.

2 Problem Setup in Numerical Domains

Time Series Tokens and Similarity Measure. Similar to next-word prediction by LLMs, next-value prediction in the numerical
domain can be modeled by time series forecasting techniques Jin et al. [2024], Ansari et al. [2024], which are widely adopted in the machine learning literature. Formally, given a time series $x_{1:T+L} = [x_1, \ldots, x_T, \ldots, x_{T+L}]$, where the first $T$ time instances give the historical context, the next $L$ time instances constitute the forecast region, and $x_t \in \mathbb{R}$ is the observation at each time instance, we are interested in predicting the joint distribution of the next $L$ time instances, $p(x_{T+1:T+L} \mid x_{1:T})$. Since pre-trained models operate on tokens from a finite vocabulary, using them for time series data requires mapping the observations to a finite set of tokens. Based on different numeric applications and LLM architectures, various tokenization techniques, e.g., quantization and scaling Ansari et al. [2024], Rasul et al. [2024], patching Woo et al. [2024], Jin et al. [2024], Nie et al. [2023], and adaptation of language model tokenizers to numeric domains Gruver et al. [2024], Dooley et al. [2023], can be applied to tokenize the time series and create a time series vocabulary $V$ of $N$ time series tokens, i.e., $|V| = N$, as shown in Figure 1. Then, the realization of the next $L$ time instances can be obtained by autoregressively sampling from the predicted distribution $p(k_{T+l+1} \mid k_{1:T+l})$, for $l \in \{1, \ldots, L\}$, where $k_{1:T+l}$ is the tokenized time series and $k_i$ is a time series token in the time series vocabulary $V$. Let $\tilde{\Psi}(k_i) = \{\psi^1(k_i), \psi^2(k_i), \ldots\}$ be the set of all LLM contextual embedding instances of time series token $k_i$. Here, different contexts in the time series sequences yield different LLM embeddings of $k_i$. By constructing $\sum_k |\tilde{\Psi}(k)| = |V|$, we define the inter-token cosine similarity as
\[
\zeta_{\cos} \triangleq \mathbb{E}_{i \neq j}\big[\cos(\psi(k_i), \psi(k_j))\big], \tag{1}
\]
where $\psi(k_i)$ and $\psi(k_j)$ are random samples from $\tilde{\Psi}(k_i)$ and $\tilde{\Psi}(k_j)$, respectively. The expectation is taken over all pairs of different tokens. The inter-token cosine similarity metric describes the similarity between different tokens based on their contexts.
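The similarity in (1) can be estimated by Monte-Carlo sampling over pairs of different tokens. The sketch below uses randomly generated stand-in embeddings; the dictionary `embeddings` and the sizes `N`, `D`, and the pair count are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in contextual embeddings: for each of N tokens, several
# D-dimensional embedding instances drawn from different "contexts".
N, D = 8, 16
embeddings = {k: rng.normal(size=(5, D)) for k in range(N)}

def inter_token_cos_similarity(embeddings, n_pairs=2000):
    # Monte-Carlo estimate of zeta_cos = E_{i != j}[cos(psi(k_i), psi(k_j))]:
    # sample pairs of *different* tokens, one random contextual embedding
    # instance for each, and average the cosine similarity.
    tokens = list(embeddings)
    total = 0.0
    for _ in range(n_pairs):
        i, j = rng.choice(len(tokens), size=2, replace=False)
        u = embeddings[tokens[i]][rng.integers(len(embeddings[tokens[i]]))]
        v = embeddings[tokens[j]][rng.integers(len(embeddings[tokens[j]]))]
        total += float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return total / n_pairs

zeta = inter_token_cos_similarity(embeddings)
assert -1.0 <= zeta <= 1.0
```

For embeddings with no dominant common direction, as here, the estimate lands near zero; anisotropic embeddings would pull it away from zero.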
For ease of reading, we express $T+l$ as $T_l$ and $T+l+1$ as $T_{l+1}$ for the rest of the paper.

Model. We consider a general pre-trained model for numerical data and open the black box of the pre-trained model. Specifically, we assume that the observation probability of $k_{T_{l+1}}$ given $k_{1:T_l}$ satisfies the log-linear model Arora et al. [2016]
\[
p^*(k_{T_{l+1}} = i \mid k_{1:T_l}) \propto \exp(\langle \psi^*(k_{1:T_l}), \psi^*(k_i) \rangle), \tag{2}
\]
where $\psi^*(k_i) \in \mathbb{R}^D$ is a vector that only depends on the time series token $k_i \in V$, and $\psi^*(k_{1:T_l})$ is a function that encodes the tokenized time series sequence $k_{1:T_l}$ into a vector in $\mathbb{R}^D$. The log-linear model aligns with commonly used LLM networks whose last layer is typically a softmax layer. Moreover, we do not consider any prior distribution for the input, which makes our model more general than previous latent models Arora et al. [2016], Wei et al. [2021]. To define the numeric downstream task, let $z_i^*(k, l) := \langle \psi^*(k_{1:T_l}), \psi^*(k_i) \rangle$ be the $i$-th logit of the ground-truth model, and assume that the numeric downstream tasks are defined by a function of the logits, i.e., $f^*(z^*)$. Also let $Z^*(k, l) = \sum_{i=1}^{N} \exp(z_i^*(k, l)) = \sum_{i=1}^{|V|} \exp(\langle \psi^*(k_{1:T_l}), \psi^*(k_i) \rangle)$ be the partition function Arora et al. [2016], i.e., the normalization factor. In LLMs, the partition function is often used to normalize the output probabilities of the model, ensuring that they sum
to 1. Then, the normalized ground-truth model, for all $i \in V$, is given by
\[
p(k_{T_{l+1}} = i \mid k_{1:T_l}) = \frac{\exp(\langle \psi^*(k_{1:T_l}), \psi^*(k_i) \rangle)}{\sum_{i=1}^{|V|} \exp(\langle \psi^*(k_{1:T_l}), \psi^*(k_i) \rangle)} = \frac{\exp(z_i^*(k, l))}{Z^*(k, l)}.
\]
Since we do not know the ground-truth model in reality, we do not have access to $\psi^*(k_i)$ and $\psi^*(k_{1:T_l})$. Instead, we only have access to the student model $\psi(k_i)$ and $\psi(k_{1:T_l})$ that aims to achieve low pre-training loss. We can define the student logits as $z(k, l) := \{\langle \psi(k_{1:T_l}), \psi(k_i) \rangle\}_{i=1}^{|V|}$. Intuitively, $z$ are the contextualized representations learned by the student model during pre-training. The solution of the downstream task is then to learn a function $f(k, l)$. The output of the student model, for all $i \in V$, can be defined as
\[
p(k_{T_{l+1}} = i \mid k_{1:T_l}) = \frac{\exp(\langle \psi(k_{1:T_l}), \psi(k_i) \rangle)}{Z(k, l)}. \tag{3}
\]

Loss Function. As is typical in language models, we use the categorical distribution over the elements in the time series vocabulary $V$ as the output distribution $p(k_{T_{l+1}} \mid k_{1:T_l})$, for $l \in \{1, \ldots, L\}$, where $k_{1:T_l}$ is the tokenized time series. The student model is trained to minimize the cross entropy between the distribution of the tokenized ground-truth label and the predicted distribution. The loss function for a single sequence of tokenized time series is given by Ansari et al. [2024], Wu et al. [2023]
\[
\mathcal{L} = -\sum_{l=1}^{L+1} \sum_{i=1}^{|V|} p^*(k_{T_{l+1}} = i \mid k_{1:T_l}) \log p(k_{T_{l+1}} = i \mid k_{1:T_l})
= \sum_{l=1}^{L+1} \Big[ D_{\mathrm{KL}}\big(p^*(k_{T_{l+1}} \mid k_{1:T_l}) \,\|\, p(k_{T_{l+1}} \mid k_{1:T_l})\big) + H\big(p^*(k_{T_{l+1}} \mid k_{1:T_l})\big) \Big], \tag{4}
\]
where $p(k_{T_{l+1}} = i \mid k_{1:T_l})$ is the categorical distribution predicted by the student model, $p^*(k_{T_{l+1}} = i \mid k_{1:T_l})$ is the distribution of the ground-truth model, $D_{\mathrm{KL}}$ is the KL divergence, and $H(p^*(k_{T_{l+1}} \mid k_{1:T_l}))$ is the entropy of the distribution $p^*(k_{T_{l+1}} \mid k_{1:T_l})$, which is a constant. We assume that the student model achieves a small loss, so that the KL-divergence term in (4) is also small.

Downstream Numerical Task.
We consider a simple downstream task whose prediction on the categorical distribution is linear in $\psi^*(k_{1:T_l})$, that is, $f^*(k, l) = \langle \psi^*(k_{1:T_l}), u^* \rangle = \sum_{i=1}^{|V|} a_i^* z_i^*(k, l)$, where $u^* = \sum_{i=1}^{|V|} a_i^* \psi^*(k_i) \in \mathbb{R}^D$ and $a_i^*$ is a coefficient. This model is still not sufficient to provide a performance guarantee that generalizes to downstream tasks in unseen scenarios. However, the log-probability difference is proportional to the difference in the value of the perfect model (i.e., ground truth) $f^*(k, l)$. This allows the student model to alter the signs of $f^*(k, l)$ without resulting in a large KL divergence Wu et al. [2023]. Then, it is more reasonable to model the numeric downstream task as
\[
f^*(k, l) = \sum_{i=1}^{|V|} a_i^* \sigma\big(z_i^*(k, l) - b_i^*\big) = \sum_{i=1}^{|V|} a_i^* \sigma\big(\langle \psi^*(k_{1:T_l}), \psi^*(k_i) \rangle - b_i^*\big),
\]
where $\sigma$ is the ReLU function and $b_i^*$ denotes the threshold for the logits. The numeric downstream task only considers the logits that are above the threshold, and thus ignores all the entries with very small probabilities.

3 The Role of Isotropy in Adapting LLMs to Numerical Data

As previously discussed in Section 2, we consider LLM networks whose last layer is usually a softmax layer, and the numeric downstream task is determined by a function of the logits. The underlying relation between the logits and the softmax function determines the performance of the numeric downstream tasks. However, the softmax function is shift-invariant; that is, the output of the softmax function remains unchanged when all logits are shifted by a constant. Since we do not have any control over the logit shift of the student model
on unseen data, good performance during training does not necessarily provide any performance guarantee for the numeric downstream task in unseen scenarios. This can be formalized in the following theorem, whose proof is provided in Appendix A.

Theorem 3.1. Let the logits of the ground-truth model be bounded. Then for any $f^*(k, l)$, there exists a set of functions $\{\hat{z}_i(k, l)\}_{i=1}^{|V|}$ such that for all $k$ and $T_{l+1}$, the predictive distribution of the student model $\hat{p}(k_{T_{l+1}} \mid k_{1:T_l})$ matches that of the ground-truth model $p^*(k_{T_{l+1}} \mid k_{1:T_l})$ and $\hat{f}(k, l) = 0$. In other words, there exists a student model with the same pre-training loss as the ground-truth model, but its logits are ineffective for the numeric downstream tasks.

Theorem 3.1 demonstrates that, without any structure in the hidden representations of LLM embeddings, the student model is able to shift the logits for any sample while keeping the pre-training loss unchanged, leaving the logits ineffective for the numeric downstream tasks. Consequently, a theoretical guarantee for numeric downstream task performance requires structure in the LLM representations learned by the pre-trained model. One way to prevent the shift-invariance problem from influencing the performance of the numeric downstream tasks is to keep the partition function stable. Let $\Psi = (\psi_1(k), \ldots, \psi_{|V|}(k))^\top \in \mathbb{R}^{|V| \times D}$ be the hidden representations of the input time series sequence. Then the stability of the partition function can be assessed through the isotropy of the contextual embedding space Arora et al. [2016], Mu and Viswanath [2018] as follows:
\[
I(\{\psi(k)\}) = \frac{\min_{\psi(k) \in C} Z(k, l)}{\max_{\psi(k) \in C} Z(k, l)}, \tag{5}
\]
where $C = \Psi^\top \Psi$ is the correlation matrix of the input pattern and $l = 1, \ldots, L$. From (5), we can see that when the partition function is constant (i.e., stable) across different samples, $I(\{\psi(k)\})$ becomes close to 1, which indicates that the contextual embedding space $\{\psi(k)\}$ is more isotropic Arora et al. [2016], Mu and Viswanath [2018].
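Under one common reading of (5), following the Mu and Viswanath estimator (the min and max taken over the eigenvectors of the correlation matrix), the measure can be sketched as below. The synthetic embeddings and the constant shift used to induce anisotropy are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def isotropy(Psi):
    # I({psi}) = min_c Z(c) / max_c Z(c), with Z(c) = sum_i exp(<c, psi_i>)
    # evaluated at the eigenvectors c of the correlation matrix Psi^T Psi
    # (one standard estimator of (5); an interpretive assumption here).
    _, eigvecs = np.linalg.eigh(Psi.T @ Psi)   # columns are eigenvectors c
    Z = np.exp(Psi @ eigvecs).sum(axis=0)      # partition function per direction
    return float(Z.min() / Z.max())

# Isotropic embeddings (directions uniformly spread): I close to 1.
iso = isotropy(rng.normal(size=(2000, 8)) / np.sqrt(8))
# Anisotropic embeddings (one dominant common direction): I far below 1.
aniso = isotropy(rng.normal(size=(2000, 8)) / np.sqrt(8) + 3.0)

assert 0.0 < aniso < iso <= 1.0
```

A common mean direction makes the partition function explode along one eigenvector and stay moderate along the others, which is exactly the instability the text connects to the shift-invariance problem.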
Note that in (3), the probability of a value at any time instance is the exponential of the corresponding logit $z_i(k, l)$ divided by the partition function $Z(k, l)$. If the partition function remains stable across different samples, the logits can be determined solely by the probabilities, thereby resolving the shift-invariance problem of the softmax function.

4 Study of isotropy in LLM hidden representations

Analysis settings. For illustrative purposes, in this section we present two examples from a specific numerical domain (transport) and for a specific language model (GPT-2) that illustrate the conditions under which isotropy is preserved in LLM representations.

Figure 2: Visualization of transport Dataset 1 (left) and Dataset 2 (right) labels for two extreme cases of velocities, i.e., 10 km/hour and 100 km/hour.

In Section 5, we provide comprehensive evaluations across 22 different datasets from various numerical domains and 5 other language models with different architectures and tokenization techniques. For this specific analysis, we select two datasets from transport because they are dynamic, noisy, and time-varying, and thus hold all the primary characteristics of numerical data across various domains. Specifically, we use two ways of sharing signals which are typically used for signal transmission in transportation settings, as shown in Figure 2. We
call these two datasets “Dataset 1” and “Dataset 2”. The downstream task here is to predict the transport signalling property over a wireless channel using an LLM, where Dataset 1 yields good downstream performance (i.e., near-optimal signal prediction), while Dataset 2 yields bad downstream task performance (i.e., high error in signal prediction). We use NMSE as the performance metric for the numeric downstream task because it is widely used for signal prediction. We deploy the first six layers of GPT-2 Radford et al. [2019] and use the datasets and simulation setups from Liu et al. [2024], which are standard settings for time series forecasting. We predict $L = 4$ future signaling properties based on the historical $T = 16$ signal properties through time series forecasting using GPT-2. The training and validation datasets contain 8,000 and 1,000 samples, respectively, with user velocities uniformly distributed between 10 km/hour and 100 km/hour. The test dataset contains ten velocities ranging from 10 km/hour to 100 km/hour, with 1,024 samples for each velocity.

Table 1: The effective dimension d(0.8)
Layer      1  2  3  4  5  6
Dataset 1  4  4  4  4  4  4
Dataset 2  1  1  1  1  1  1

Effective Dimensions. In each layer of each model, we start with a data matrix $A \in \mathbb{R}^{|V| \times D}$, where $|V|$ represents the number of tokens in the input time series sequence and $D$ corresponds to the embedding dimension. We apply PCA to reduce the dimensionality from $D$ to $m$, i.e., $\tilde{A} \in \mathbb{R}^{|V| \times m}$. The fraction of variance captured by the reduced representation is given by $r_m = \frac{\sum_{i=0}^{m-1} \sigma_i}{\sum_{i=0}^{D-1} \sigma_i}$, where $\sigma_i$ denotes the $i$-th largest eigenvalue of the covariance matrix of $A$. We define the $\epsilon$-effective dimension as $d(\epsilon) \triangleq \arg\min_m \{ m : r_m \geq \epsilon \}$. For instance, if $d(0.8) = 3$, then three principal dimensions retain 80% of the variance. A higher $d$ suggests a more isotropic space Cai et al. [2021], where information is spread across multiple dimensions rather than being concentrated in a narrow subspace.
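The ε-effective dimension can be computed directly from the eigenvalue spectrum of the covariance matrix. A minimal sketch, using a synthetic matrix `A` whose variance is deliberately concentrated in 3 of 768 dimensions (the sizes are illustrative assumptions, not the paper's data):

```python
import numpy as np

def effective_dimension(A, eps=0.8):
    # epsilon-effective dimension d(eps): the smallest m whose top-m principal
    # components capture at least a fraction eps of the total variance.
    eigvals = np.linalg.eigvalsh(np.cov(A, rowvar=False))[::-1]  # largest first
    ratios = np.cumsum(eigvals) / eigvals.sum()                  # r_m, ascending
    return int(np.searchsorted(ratios, eps) + 1)

rng = np.random.default_rng(2)
# Hypothetical embedding matrix: variance concentrated in 3 of 768 axes,
# mimicking a narrow-cone embedding space.
A = np.concatenate([rng.normal(scale=10.0, size=(500, 3)),
                    rng.normal(scale=0.1, size=(500, 765))], axis=1)
assert effective_dimension(A, 0.8) <= 4   # a handful of dims carry 80% variance
```

With a more demanding threshold such as ε = 0.99, the low-variance tail forces the effective dimension much higher, which is the sense in which a small d(0.8) signals anisotropy.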
Table 1 presents the values of $d(0.8)$ for different layers and datasets. Surprisingly, GPT-2 has very few effective dimensions, with $d(0.8) = 4$ for Dataset 1 and $d(0.8) = 1$ for Dataset 2 across layers 1 through 6, compared to its original embedding dimension $D = 768$. The reduced dimensionality suggests that GPT-2’s embeddings lie in a subspace defined by a very narrow cone Ethayarajh [2019], and hence there would be hardly any model capacity. It is then surprising that these language models are as successful as they are in numerical domains, given that most of their embedding vectors are so similar to one another. These observations motivate us to look deeper into the contextual embedding space.

4.1 Clusters in the Contextual Embedding Space

Let $G(\Psi) = (g_1(\Psi), \ldots, g_{|V|}(\Psi))^\top : \mathbb{R}^{|V| \times D} \mapsto \mathbb{R}^{|V| \times D}$ be the self-attention function, i.e., $g_i(\Psi) = \mathrm{softmax}(\Psi \Lambda \Psi^\top) \Psi$, where $\Lambda = W_Q W_K^\top \in \mathbb{R}^{D \times D}$, and $W_Q \in \mathbb{R}^{D \times m}$, $W_K \in \mathbb{R}^{D \times m}$ are the parameter matrices for the query and key matrices of self-attention. The lemma below contributes to understanding why the isotropic property of pre-trained LLMs helps them generalize to numerical domains. The proof of this lemma closely follows the analysis in Kim et al. [2021] and is provided in
Appendix B for completeness.

Lemma 4.1. Consider the Jacobian matrix $J = \left[ \frac{\partial g_i(\Psi)}{\partial \psi_j} \right]_{i,j=1}^{|V|}$, which represents the gradient of the self-attention mapping $G(\Psi)$ with respect to the input time series token embeddings. Then the spectral norm of $J$ satisfies
\[
\|J\|_2 \leq \|\Lambda\|_2 \sum_{i=1}^{|V|} \Big( p_{i,i} + \tfrac{1}{2} \Big) \Big\| \psi_i - \sum_{j=1}^{|V|} p_{i,j} \psi_j \Big\|^2 + \Delta,
\]
where the residual term $\Delta$ is given by
\[
\Delta = \|\Lambda\|_2 \sum_{i \neq j}^{|V|} p_{i,j} \Big\| \psi_j - \sum_{q=1}^{|V|} p_{i,q} \psi_q \Big\|^2 + \frac{\|\Lambda\|_2}{2} \sum_{j=1}^{|V|} \|\psi_j\|^2,
\]
and the attention weights $p_{i,j}$ are defined as $p_{i,j} = \frac{\exp(\psi_i^\top \Lambda \psi_j)}{\sum_{k=1}^{|V|} \exp(\psi_i^\top \Lambda \psi_k)}$.

From Lemma 4.1, we can see that to minimize the gradient norm $\|J\|_2$, we essentially need to make $\sum_{i=1}^{|V|} \big\| \psi_i - \sum_{j=1}^{|V|} p_{i,j} \psi_j \big\|^2$ small. When $\Lambda$ is small and all the input time series token embeddings are centered at the origin, $\sum_{i=1}^{|V|} \psi_i = 0$, we have $\sum_{i=1}^{|V|} \| \psi_i - \Psi^\top p_{i,:} \|^2 \approx \sum_{i=1}^{|V|} \| \psi_i - \Psi^\top \Psi \Lambda \psi_i \|^2$ (see Appendix B). The theorem below shows that the $\Lambda$ minimizing the objective $\sum_{i=1}^{|V|} \| \psi_i - \Psi^\top \Psi \Lambda \psi_i \|^2$ contains the $m$ largest eigenvectors of the correlation matrix $\Psi^\top \Psi$ of the time series token embeddings, where $m$ is the rank of $\Lambda$. The proof of Theorem 4.1 is provided in Appendix C.

Theorem 4.1. Let the eigenvalues of the correlation matrix $\Psi^\top \Psi$ be ordered as $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_D$, and let $\gamma_i \in \mathbb{R}^D$ for $i = 1, \ldots, D$ denote their associated eigenvectors. Then, the matrix $\Lambda^*$ that minimizes the quantity $\sum_{i=1}^{|V|} \| \psi_i - \Psi^\top \Psi \Lambda \psi_i \|^2$ has the optimal form $\Lambda^* = \sum_{i=1}^{m} \frac{1}{\lambda_i} \gamma_i \gamma_i^\top$.

Theorem 4.1 shows that, through training, self-attention learns to perform a function for the numeric downstream tasks that is closely related to the $m$ largest eigenvectors of the LLM hidden representations. In other words, the self-attention mechanism effectively projects input time series tokens onto a low-dimensional contextual embedding space defined by the top eigenvectors of the correlation matrix $\Psi^\top \Psi$.

Figure 3: Variations in GPT-2’s hidden representation for different datasets from the same domain: (a) PCA plot of the contextual embedding space for transport Dataset 1. (b) PCA plot of the contextual embedding space for transport Dataset 2.
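Theorem 4.1 can be checked numerically: with Λ* built from the m largest eigenvectors of Ψ⊤Ψ, the product Ψ⊤ΨΛ* collapses to the projector onto the top-m eigenspace, and the objective is no larger than for an arbitrary rank-m alternative. A sketch with randomly generated embeddings (all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
V, D, m = 60, 12, 4
Psi = rng.normal(size=(V, D))              # rows are token embeddings psi_i

# Eigendecomposition of the correlation matrix, eigenvalues largest first.
lam, gam = np.linalg.eigh(Psi.T @ Psi)
lam, gam = lam[::-1], gam[:, ::-1]

# Theorem 4.1: Lambda* = sum_{i<=m} (1/lambda_i) gamma_i gamma_i^T.
Lambda_star = (gam[:, :m] / lam[:m]) @ gam[:, :m].T

def objective(Lmb):
    # sum_i || psi_i - Psi^T Psi Lmb psi_i ||^2, written row-wise.
    R = Psi - Psi @ Lmb.T @ (Psi.T @ Psi)
    return float((R ** 2).sum())

# Psi^T Psi Lambda* reduces to the projector onto the top-m eigenspace,
# so the objective equals the energy outside that subspace ...
P_m = gam[:, :m] @ gam[:, :m].T
assert np.allclose((Psi.T @ Psi) @ Lambda_star, P_m)

# ... and a random rank-m alternative does no better.
B = rng.normal(size=(D, m))
Lambda_rand = B @ B.T / np.linalg.norm(B @ B.T)
assert objective(Lambda_star) <= objective(Lambda_rand)
```

This mirrors the claim in the text that self-attention effectively projects tokens onto the subspace spanned by the top eigenvectors of Ψ⊤Ψ.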
Motivated by the findings from Lemma 4.1 and Theorem 4.1, we project the models’ hidden representations into a lower-dimensional space using the $m = 3$ largest eigenvectors through PCA, as shown in Figure 3. The three axes of the figure represent the first three principal components of the covariance matrix of the LLM representations of each layer. For instance, in layer 6, the first three principal components account for 76% and 92% of the total variance for Dataset 1 and Dataset 2, respectively. From Figures 3a and 3b, we can see that there are disconnected or slightly overlapping islands that are far away from each other from layer 3 to layer 6. Note that the first principal dimension spans values from 0 to 2000, significantly wider than the other two dimensions, and dominates the total variance. A similar observation holds for transport Dataset 2 in Figure 3b. In (1), the space isotropy is measured on pairs of arbitrary time series token representations, which could reside in two disconnected clusters. However, given that the variance is dominated by the distances between clusters, such an estimate would be biased by the inter-cluster distances. Hence, it is more meaningful to consider a per-cluster (i.e., local)
investigation rather than a global estimate. We start by performing clustering on the LLM representations in the contextual embedding space. There are various methods for clustering, such as k-means and DBSCAN Ester et al. [1996]. We select the k-means clustering method because it is reasonably fast in high embedding dimensions. We use the classical silhouette score analysis Rousseeuw [1987] to determine the number of clusters $|C|$ in the contextual embedding space (see Appendix D for details). Since each LLM contextual embedding instance $\psi(k_i)$ belongs to a particular cluster, the cosine similarity should be measured after shifting the cluster mean to the origin Mu and Viswanath [2018]. Accordingly, we subtract the mean of each cluster (i.e., its centroid) and calculate the adjusted $\zeta_{\cos}$ of Section 2. Assuming we have a total of $|C|$ clusters, let $\Psi_c(k_i) = \{\psi_c^1(k_i), \psi_c^2(k_i), \ldots\}$ be the set of token $k_i$’s contextual embeddings in cluster $c \in C$, and let $\psi_c(k_i)$ be one random sample from $\Psi_c(k_i)$. We define the adjusted inter-token cosine similarity as
\[
\zeta'_{\cos} \triangleq \mathbb{E}_c \Big[ \mathbb{E}_{i \neq j} \big[ \cos\big( \bar{\psi}_c(k_i), \bar{\psi}_c(k_j) \big) \big] \Big], \tag{6}
\]
where $\bar{\psi}_c(k_i) = \psi_c(k_i) - \mathbb{E}_{\psi_c}[\psi_c(k_i)]$. Here $\mathbb{E}_c$ is the average over different clusters, and $\bar{\psi}_c(k_i)$ is the original contextual embedding shifted by the mean, with the mean taken over the samples in cluster $c$ Kim et al. [2021]. The inter-token cosine similarity takes values between $-1$ and $1$. A value close to 0 indicates strong isotropy and ensures the existence of structure in the LLM representations.

Isotropy in Dataset 1. In this section, we provide an example, Dataset 1, in which high prediction accuracy (i.e., low NMSE) is achieved by the GPT-2-based model. In Figure 4a, we compare the NMSE performance of our GPT-2-based LLM with non-language models for different user velocities. From Figure 4a, we can observe that the NMSE of all models gradually increases with user velocity.
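The adjusted similarity in (6) can be sketched as follows. For brevity, this toy version clusters raw vectors with a minimal hand-rolled k-means (a library implementation such as scikit-learn's KMeans would normally be used) and treats every embedding as a separate sample, eliding token identity; the two synthetic clusters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, iters=50):
    # Minimal k-means for illustration only.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == c].mean(0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels

def adjusted_inter_token_cos(X, labels, n_pairs=3000):
    # zeta'_cos: cosine similarity of mean-shifted embeddings, averaged over
    # pairs within the same cluster, then over clusters.
    sims = []
    clusters = np.unique(labels)
    for c in clusters:
        Xc = X[labels == c]
        Xc = Xc - Xc.mean(0)        # shift the cluster centroid to the origin
        for _ in range(n_pairs // len(clusters)):
            i, j = rng.choice(len(Xc), size=2, replace=False)
            sims.append(Xc[i] @ Xc[j] /
                        (np.linalg.norm(Xc[i]) * np.linalg.norm(Xc[j])))
    return float(np.mean(sims))

# Two well-separated isotropic blobs: a global estimate would be biased by
# the inter-cluster distance, while the per-cluster adjusted value is near 0.
X = np.concatenate([rng.normal(size=(300, 10)),
                    rng.normal(size=(300, 10)) + 20.0])
zeta_adj = adjusted_inter_token_cos(X, kmeans(X, 2))
assert abs(zeta_adj) < 0.2
```

The centroid subtraction is what removes the inter-cluster bias discussed above: each cluster is judged on its own local geometry.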
This is because, as velocity increases, the transport signal characteristics change rapidly within a very short coherence time, increasing the prediction difficulty for the model. The GPT-2-based model consistently outperforms the other baselines, demonstrating its high prediction accuracy.

[Figure panel: inter-type cosine similarity ζ′cos close to 0 → isotropy exists in the LLM embedding space → stabilizes the partition function Z → solves the shift-invariance problem of the softmax function → makes the logits z effective for numeric downstream tasks.]

Figure 4: (a) The LLM outperforms all other baselines for all ten different velocities on Dataset 1. (b) Inter-type cosine similarities for Dataset 1 at different velocities. ζ′cos is close to zero for all layers, including layer 6, indicating that nearly perfect isotropy exists in the LLM embedding space for Dataset 1, which preserves the structure in the LLM’s hidden representations and yields the high prediction accuracy.

For illustrative purposes, we pick three user velocities, 10 km/hour, 50 km/hour, and 100 km/hour, for the isotropy assessment of Dataset 1. The GPT-2-based model achieves good NMSE performance for all three velocities, as shown in Figure 4a. We apply the inter-type cosine similarity $\zeta'_{\cos}$ in (6) to measure
the isotropy in the GPT-2 embedding space. From Figure 4b, we can see that GPT-2 has consistently near-zero cosine similarity values for all layers, including layer 6. This indicates that nearly perfect isotropy exists in the GPT-2 embedding space for Dataset 1, which preserves the structure in GPT-2’s hidden representations and yields good downstream task performance.

[Figure panel: high inter-type cosine similarity → weak isotropy (i.e., anisotropy) exists in the LLM embedding space → destabilizes the partition function Z → the softmax function becomes shift-variant → makes the logits z useless for numeric downstream tasks.]

Figure 5: (a) The NMSE performance of the LLM-based model for Dataset 2 deteriorates significantly compared to Dataset 1. (b) Inter-type cosine similarities for Dataset 2 at different velocities. Higher ζcos values indicate that weak isotropy (i.e., anisotropy) exists in the LLM embedding space, which causes a lack of structure in the hidden representations, yielding poor prediction accuracy.

Isotropy in Dataset 2. In this section, we provide an example, Dataset 2, in which poor prediction accuracy (i.e., high NMSE) is achieved by the GPT-2-based model. As shown in Figure 5a, the NMSE fluctuates randomly across velocities, whereas the NMSE for Dataset 1 increases gradually with velocity. The NMSE for Dataset 2 deteriorates significantly compared to Dataset 1. As before, for the three selected user velocities, the NMSE for Dataset 2 is worse than for Dataset 1, as shown in Figure 5a. From Figure 5b, we can observe weak isotropy (i.e., anisotropy) in the LLM embedding space for Dataset 2, causing a lack of structure in the GPT-2 hidden representations and thus leading to bad downstream performance.

5 Experiments

Baselines.
We consider popular pre-trained LLMs as baselines for the numeric downstream tasks, including PatchTST Nie et al. [2023], Lag-Llama Rasul et al. [2024], Moirai-1.0-R Woo et al. [2024], Chronos-T5 Ansari et al. [2024], and Chronos-Bolt (https://huggingface.co/autogluon/chronos-bolt-base). The models use different architectures, time series tokenization techniques, and hyperparameters for the numeric downstream tasks, as summarized in Table 2.

Table 2: LLM model architectures, time series tokenization techniques, and hyperparameter choices. L stands for context length, dh for hidden layer dimension, nL for number of layers, nH for number of heads, and η for learning rate.
Model        | Architecture                                    | Tokenization Technique | Hyperparameters
Chronos-T5   | Encoder-decoder with autoregressive forecasting | Scaling & quantization | Default
Chronos-Bolt | Encoder-decoder with multi-step forecasting     | Scaling & quantization | Default
PatchTST     | Vanilla encoder                                 | Patching               | Patch length: 16, stride: 8, dh = 32, nL = 2, nH = 4
Moirai       | Encoder                                         | Patching               | L = 1024, patch length selected by dataset-specific validation
Lag-Llama    | Decoder                                         | Lag feature            | L = 32

Table 3: Real and Synthetic Datasets
Data Subset        | Domain         | Dataset 1                                 | Dataset 2
Real Datasets      | Energy         | Australian Electricity – Queensland State | Australian Electricity – South Australia
Real Datasets      | Weather        | Solar Radiation                           | Rainfall
Real Datasets      | Finance        | Exchange Rate                             | NN5 Weekly Cash Withdrawals
Real Datasets      | Healthcare     | Hospital Patient Counts                   | COVID-19 Deaths
Real Datasets      | Transportation | Transportation Signaling 1                | Transportation Signaling 2
Real Datasets      | Retail         | Car Sales                                 | Dominick
Synthetic Datasets | Linear         | DotProduct kernel (C = 0)                 | DotProduct kernel (C = 1)
Synthetic Datasets | Seasonality    | Seasonality kernel (period = 0.5W)        |
Seasonality kernel (period = 0.25H)
Synthetic Datasets | Trend      | RationalQuadratic kernel (α = 1) | RationalQuadratic kernel (α = 10)
Synthetic Datasets | Non-Linear | RBF kernel (length scale = 0.1)  | RBF kernel (length scale = 1)
Synthetic Datasets | Stochastic | WhiteKernel (noise level = 0.1)  | WhiteKernel (noise level = 1)

Datasets. We conduct a comprehensive evaluation using 12 different real time series datasets from various numerical domains, including energy, nature, finance, healthcare, retail, and transportation (data sources can be found in Table 4 of Appendix E). In Section 5, we also illustrate our findings using KernelSynth Ansari et al. [2024] (see Algorithm E in Appendix E for details), a method that generates 10 more synthetic datasets via Gaussian processes. In Section 4.1, we showed how the LLM hidden representation varies for two different datasets from the same numerical domain and its impact on downstream task performance. We therefore select two datasets from each numerical domain (as examples of good and bad downstream task performance) for qualitative and quantitative analysis, as shown in Table 3.

5.1 Qualitative Analysis

In this section, we analyze the time series forecasting of the baseline LLMs qualitatively. We focus on synthetically generated time series for a controlled analysis of different types of time series patterns, which belong to 5 different domains: linear, seasonality, trend, non-linear, and Gaussian perturbation. We are particularly interested in the isotropic measurement in the LLM’s last layer, as it is related to the logits and probabilistic inference; all isotropy measures reported in this section are therefore based on the last layer of the baselines. We begin by analyzing time series forecasting performance (i.e., NMSE) for the different baselines and its relation with isotropy in Figure 6. For instance, in Figure 6b, the inter-type cosine similarity in the hidden embedding space of Chronos-T5 is −0.00007 for seasonality (Dataset 1) and 0.0047 for seasonality (Dataset 2).
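NMSE is the forecast metric used throughout. The paper does not spell out its normalization, so the sketch below assumes one standard form, NMSE = E[(ŷ − y)²] / E[y²]:

```python
import numpy as np

def nmse(y_true, y_pred):
    # Normalized mean squared error: E[(y_hat - y)^2] / E[y^2].
    # (Assumed normalization; the paper does not state the exact form.)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_pred - y_true) ** 2) / np.mean(y_true ** 2))

y = np.array([1.0, 2.0, 3.0, 4.0])
assert nmse(y, y) == 0.0                   # perfect forecast
assert nmse(y, np.zeros_like(y)) == 1.0    # predicting zero scores 1
```

The normalization makes errors comparable across datasets with very different signal scales, which matters when contrasting, e.g., exchange rates against patient counts.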
This indicates that stronger isotropy exists (i.e., the inter-type cosine similarity is close to 0) for Chronos-T5 on seasonality (Dataset 1), which preserves the structure in Chronos-T5’s hidden representations and yields good downstream task performance.

Figure 6: NMSE vs. isotropy analysis for 10 different synthetic datasets from 5 different domains.

On the other hand, weaker isotropy exists (i.e., the inter-type cosine similarity is far from 0) for Chronos-T5 on seasonality (Dataset 2), causing a lack of structure in Chronos-T5’s hidden representations and thus leading to bad downstream task performance. Moreover, the NMSE for seasonality (Dataset 1) is lower for Chronos-T5, PatchTST, and Lag-Llama, and higher for Chronos-Bolt and Moirai, while the NMSE is higher for all baselines except Moirai for non-linear (Dataset 2). A similar analysis can be made for all other synthetic datasets and baselines in Figure 6. This shows that the same dataset from any particular domain may yield different forecasting performance for different baselines, as it generates different contextual embedding spaces for language models with different architectures and tokenization techniques (see Appendix F for a full illustration). Next, we examine the influence of isotropy on forecasting performance in two important scenarios: (a) different input context lengths, and (b) different levels of noise in
the input data. The first scenario is important because it provides an analysis for selecting a proper input context length rather than finding the length through trial and error. The second scenario is important because it shows how the level of noise in noisy data impacts performance, since real-world data is mostly noisy.

Isotropy in different input context lengths. We first analyze the impact of isotropy under varying input context lengths. We select Chronos-T5 as an example model and non-linear (Dataset 1) as an example dataset. In Figure 11 in Appendix F, we show how the hidden representations of Chronos-T5 vary for two different input context lengths, L = 500 and L = 100, for non-linear (Dataset 1), which gives an indication of having different isotropic measures for different context lengths.

Figure 7: NMSE vs. isotropy comparison across different input context lengths for synthetic datasets.

In Figure 7, we compare NMSE vs. isotropy across two different input context lengths, L = 500 and L = 100, for different synthetic datasets. As can be seen from the figure, the isotropy values vary across input context lengths and datasets. For instance, in seasonality (Dataset 1), we have (NMSE = 0.0000066, cosine similarity = −0.00076) and (NMSE = 0.0793, cosine similarity = 0.0011) for L = 500 and L = 100, respectively. The decrease in isotropy significantly increases the NMSE for the input context length L = 100. In contrast, in linear (Dataset 2), the isotropy increases for the input context length L = 100, which decreases the NMSE for Chronos-T5. In practice, the input context length is often selected randomly or through trial and error, which may cause higher forecasting errors for different datasets. Isotropy analysis enables us to understand how varying input context lengths influence the hidden representations of the language model.
This insight helps guide improvements in forecasting performance by examining the isotropic properties of the contextual embedding space.

Isotropy in varying noise levels in datasets. Next, we focus on the second scenario to see the impact of noisy datasets on the LLM's performance. Again, we use Chronos-T5 as the example language model. In Figure 8, we compare NMSE vs. isotropy across two different noise levels: one without noise, and the other with Gaussian noise of standard deviation σ = 0.05.

Figure 8: NMSE vs. isotropy comparison across different noise levels in synthetic datasets.

From Figure 8, we can see consistently lower isotropy (i.e., inter-type cosine similarity far from 0) for all noisy synthetic datasets compared to the datasets without noise. For instance, in trend (Dataset 2), we have (NMSE = 0.000024, cosine similarity = −0.00022) and (NMSE = 0.0012, cosine similarity = 0.0040) for σ = 0 and σ = 0.05, respectively. The decrease in isotropy significantly increases the NMSE for the noisy dataset. In practice, the environments of many real numerical domains, such as nature and energy, are noisy and dynamic. In such environments, it is not always possible to measure
the noise in real time and take the necessary steps to clean the input time series for better performance. However, it is always possible to measure the isotropy from the LLM's hidden representations, which can serve as a measure of noise in the input data and thus help improve forecasting performance.

5.2 Quantitative Analysis

In Figure 9, we analyze the time series forecasting performance of different baselines on 12 real datasets and its relation to isotropy. For instance, in Figure 9(e), the inter-type cosine similarity in the hidden embedding space of Moirai is 0.002 for retail (Dataset 1) and 0.1931 for retail (Dataset 2). This indicates that stronger isotropy exists for Moirai on retail (Dataset 1), which preserves the structure in Moirai's hidden representations and yields good downstream task performance. On the other hand, weaker isotropy exists for Moirai on retail (Dataset 2), causing a lack of structure in Moirai's hidden representations and thus leading to poor downstream task performance.

Figure 9: NMSE vs. isotropy analysis for 12 different real datasets of 6 different domains.

Moreover, the NMSE for retail (Dataset 2) is lower for all baselines except Moirai compared to retail (Dataset 1), while the NMSE for energy (Dataset 1) is lower for all baselines except Lag-Llama compared to energy (Dataset 2). A similar analysis can be observed for all other real datasets and baselines in Figure 9. This again shows that a dataset may affect forecasting performance differently for different baselines, and isotropy can be used as a measure of how the dataset generates different contextual embedding spaces for different language models based on their architectures and tokenization techniques. Finally, in Figure 10, we compare NMSE vs. isotropy for varying input context lengths to observe the impact on the real datasets. We select Lag-Llama as our example model.
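The NMSE numbers compared throughout can be computed with a short sketch. The paper's exact normalization is not restated here, so this assumes NMSE is the mean squared error divided by the mean squared signal; the noise level mirrors the σ = 0.05 setting used above:

```python
import numpy as np

def nmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Normalized MSE: mean squared error over mean squared signal (assumed form)."""
    return float(np.mean((y_true - y_pred) ** 2) / np.mean(y_true ** 2))

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 500)
clean = np.sin(t)                                     # stand-in synthetic series
noisy = clean + rng.normal(scale=0.05, size=t.shape)  # Gaussian noise, sigma = 0.05

print(nmse(clean, clean))  # 0.0 for a perfect forecast
print(nmse(clean, noisy))  # roughly 0.005: injected noise power over signal power
```

Under this normalization, the tiny NMSE values reported for the clean synthetic datasets (e.g., 0.0000066) correspond to near-perfect reconstructions of the signal.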
We compare the results for two different input context lengths: 1) the recommended input context length L = 144, and 2) a reduced input context length L = 96.

Figure 10: NMSE vs. isotropy comparison across different input context lengths for real datasets.

As can be seen from the figure, violating the recommended input context length by reducing it from L = 144 to L = 96 degrades the NMSE for some datasets but improves it for others. For instance, the inter-type cosine similarity values move closer to 0, from 0.1091 to 0.0012 and from 0.2014 to 0.0396, for nature (Dataset 2) and finance (Dataset 1), respectively, which in turn improves the NMSE performance.

6 Conclusion and Limitations

Isotropy in embeddings as studied here can serve as a foundation for future research on a deeper understanding of LLMs and their applications in various domains. Beyond isotropy, there could be other methods to approximate the partition function with a constant and make the logits useful for numeric downstream tasks. Moreover, our isotropy study only ensures the existence of structure in the LLM hidden representations and provides a performance guarantee when the structure is preserved by isotropy. Improving the numeric downstream task
performance when structure is not preserved in the LLM representations is a topic of future work.

References

Abdul Fatir Ansari, Lorenzo Stella, Caner Turkmen, Xiyuan Zhang, Pedro Mercado, Huibin Shen, Oleksandr Shchur, Syama Sundar Rangapuram, Sebastian Pineda Arango, Shubham Kapoor, Jasper Zschiegner, Danielle C. Maddix, et al. Chronos: Learning the language of time series. arXiv preprint arXiv:2403.07815, 2024.

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. A latent variable model approach to PMI-based word embeddings. Transactions of the Association for Computational Linguistics, 4:385–399, 2016. doi: 10.1162/tacl_a_00106.

Vadim Borisov, Kathrin Sessler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci. Language models are realistic tabular data generators. In The Eleventh International Conference on Learning Representations, 2023.

Xingyu Cai, Jiaji Huang, Yuchen Bian, and Kenneth Church. Isotropy in the contextual embedding space: Clusters and manifolds. In International Conference on Learning Representations, 2021.

Tuan Dinh, Yuchen Zeng, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee. LIFT: Language-interfaced fine-tuning for non-language machine learning tasks. Advances in Neural Information Processing Systems, 35:11763–11784, 2022.

Samuel Dooley, Gurnoor Singh Khurana, Chirag Mohapatra, Siddartha Naidu, and Colin White. ForecastPFN: Synthetically-trained zero-shot forecasting, 2023. URL https://arxiv.org/abs/2311.01933.

Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, KDD'96, pages 226–231. AAAI Press, 1996.

Kawin Ethayarajh. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings.
In Conference on Empirical Methods in Natural Language Processing, 2019.

Kuofeng Gao, Yang Bai, Jindong Gu, Shu-Tao Xia, Philip Torr, Zhifeng Li, and Wei Liu. Inducing high energy-latency of large vision-language models with verbose images. In ICLR, 2024.

Azul Garza and Max Mergenthaler-Canseco. TimeGPT-1. arXiv preprint arXiv:2310.03589, 2023.

Nate Gruver, Marc Finzi, Shikai Qiu, and Andrew Gordon Wilson. Large language models are zero-shot time series forecasters, 2024. URL https://arxiv.org/abs/2310.07820.

Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, and Qingsong Wen. Time-LLM: Time series forecasting by reprogramming large language models. In International Conference on Learning Representations, 2024.

Hyunjik Kim, George Papamakarios, and Andriy Mnih. The Lipschitz constant of self-attention. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 5562–5571. PMLR, 18–24 Jul 2021.

Boxun Liu, Xuanyu Liu, Shijian Gao, Xiang Cheng, and Liuqing Yang. LLM4CP: Adapting large language models for channel prediction. Journal of Communications and Information Networks, 9(2):113–125, 2024. doi: 10.23919/JCIN.2024.10582829.

Jiaqi Mu and Pramod Viswanath. All-but-the-top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations, 2018.

Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In The Eleventh International Conference on Learning Representations, 2023.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language
models are unsupervised multitask learners. OpenAI Blog, 2019.

Kashif Rasul, Arjun Ashok, Andrew Robert Williams, Hena Ghonia, Rishika Bhagwatkar, Arian Khorasani, Mohammad Javad Darvishi Bayazi, George Adamopoulos, Roland Riachi, Nadhir Hassen, Marin Biloš, Sahil Garg, Anderson Schneider, Nicolas Chapados, Alexandre Drouin, Valentina Zantedeschi, Yuriy Nevmyvaka, and Irina Rish. Lag-Llama: Towards foundation models for probabilistic time series forecasting, 2024. URL https://arxiv.org/abs/2310.08278.

Peter J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53–65, 1987. ISSN 0377-0427.

Dandan Wang and Shiqing Zhang. Large language models in medical and healthcare fields: applications, advances, and challenges. Artificial Intelligence Review, 57(299):1–27, 2024.

Colin Wei, Sang Michael Xie, and Tengyu Ma. Why do pretrained language models help in downstream tasks? An analysis of head and prompt tuning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 16158–16170. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/86b3e165b8154656a71ffe8a327ded7d-Paper.pdf.

Gerald Woo, Chenghao Liu, Akshat Kumar, Caiming Xiong, Silvio Savarese, and Doyen Sahoo. Unified training of universal time series forecasting transformers, 2024. URL https://arxiv.org/abs/2402.02592.

Chenwei Wu, Holden Lee, and Rong Ge. Connecting pre-trained language model and downstream task via properties of representation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.

Shengzhe Xu, Christo Kurisummoottil Thomas, Omar Hashash, Nikhil Muralidhar, Walid Saad, and Naren Ramakrishnan. Large multi-modal models (LMMs) as universal foundation models for AI-native wireless systems. Netwrk. Mag. of Global Internetwkg.
, 38(5):10–20, July 2024. ISSN 0890-8044. doi: 10.1109/MNET.2024.3427313.

Xinli Yu, Zheng Chen, Yuan Ling, Shujing Dong, Zongyi Liu, and Yanbin Lu. Temporal data meets LLM – explainable financial time series forecasting. arXiv preprint arXiv:2306.11025, 2023.

A Proof of Theorem 3.1

Theorem A.1. Let the logits of the ground-truth model be bounded. Then for any $f^*(k, l)$, there exists a set of functions $\{\hat{z}_i(k, l)\}_{i=1}^{|\mathcal{V}|}$ such that for all $k$ and $T_{l+1}$, the predictive distribution of the student model $\hat{p}(k_{T_{l+1}} \mid k_{1:T_l})$ matches that of the ground-truth model $p^*(k_{T_{l+1}} \mid k_{1:T_l})$ and $\hat{f}(k, l) = 0$. In other words, there exists a student model with the same pre-training loss as the ground-truth model, but its logits are ineffective for the numeric downstream tasks.

Proof. We select $\tau \in \mathbb{R}$ such that, for all $k$ and $T_{l+1}$,
$$\tau < \min_{j \in \mathcal{V}} b^*_j - \max_{j \in \mathcal{V}} z^*_j(k, l).$$
By setting $\hat{z}_j(k, l) := z^*_j(k, l) + \tau$ for all $j \in \mathcal{V}$, we get
$$\hat{z}_j(k, l) - b^*_j < z^*_j(k, l) + \min_{j \in \mathcal{V}} b^*_j - \max_{j \in \mathcal{V}} z^*_j(k, l) - b^*_j \le 0,$$
which implies $\sigma(\hat{z}_j(k, l) - b^*_j) = 0$. Since a constant shift of all logits leaves the softmax predictive distribution unchanged, $\hat{p}$ matches $p^*$. Hence, for all $k$ and $T_{l+1}$, we have $\hat{f}(k, l) = 0$.

B Proof of Lemma 4.1

Lemma B.1. Consider the Jacobian matrix $J = \big[\frac{\partial g_i(\Psi)}{\partial \psi_j}\big]_{i,j=1}^{|\mathcal{V}|}$, which represents the gradient of the self-attention mapping $G(\Psi)$ with respect to the input time series token embeddings. Then the spectral norm of $J$ satisfies
$$\|J\|_2 \le \|\Lambda\|_2 \sum_{i=1}^{|\mathcal{V}|} \Big(p_{i,i} + \tfrac{1}{2}\Big) \Big\|\psi_i - \sum_{j=1}^{|\mathcal{V}|} p_{i,j}\psi_j\Big\|^2 + \Delta,$$
where the residual term $\Delta$ is given by
$$\Delta = \|\Lambda\|_2 \sum_{i \neq j}^{|\mathcal{V}|} p_{i,j} \Big\|\psi_j - \sum_{q=1}^{|\mathcal{V}|} p_{i,q}\psi_q\Big\|^2 + \frac{\|\Lambda\|_2}{2} \sum_{j=1}^{|\mathcal{V}|} \|\psi_j\|^2,$$
and the attention weights $p_{i,j}$ are defined as
$$p_{i,j} = \frac{\exp(\psi_i^\top \Lambda \psi_j)}{\sum_{k=1}^{|\mathcal{V}|} \exp(\psi_i^\top \Lambda \psi_k)}.$$

Proof. According to the analysis, the gradient of $g_i(\Psi)$ with respect to the
variable $\psi_j$ is expressed as
$$J_{i,j} = \frac{\partial g_i(\Psi)}{\partial \psi_j} = p_{i,j} I + \Psi^\top Q_i \big(\Psi \Lambda\, \delta_{i,j} + E_{j,i} \Psi \Lambda^\top\big),$$
where the matrix $Q_i$ is defined by $Q_i = \mathrm{diag}(p_{i,:}) - p_{i,:} p_{i,:}^\top$. Here, $p_{i,:} \in \mathbb{R}_+^{|\mathcal{V}|}$ corresponds to the $i$-th row of the probability matrix $P$, $E_{j,i} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ denotes the matrix with a single unit entry at the $(j,i)$-th position and zeros elsewhere, and $\delta_{i,j} \in \{0,1\}$ is the Kronecker delta. We thus have
$$\begin{aligned}
\|J\|_2 &\le \sum_{i,j=1}^{|\mathcal{V}|} \|J_{i,j}\|_2\\
&\le \sum_{i,j=1}^{|\mathcal{V}|} p_{i,j} + \sum_{i=1}^{|\mathcal{V}|} \|\Psi^\top Q_i \Psi\|_2 \|\Lambda\|_2 + \sum_{i,j=1}^{|\mathcal{V}|} \|\Psi^\top Q_i E_{j,i} \Psi\|_2 \|\Lambda\|_2\\
&\le |\mathcal{V}| + \|\Lambda\|_2 \sum_{i=1}^{|\mathcal{V}|} \Big(\sum_{j=1}^{|\mathcal{V}|} p_{i,j} \|\psi_j\|^2 - \Big\|\sum_{j=1}^{|\mathcal{V}|} p_{i,j} \psi_j\Big\|^2\Big) + \|\Lambda\|_2 \sum_{i,j=1}^{|\mathcal{V}|} \|\Psi^\top Q_i e_j \psi_i^\top\|_2\\
&\le |\mathcal{V}| + \|\Lambda\|_2 \sum_{i=1}^{|\mathcal{V}|} \sum_{j=1}^{|\mathcal{V}|} p_{i,j} \Big\|\psi_j - \sum_{q=1}^{|\mathcal{V}|} p_{i,q} \psi_q\Big\|^2 + \|\Lambda\|_2 \sum_{i,j=1}^{|\mathcal{V}|} p_{i,j} \big|\psi_i^\top (\psi_j - \Psi^\top p_{i,:})\big|\\
&\le \|\Lambda\|_2 \sum_{i=1}^{|\mathcal{V}|} \Big(p_{i,i} + \tfrac{1}{2}\Big) \big\|\psi_i - \Psi^\top p_{i,:}\big\|^2 + |\mathcal{V}| + \|\Lambda\|_2 \sum_{i \neq j}^{|\mathcal{V}|} p_{i,j} \big\|\psi_j - \Psi^\top p_{i,:}\big\|^2 + \frac{\|\Lambda\|_2}{2} \sum_{j=1}^{|\mathcal{V}|} \|\psi_j\|^2\\
&= \|\Lambda\|_2 \sum_{i=1}^{|\mathcal{V}|} \Big(p_{i,i} + \tfrac{1}{2}\Big) \big\|\psi_i - \Psi^\top p_{i,:}\big\|^2 + |\mathcal{V}| + \Delta,
\end{aligned}$$
where $\Delta = \|\Lambda\|_2 \sum_{i \neq j}^{|\mathcal{V}|} p_{i,j} \big\|\psi_j - \sum_{q=1}^{|\mathcal{V}|} p_{i,q} \psi_q\big\|^2 + \frac{\|\Lambda\|_2}{2} \sum_{j=1}^{|\mathcal{V}|} \|\psi_j\|^2$.

The theorem below shows that the $\Lambda$ minimizing the objective $\sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \Psi^\top \Psi \Lambda \psi_i\|^2$ is built from the largest $m$ eigenvectors of the correlation matrix $\Psi^\top \Psi$ of the input time series token embeddings, where $m$ is the rank of $\Lambda$. Lemma 1 implies that one of the key components in the Jacobian's upper bound takes the form $\|\psi_i - \sum_{j=1}^{|\mathcal{V}|} p_{i,j} \psi_j\|^2$. Consequently, during optimization, it is natural to aim for a reduction in the gradient magnitude, which motivates minimizing the expression $\sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \sum_{j=1}^{|\mathcal{V}|} p_{i,j} \psi_j\|^2$. This leads us to study the choice of $W_Q$ and $W_K$ that minimizes this quantity, which is equivalent to solving the optimization problem $\min_{\|\Lambda\|_F \le \rho} \sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \sum_{j=1}^{|\mathcal{V}|} p_{i,j} \psi_j\|^2$, where the scalar constraint $\rho$ regulates the size of $\Lambda$. To proceed, we consider the objective in the scenario where $\rho$ is small. In this case, we can approximate the attention weights by $p_{i,j} \approx \frac{1}{|\mathcal{V}|} + \frac{1}{|\mathcal{V}|} \psi_i^\top \Lambda \psi_j$. Now, we define the average embedding as $\bar{\psi} = \Psi^\top \mathbf{1} / |\mathcal{V}|$. It then follows that $\sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \Psi^\top p_{i,:}\|^2 = \sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \bar{\psi} - \Psi^\top \Psi \Lambda \psi_i\|^2$.
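The claim previewed here, and proved as Theorem C.1, that the minimizing Λ is built from the top-m eigenvectors of Ψ⊤Ψ, can be checked numerically. Below is a NumPy sketch with randomly generated, zero-centered embeddings (all names and dimensions are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
V, D, m = 50, 8, 3                        # tokens, embedding dim, rank of Lambda
Psi = rng.normal(size=(V, D))
Psi -= Psi.mean(axis=0)                   # zero-center the embeddings

C = Psi.T @ Psi                           # correlation matrix Psi^T Psi
lam, gamma = np.linalg.eigh(C)            # eigenvalues come back ascending
lam, gamma = lam[::-1], gamma[:, ::-1]    # reorder to descending

def objective(Lam):
    # sum_i || psi_i - Psi^T Psi Lam psi_i ||^2, with rows of R holding residuals
    R = Psi - Psi @ (C @ Lam).T
    return float(np.sum(R ** 2))

# Optimal Lambda from the theorem: top-m eigenvectors scaled by 1/lambda_i.
Lam_opt = sum((1.0 / lam[i]) * np.outer(gamma[:, i], gamma[:, i]) for i in range(m))

# The optimum equals the sum of the trailing eigenvalues of Psi^T Psi.
print(np.isclose(objective(Lam_opt), lam[m:].sum()))   # True

# Any other rank-m choice does no better (Eckart-Young-style bound).
A = rng.normal(size=(D, m))
print(objective(A @ A.T) >= objective(Lam_opt) - 1e-8)  # True
```

The check relies on the fact that C · Λ* reduces to the projector onto the top-m eigenvectors, so the residuals lie in the trailing eigenspace.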
Assuming all input time series patterns are zero-centered, i.e., $\bar{\psi} = 0$, we have $\sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \Psi^\top \Psi \Lambda \psi_i\|^2 = \mathrm{tr}\big((I - \Psi^\top \Psi \Lambda)^2 \Psi^\top \Psi\big)$. Theorem 4.1 establishes that the optimal $\Lambda$ that minimizes $\sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \Psi^\top \Psi \Lambda \psi_i\|^2$ is spanned by the top $m$ eigenvectors of $\Psi^\top \Psi$, where $m$ equals the rank of $\Lambda$.

C Proof of Theorem 4.1

Theorem C.1. Let the eigenvalues of the correlation matrix $\Psi^\top \Psi$ be ordered as $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_D$, and let $\gamma_i \in \mathbb{R}^D$ for $i = 1, \dots, D$ denote their associated eigenvectors. Then the matrix $\Lambda^*$ that minimizes the quantity $\sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \Psi^\top \Psi \Lambda \psi_i\|^2$ has the optimal form $\Lambda^* = \sum_{i=1}^{m} \frac{1}{\lambda_i} \gamma_i \gamma_i^\top$.

Proof. Given that $W_Q \in \mathbb{R}^{D \times m}$ and $W_K \in \mathbb{R}^{D \times m}$, it follows that the matrix $\Lambda$ has rank $m$. Hence, we know $\min_\Lambda \sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \Psi^\top \Psi \Lambda \psi_i\|^2 \ge \sum_{q=m+1}^{D} \lambda_q$. Now, if we set $\Lambda = \sum_{i=1}^{m} \frac{1}{\lambda_i} \gamma_i \gamma_i^\top$, then we obtain $\sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \Psi^\top \Psi \Lambda \psi_i\|^2 = \mathrm{tr}\big((I - \sum_{i=1}^{m} \gamma_i \gamma_i^\top)^2 \Psi^\top \Psi\big) = \sum_{q=m+1}^{D} \lambda_q$. Therefore, the optimal solution $\Lambda$ for minimizing $\sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \Psi^\top \Psi \Lambda \psi_i\|^2$ is essentially characterized as a linear combination of the top $m$ eigenvectors of $\Psi^\top \Psi$. Since a small gradient prefers a small value of $\sum_{i=1}^{|\mathcal{V}|} \|\psi_i - \Psi^\top \Psi \Lambda \psi_i\|^2$, the self-attention mechanism implicitly drives the weight matrices $W_Q$ and $W_K$ to align with the dominant eigen-directions of $\Psi^\top \Psi$.

D Clustering in the Contextual Embedding Space

Clustering. We begin the isotropy assessment by performing clustering on the LLM representations in the contextual embedding space. There are various methods for performing clustering, such as k-means and DBSCAN (Ester et al. [1996]). We select the K-means clustering method because it is reasonably fast in high embedding dimensions (e.g., $d \ge 768$ for GPT-2, ELMo, BERT, etc.). We use the celebrated silhouette score analysis (Rousseeuw [1987]) to determine the number of clusters $|C|$ in the contextual embedding space. After
performing K-means clustering, each observation $p$ (i.e., one of the $J$ vector representations in $V$) is assigned to one of $C$ clusters. For an observation $p$ assigned to the cluster $c \in C$, we compute the silhouette score as follows:
$$a(p) = \frac{1}{|c| - 1} \sum_{q \in c,\, q \neq p} \mathrm{dist}(p, q); \qquad b(p) = \min_{\tilde{c} \neq c} \frac{1}{|\tilde{c}|} \sum_{q \in \tilde{c}} \mathrm{dist}(p, q); \qquad s(p) = \frac{b(p) - a(p)}{\max\{a(p),\, b(p)\}},$$
where $a(p)$ is the mean distance between observation $p$ and the other observations in its own cluster, while $b(p)$ measures the smallest mean distance from observation $p$ to all observations in any other cluster. After computing the silhouette scores $s(p)$ of all observations, a global score is computed by averaging the individual silhouette values, and the partition (with a specific number of clusters) with the largest average score is deemed superior to partitions with a different number of clusters. We select the best $|C|$ as the one belonging to the highest-scoring partition.

E Dataset Description

Real Datasets.

Table 4: The complete list of datasets used for our quantitative and qualitative analysis. The table is divided into three sections, representing how the datasets were used for baseline models.

Dataset | Domain | Freq. | Num. Series | Series Length (min / avg / max) | Prediction Length (H)
Australian Electricity | Energy | 30min | 5 | 230736 / 231052 / 232272 | 48
Car Parts | Retail | 1M | 2674 | 51 / 51 / 51 | 12
Covid Deaths | Healthcare | 1D | 266 | 212 / 212 / 212 | 30
Dominick | Retail | 1D | 100014 | 201 / 296 / 399 | 8
Exchange Rate | Finance | 1B | 8 | 7588 / 7588 / 7588 | 30
FRED-MD | Economics | 1M | 107 | 728 / 728 / 728 | 12
Hospital | Healthcare | 1M | 767 | 84 / 84 / 84 | 12
NN5 (Weekly) | Finance | 1W | 111 | 113 / 113 / 113 | 8
Weather | Nature | 1D | 3010 | 1332 / 14296 / 65981 | 30
Transportation Signal | Transport | 1D | 3010 | 1332 / 14296 / 65981 | 30
Synthetic (10 kernels) | Numerical | - | 1000000 | 1024 / 1024 / 1024 | 64

Synthetic Datasets. We use KernelSynth (Ansari et al. [2024]), a method to generate synthetic datasets using Gaussian processes (GPs).
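The silhouette computation defined above can be sketched directly with NumPy (a minimal sketch; in practice, scikit-learn's silhouette_score implements the same measure):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette score, following the a(p), b(p), s(p) definitions."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    s = np.empty(n)
    for p in range(n):
        own = labels == labels[p]
        # a(p): mean distance to the other members of p's own cluster
        a = dist[p, own & (np.arange(n) != p)].mean()
        # b(p): smallest mean distance to the members of any other cluster
        b = min(dist[p, labels == c].mean()
                for c in np.unique(labels) if c != labels[p])
        s[p] = (b - a) / max(a, b)
    return float(s.mean())

rng = np.random.default_rng(4)
# Two well-separated blobs stand in for clustered LLM representations:
# the true 2-cluster partition should score near 1, a random one near 0.
X = np.vstack([rng.normal(-3, 0.3, (40, 8)), rng.normal(3, 0.3, (40, 8))])
good = np.repeat([0, 1], 40)
bad = rng.integers(0, 2, 80)
print(silhouette(X, good) > silhouette(X, bad))   # True
```

Running this over K-means partitions with different numbers of clusters and keeping the highest-scoring one is exactly the |C| selection procedure described above.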
KernelSynth allows the generation of large, diverse datasets tailored to specific patterns or statistical properties, which is particularly useful when real-world data is scarce or incomplete. In this synthetic data generation process, the GPs are defined by a mean function μ(t) and a positive definite kernel κ(x_i, x_j), which specifies a covariance function for variability across input pairs (x_i, x_j). A kernel bank K (which consists of linear, RBF, and periodic kernels) is used to define diverse time series patterns. The final kernel κ̃(x_i, x_j) is constructed by sampling and combining kernels from K using binary operations such as + and ×. Synthetic time series are generated by sampling from the GP prior, GP(μ(t) = 0, κ̃(x_i, x_j)). The following algorithm presents the pseudocode for KernelSynth, which essentially follows the approach in Ansari et al. [2024].

Algorithm 1 KernelSynth: Generating Synthetic Sequences via Gaussian Process Kernels
Input: Kernel bank K, maximum kernels per time series J = 5, and length of the time series l_syn = 1024.
Output: A synthetic time series x_{1:l_syn}.
1: j ∼ U{1, J} ▷ sample the number of kernels
2: {κ_1(t, t′), …, κ_j(t, t′)} ∼ i.i.d. K ▷ sample j kernels from the kernel bank K
3: κ*(t, t′) ← κ_1(t, t′)
4: for i ← 2 to j do
5: ⋆ ∼ {+, ×} ▷ pick
a random operator (add or multiply)
6: κ*(t, t′) ← κ*(t, t′) ⋆ κ_i(t, t′) ▷ compose kernels
7: end for
8: x_{1:l_syn} ∼ GP(0, κ*(t, t′)) ▷ draw a sample from the GP prior
9: return x_{1:l_syn}

F Full Visualization of PCA Plots for Different Models

The variations in Chronos-T5's hidden representations for different input context lengths are depicted in Figure 11.

Figure 11: Variations in Chronos-T5's hidden representations for different input context lengths for the same synthetic dataset "non-linear (Dataset 1)": (a) contextual embedding space for input context length L = 500; (b) contextual embedding space for input context length L = 100.

F.1 Synthetic datasets

[PCA plots of the contextual embedding spaces of PatchTST, Chronos-Bolt, Chronos-T5, Moirai, and Lag-Llama for non-linear (Dataset 1).]
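Algorithm 1 can be sketched in a few lines of NumPy. The kernel hyperparameters and the diagonal jitter are my own illustrative choices, not values from the paper:

```python
import numpy as np

def rbf(t, s, ell=1.0):
    return np.exp(-((t[:, None] - s[None, :]) ** 2) / (2 * ell ** 2))

def periodic(t, s, p=0.2, ell=1.0):
    return np.exp(-2 * np.sin(np.pi * np.abs(t[:, None] - s[None, :]) / p) ** 2 / ell ** 2)

def linear(t, s, c=0.0):
    return (t[:, None] - c) * (s[None, :] - c)

def kernel_synth(rng, length=256, max_kernels=5):
    """Sample one synthetic series from a GP with a randomly composed kernel."""
    t = np.linspace(0, 1, length)
    bank = [rbf, periodic, linear]                     # kernel bank K
    j = rng.integers(1, max_kernels + 1)               # number of kernels, U{1, J}
    K = bank[rng.integers(len(bank))](t, t)
    for _ in range(j - 1):
        K_i = bank[rng.integers(len(bank))](t, t)
        K = K + K_i if rng.random() < 0.5 else K * K_i  # compose with + or x
    K += 1e-6 * np.eye(length)                          # jitter for numerical stability
    return rng.multivariate_normal(np.zeros(length), K)  # draw from the GP prior

x = kernel_synth(np.random.default_rng(5))
print(x.shape)   # (256,)
```

Repeating the draw yields the large pool of diverse synthetic series used for the analyses above.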
arXiv:2505.17136v1 [cs.CL] 22 May 2025

RESEARCH ARTICLE

Foundation Models for Geospatial Reasoning: Assessing the Capabilities of Large Language Models in Understanding Geometries and Topological Spatial Relations

Yuhan Ji^a, Song Gao^a,*, Ying Nie^a, Ivan Majić^b, and Krzysztof Janowicz^b,c

^a GeoDS Lab, Department of Geography, University of Wisconsin-Madison, USA; ^b Department of Geography and Regional Research, University of Vienna, Austria; ^c Department of Geography, University of California-Santa Barbara, USA

ARTICLE HISTORY
Compiled May 26, 2025

ABSTRACT
AI foundation models have demonstrated some capabilities for the understanding of geospatial semantics. However, applying such pre-trained models directly to geospatial datasets remains challenging due to their limited ability to represent and reason with geographical entities, specifically vector-based geometries and natural language descriptions of complex spatial relations. To address these issues, we investigate the extent to which a well-known text (WKT) representation of geometries and their spatial relations (e.g., topological predicates) are preserved during spatial reasoning when the geospatial vector data are passed to large language models (LLMs), including GPT-3.5-turbo, GPT-4, and DeepSeek-R1-14B. Our workflow employs three distinct approaches to complete the spatial reasoning tasks for comparison, i.e., geometry embedding-based, prompt engineering-based, and everyday language-based evaluation. Our experiment results demonstrate that both the embedding-based and prompt engineering-based approaches to geospatial question-answering tasks with GPT models can achieve an accuracy of over 0.6 on average for the identification of topological spatial relations between two geometries. Among the evaluated models, GPT-4 with few-shot prompting achieved the highest performance, with over 0.66 accuracy on topological spatial relation inference.
Additionally, the GPT-based reasoner is capable of properly comprehending inverse topological spatial relations, and including an LLM-generated geometry can enhance the effectiveness of geographic entity retrieval. GPT-4 also exhibits the ability to translate certain vernacular descriptions about places into formal topological relations, and adding geometry-type or place-type context in prompts may improve inference accuracy, although this varies by instance. The performance on these spatial reasoning tasks unveils the strengths and limitations of current LLMs in the processing and comprehension of geospatial vector data and offers valuable insights for the refinement of LLMs with geographical knowledge towards the development of geo-foundation models capable of geospatial reasoning.

KEYWORDS
GeoAI; geospatial reasoning; large language models; GPT; topological spatial relations

*A preprint draft and the final version will be available on the International Journal of Geographical Information Science; Corresponding Author. Email: song.gao@wisc.edu

1. Introduction

Our interaction with Artificial Intelligence (AI) based systems is changing radically due to progress in generative Foundation Models (FM) and the conversational, natural-language-driven style of interaction with many of these models. While most prior AI models were developed with a limited range of downstream tasks in mind, foundation models aim to be general-purpose building blocks supporting a broad range of applications. Essentially, they are trained on a substantially broader set of data and, while giving up accuracy for any specific task during development, are easily fine-tuned before or during deployment. Large language models (LLMs) (Brown et al., 2020; Radford et al., 2019), such as Generative Pre-trained Transformers (GPT) (Achiam et al., 2023; Radford et al., 2018), and text-to-image models (Frolov et al., 2021), such
https://arxiv.org/abs/2505.17136v1
as DALL-E (Ramesh et al., 2021), are specific types of foundation models. Most of these models are generative, i.e., they return novel, synthetic output such as natural language answers or imagery instead of providing answers by (information) retrieval as was common in prior systems, e.g., from the field of expert systems. While foundation models may not inherently prescribe a specific interaction style, they can be trained or fine-tuned for various types of interactions by carefully crafting the training dataset for the intended purpose. For example, OpenAI’s Codex is trained using paired code examples and comments, enabling natural language instructions to guide code generation effectively (Chen et al., 2021). Similarly, Contrastive language-image pre-training (CLIP) facilitates tasks like image search from paired textual descriptions. Reinforcement learning with human feedback (RLHF) is another approach that aligns model outputs with user intent, improving conversational dialog flow, adherence to prompts, and reducing harmful content. The resulting conversational style of interaction is part of their broad appeal but also causes new challenges. Together, these breakthroughs have opened the door towards conversation-style artificial GIS analysts (“GeoMachina”) (Janowicz et al., 2020). For instance, ChatGPT-4 can understand instructions for frequent GIS tasks like reading in a dataset (Mooney et al., 2023), performing simple spatial analysis steps (by generating PySAL code), or even suggesting appropriate next steps. Consequently, researchers started exploring the capabilities and limits of current AI in representing spatial data (Ji and Gao, 2023), generating maps (Zhang et al., 2023), extracting place semantics (Hu et al., 2023), automating GIS operations (Li and Ning, 2023; Zhang et al., 2024), generating code (Gramacki et al., 2024), and drawing inferences from such data (Mai et al., 2024). 
Interestingly, the gaps this early research revealed are not unexpected, as they have been documented as pain points of prior AI systems before (Janowicz et al., 2015). Prominently featured among these shortcomings is the representation of and reasoning with topological spatial relations (Cohn and Renz, 2008). Notably, this holds across foundation models, i.e., LLMs and text-to-image models struggle similarly. For instance, ChatGPT (OpenAI, 2022) will provide a metric distance (e.g., several kilometers) when asked about the border of two neighboring countries. Similarly, DALL-E frequently fails to generate images of regions or parts described using terms such as bordering, adjacent, contained, or specific types of maps (Zhang et al., 2023). This is a critical insight, as it implies that current work on geo-foundation models (Xie et al., 2023), e.g., location embeddings (Mai et al., 2022b), may benefit the broader AI community across models. To better understand the limitations of LLMs in handling spatial data and to develop foundation models for advancing geospatial artificial intelligence (GeoAI) (Gao et al., 2023), this work aims to explore the potential of representing spatial object geometries in the WKT format to enable LLMs to perform GIS operations and enhance geospatial reasoning. In this work, we present extensive experiments with the well-known text (WKT) representation of geometries as inputs for LLMs and with natural language descriptions of (vague) spatial configurations. However, it is important to note that, unlike other types of data,
accurate geometries (e.g., points, polylines, and polygons) and their spatial relations, as used in GIS, are not usually expressed in natural language text for such models to consume during training. Without explicitly addressing such structural deficiencies, the proposed approach should not be assumed to be directly applicable in practice. The research contributions (RC) of our work are as follows:

• RC1: We develop a workflow to assess the ability of LLMs to reason with topological spatial relations, more specifically, a subset of topological relations specified according to the Dimensionally Extended 9-Intersection Model (DE-9IM). To do so, we compare two approaches. First, we encode the geometries and their topological relations in an embedding space using LLMs. Second, we use a prompt engineering method to pass the WKT format of geometries directly to the LLMs.

• RC2: To test the capabilities of LLMs, we first utilize the WKT representation of two geometries to predict the topological spatial relation between them, and then we use one of the geometries and the topological spatial relation to predict the second geometry. To do so, we utilize pre-trained text embedding models and also use prompt engineering to elicit the target geometry.

• RC3: Finally, we study the ability of LLMs to extract formalized topological spatial relations between geographic entities from vernacular descriptions (i.e., everyday language) of the relations between geographic entities, e.g., as found in administrative place descriptions from DBpedia/Wikipedia.

The remainder of the paper is organized as follows. We first review the literature on spatial relations, qualitative spatial reasoning and conceptual neighborhoods, large language models, and GeoAI foundation models in Section 2. We then introduce the methodology and workflow used in this research in Section 3, followed by the experiment design and dataset processing in Section 4.
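The prompt engineering approach referenced in RC1 and RC2 can be illustrated with a minimal sketch of few-shot prompt assembly. The wording and the example pair below are hypothetical, not the paper's actual prompt:

```python
# Hypothetical few-shot prompt assembly for the topological-relation task.
# The instruction text and the example are illustrative placeholders.
FEW_SHOT = [
    ("POINT (0 0)", "POLYGON ((-1 -1, 1 -1, 1 1, -1 1, -1 -1))", "within"),
]

def build_prompt(wkt_a: str, wkt_b: str) -> str:
    lines = ["Identify the DE-9IM topological relation between two geometries."]
    for a, b, rel in FEW_SHOT:
        lines.append(f"Geometry A: {a}\nGeometry B: {b}\nRelation: {rel}")
    # The query pair is appended last, leaving the relation for the LLM to fill in.
    lines.append(f"Geometry A: {wkt_a}\nGeometry B: {wkt_b}\nRelation:")
    return "\n\n".join(lines)

prompt = build_prompt("LINESTRING (0 0, 2 2)",
                      "POLYGON ((1 1, 3 1, 3 3, 1 3, 1 1))")
print(prompt)
```

Passing WKT strings verbatim like this is what distinguishes the prompt engineering approach from the embedding-based one, where the same strings are first encoded into vectors.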
After that, we present the experiment results on topological spatial relation qualification and retrieval tasks using LLMs in Section 5. We further discuss the confusion between the topological predicates and their corresponding conceptual neighborhoods in Section 6. Finally, we conclude this paper and offer insights into future work in Section 7.

2. Related work

2.1. Spatial relations

Spatial relations refer to the connections between spatial objects regarding their geometric properties (Guo, 1998), which specify the location of one object relative to another (Carlson and Logan, 2001) or to multiple other objects (Majic et al., 2021). On the one hand, describing spatial relations in natural language is essential for understanding our surroundings in spatial cognition and navigating through space (Freksa et al., 1998). On the other hand, a reverse parsing process, where exact spatial relations are identified from natural language descriptions, is vital to improving the quality of information retrieval and human-computer interaction in tasks such as map reading (Head, 1984), geographic question answering (Gao and Goodchild, 2013; Mai et al., 2020; Scheider et al., 2021), spatial query and reasoning (Du et al., 2005; Guo et al., 2022; Wang, 2000), disaster management (Cervone et al., 2016; Wang et al., 2016), and driving and robotics navigation (Tellex et al., 2011; Wallgrün et al., 2014). Typically, binary spatial relations
use the format of a triplet {subject, predicate (preposition), object} to describe the relative positions of objects in space. In this format, the subject is an entity being described in relation to another entity, the predicate (preposition) is the descriptor between the subject and object, and the object is the entity that the subject is being related to in terms of position or location. For example, "Santa Barbara is situated northwest of Los Angeles" would be expressed as {Santa Barbara, northwest of, Los Angeles} in the format of spatial relations. Even though spatial relations pervade our daily conversations, people tend to use only a limited number of predicates to describe topological, directional, and distance relations (Frank, 1992; Mark and Egenhofer, 1994). These expressions are qualitative in nature, offering approximate descriptions of an infinite range of possible spatial configurations. Nevertheless, speakers can convey complex spatial layouts by combining these basic predicates with contextual cues. For example, we might describe the locale of Santa Barbara as "Santa Barbara is connected via U.S. Highway 101 to Los Angeles about 100 miles to the southeast.", or the position of a person as standing "in front of the building, facing east." The ability to combine and modify spatial predicates allows us to express a wide range of spatial relationships with a relatively small vocabulary, but it increases the difficulty of representing and understanding the meanings of such spatial relation descriptions for computers. The flexibility and ambiguity inherent in natural language often obscure the precise geometry of spatial arrangements, creating a disconnect between semantic interpretation and physical spatial layout. The abundance of web documents containing geographical references offers the opportunity to retrieve spatially-aware information and support qualitative spatial reasoning from natural language texts (Jones
To bridge the semantic-physical gap, prior work has focused on extracting spatial relations between named geographic entities by interpreting linguistic cues in text. These efforts include parsing grammatical and spatial semantic structures (Kordjamshidi et al., 2011; Loglisci et al., 2012; Skoumas et al., 2016), as well as applying supervised machine learning models trained on annotated data with spatial linguistic features (Wu et al., 2023a; Yuan, 2011). The resulting qualitative spatial relations, enriched by contextual narratives (Wallgrün et al., 2015), provide a foundation for computational models that link natural language semantics to structured representations of physical space.
2.2. Formalism of topological relations and conceptual neighborhoods
In the field of GIS, attempts have been made to formalize the conversion between quantitative computational models of spatial relations and qualitative spatial representations from human discourse (Chen et al., 2015; Cohn and Hazarika, 2001). In Clementini et al. (1994), topological relations are defined as spatial relations that are preserved under such transformations as rotation, scaling, and rubber sheeting. For topological spatial relations, the region connection calculus (RCC) (Randell et al., 1992) and point-set topology intersection models (IM), e.g., the 4-IM based on intersections of the boundaries and interiors of two objects (Egenhofer and Franzosa, 1991) and the 9-IM, which also considers the exteriors of the two objects (Egenhofer and Herring, 1991), are widely used approaches. RCC-8
(Cui et al., 1993) is a set of eight jointly exhaustive and pairwise disjoint relations defined for regions. The basic relations include the topological predicates equal (EQ), externally connected (EC), disconnected (DC), partially overlaps (PO), and tangential (TPP/TPPi) and nontangential (NTPP/NTPPi) proper part relations, which have been shown to be cognitively adequate and well distinguished by humans (Renz and Nebel, 1998). Point-set topology intersection models analyze whether the intersections between the interiors, boundaries, and exteriors of two objects are empty or non-empty point sets. The Dimensionally Extended 9-intersection model (DE-9IM) (Clementini et al., 1993) further considers the dimensionality of each geometry in the intersection matrix, so that the 9-IM is no longer a binary test of intersection. Based on the DE-9IM model, five mutually exclusive relations are identified (Clementini and Di Felice, 1996), including {disjoint, touches (meets), crosses, overlaps, within}. The Open Geospatial Consortium (OGC) later added {intersects, contains, equals} to the set for the convenience of GIS software users, and these predicates are included in the GeoPandas Python package for programmers. The recent development of RCC∗-9 expands the dimensions of RCC-8 and allows for a unified framework to model topological spatial relations (Clementini and Cohn, 2014, 2024). However, since the DE-9IM predicates were selected for better user interaction and have been implemented by the OGC, this work focuses on DE-9IM. In Mark and Egenhofer (1994), human subject testing was conducted to evaluate their model for spatial relations between lines and regions. The participants were presented with pairs of lines or regions and asked to rate the spatial relation between them using a Likert scale that ranged from “no relation” to “strongly related”. The pairs of lines and regions were generated based on the 19 topologically distinct spatial relations defined in the authors’ model.
The human judgments were then compared to the predicted spatial relations generated by their model. The results showed that the model’s predicted topological spatial relations matched the human judgments with a high degree of accuracy, indicating the effectiveness of the model in capturing human perception of topological spatial relations. In both the RCC and IM lineages, the idea of smooth transitions from one topological relation to another was discussed early on. This means, for example, that if two polygon objects are disjoint, they must first pass through a touch relationship before moving to overlap. In this sense, some relations are more similar, or closer, to each other than others; this is known as the conceptual neighborhood of topological relations. Figure 1 shows the neighborhood graphs using the RCC-8 (Figure 1a) and 9-IM (Figure 1b) nomenclature. Since the DE-9IM example only preserves the connection of a topological relation with its “closest” relation, inside/contains do not connect with equal in the graph. In addition to the conceptual neighborhood, Egenhofer and Al-Taha (1992) proposed a formula for calculating the topological distance between topological relations using matrix representations, where a smaller distance means the two topological relations are more similar. We adopt the topological distance for evaluation later in this paper to provide a more nuanced perspective on whether LLMs’ differentiation of topological relations aligns with human perception.
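A graph-based topological distance of this kind can be sketched in a few lines of stdlib-only Python. The edge list below is our reading of the standard RCC-8 conceptual neighborhood graph (as in Figure 1a), not code from the paper:

```python
from collections import deque

# Assumed edges of the RCC-8 conceptual neighborhood graph (Figure 1a):
# DC-EC-PO, PO-TPP-NTPP, PO-TPPi-NTPPi, TPP-EQ-TPPi.
NEIGHBORS = {
    "DC":    ["EC"],
    "EC":    ["DC", "PO"],
    "PO":    ["EC", "TPP", "TPPi"],
    "TPP":   ["PO", "NTPP", "EQ"],
    "TPPi":  ["PO", "NTPPi", "EQ"],
    "NTPP":  ["TPP"],
    "NTPPi": ["TPPi"],
    "EQ":    ["TPP", "TPPi"],
}

def topological_distance(rel1: str, rel2: str) -> int:
    """Breadth-first search: number of edges between rel1 and rel2."""
    queue, seen = deque([(rel1, 0)]), {rel1}
    while queue:
        rel, d = queue.popleft()
        if rel == rel2:
            return d
        for nxt in NEIGHBORS[rel]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    raise ValueError("relations not connected")

print(topological_distance("DC", "NTPP"))  # path DC-EC-PO-TPP-NTPP: 4
```

Two disjoint regions (DC) must pass through EC, PO, and TPP before reaching NTPP, matching the smooth-transition idea described above.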
Figure 1.: The conceptual neighborhood of topological relations in RCC-8 (Randell et al., 1992) on the left (redrawn for comparison) and 9-IM (Egenhofer and Al-Taha, 1992) on the right.
2.3. Large language models and GeoAI foundation models
The launch of ChatGPT by OpenAI (2022) marked a significant turning point, drawing widespread interest in Large Language Models (LLMs) and conversational AI from the public. Language-based foundation models span an impressive range of parameter counts, from 110 million in BERT (Devlin et al., 2018) to 1.5 billion in GPT-2 (Radford et al., 2019), and up to 137 billion in LaMDA (Google’s Bard) (Thoppilan et al., 2022) and 175 billion in GPT-3 (Brown et al., 2020), demonstrating significant variation in network architectures, scale, and purposes. Despite these differences, they share a common achievement: they have acquired a sophisticated understanding of language patterns and semantics, setting new performance standards in natural language processing tasks. Other types of foundation models include vision-based models (e.g., the vision transformer, ViT (Dosovitskiy et al., 2020), and the segment anything model, SAM (Kirillov et al., 2023)) and vision-language multimodal foundation models (e.g., Flamingo with 80 billion parameters (Alayrac et al., 2022) and GPT-4 with over 1 trillion parameters (Achiam et al., 2023)). These pre-trained foundation models have been applied directly or transferred to a wide range of cross-domain tasks after fine-tuning or few-shot/zero-shot learning, e.g., education (Kasneci et al., 2023), healthcare (Yang et al., 2022), and transportation (Zheng et al., 2023). These foundation models have been trained on large-scale datasets that also contain geographical knowledge, such as descriptions of locations and places in textual documents as well as spatial elements in maps, geo-referenced photos, and satellite imagery.
Recently, researchers and institutions have begun early explorations of integrating foundation models into GeoAI research and education. For example, Mai et al. (2024) found that task-agnostic LLMs have the capability to surpass fully supervised deep learning models designed for specific tasks in understanding geospatial semantics, including toponym recognition, health data time-series forecasting, and urban function and scene classification. Hu et al. (2023) fused a few geo-knowledge examples into GPT models to improve the extraction of location descriptions from disaster-related social media messages. Manvi et al. (2023) found that geospatial knowledge can be effectively extracted from LLMs with auxiliary map data from OpenStreetMap. Additionally, spatial-context-aware prompts with pre-trained visual-language models can improve the accuracy of urban land use classification and urban function inference (Huang et al., 2024; Wu et al., 2023b). In GIS, evaluations have been conducted to assess the qualitative spatial reasoning capabilities of LLMs in identifying and reasoning about spatial relations using symbolic representations of spatial objects, such as RCC-8 (Cohn, 2023; Cohn and Blackwell, 2024a) and cardinal directions (Cohn and Blackwell, 2024b). While LLMs perceive spatial structure through sequences of textual input (Yamada et al., 2023) and leverage commonsense reasoning during their inference process (Cohn and Hernandez-Orallo, 2023), they also demonstrate human-like misconceptions and distortions about space (Fulman et al., 2024). Several studies (Fernandez and Dube, 2023; Mai et al., 2022a; Tucker, 2024) have proposed integrating vector data as a backbone for spatial reasoning.
GPT-4 has shown the capability to generate coordinates for outlines of countries, rivers, lakes, and continents that approximate their actual geographic locations (Das, 2023). In Ji and Gao (2023), LLM-generated embeddings were shown to preserve geometry types and some coordinate information in the WKT representation of geometries. However, performing qualitative spatial reasoning and executing spatial tasks from implicit textual descriptions involving coordinates remains a significant challenge (Majic et al., 2024). In addition, geospatial analysis workflows and operations can be automated when combining LLMs with spatial analysis tools (Li and Ning, 2023; Zhang et al., 2024). ChatGPT can even achieve a promising grade when taking an introductory GIS exam (Mooney et al., 2023). In the field of cartography, Tao and Xu (2023) explored the use of ChatGPT-4 for creating thematic maps and mental maps with appropriate prompts. However, Zhang et al. (2023) pointed out ethical concerns about AI-generated maps’ inaccuracies, misleading information, unanticipated features, and reproducibility. In August 2023, NASA and IBM released their GeoAI foundation model, Prithvi, which was trained on NASA’s Earth Observation remote sensing imagery (i.e., the harmonized Landsat and Sentinel-2 satellite dataset) (Jakubik et al., 2023) and has been found to have good performance and transferability on flood inundation mapping (Li et al., 2023). Alongside such remarkable achievements, there are concerns that need to be addressed together with the development and advancement of foundation models for GeoAI and the geosciences (i.e., Geo-Foundation Models), such as geographical bias, diversity, spatial heterogeneity, limited human annotations, sustainability, and privacy and security risks (Hu et al., 2024; Janowicz, 2023; Rao et al., 2023; Xie et al., 2023).
3. Methodology 3.1.
Preliminaries and Workflow
This research focuses on assessing the ability of LLMs to represent textual descriptions of geometries and understand topological spatial relations between geometric objects. The overall framework of this research is shown in Figure 2. Given a study area, we first retrieve spatial objects from both a spatial database and textual descriptions about places from a Web document knowledge database (e.g., DBpedia/Wikipedia). When the documents contain vernacular descriptions of topological relations between two places, formalized DE-9IM topological spatial relations are extracted from the spatial footprints (geometries) in the format of triplets as ground truth. The obtained geometric, attributive, and relational information is used as input for downstream tasks (e.g., qualifying topological relations, processing spatial queries, and converting vernacular relation descriptions), where task-specific prompts are designed accordingly. The task output from the LLMs is then compared to the ground truth topological relation triplets to evaluate their ability to encode and reason about geometries and topological spatial relations. The following subsections provide details on each evaluation task and the corresponding workflow. The definitions and notations used in this paper are listed in Table 1.
3.2. Determining topological spatial relations
In the original work on DE-9IM (Clementini et al., 1993), the five defined topological predicates {disjoint, touches (meets), crosses, overlaps, within} were considered mutually exclusive. However, this statement no longer holds with the introduction of “contains” and “equals” to the set by the OGC standard. Therefore, to ensure the
uniqueness of the topological spatial relations between two objects, we interpret “within” as “within (but not equals)” and “contains” as “contains (not equals)” in this work. Accordingly, we modify the decision tree in Clementini et al. (1993) to reason about the topological relations between two spatial objects, as illustrated in Figure 3. Based on the decision process, the topological spatial relations do not apply to every combination of geometry types. The definitions and possible geometry type combinations of the seven predicates used in this research are listed in Table 2.

Figure 2.: Overview of the workflow in this research.

Table 1.: Notations
A/B: The objectID of spatial objects A or B.
gA: The geometry of A that can be processed in GIS tools.
GeomType(A): The geometry type of A (e.g., Point, LineString, and Polygon when gA is a simple feature).
g°A: The interior of gA.
dim(g): The dimension of a geometry g, where dim(g) = − if g = ∅; 0 if g contains at least one Point without LineStrings or Polygons; 1 if g contains at least one LineString without Polygons; 2 if g contains at least one Polygon.
WKT(A): The WKT format of gA.
Enc(A): The location encoding of gA using an LLM model to encode WKT(A).
R: The set of predicates representing the topological spatial relations in this research, i.e., {equals, disjoint, crosses, touches, contains, within, overlaps}, as defined by OGC and implemented in GeoPandas.
rel: A predicate that can be used to represent the topological spatial relation, rel ∈ R.
Rel(A, B): The topological spatial relation between the subject A and the object B.
[Enc(A); Enc(B)]: The concatenation of the embeddings of A and B.
D(rel1, rel2): The topological distance between two relations rel1 and rel2 on the conceptual neighborhood graph (Egenhofer and Al-Taha, 1992).
SA(rel, B): The relevancy score of a retrieved subject A given the reference object B and the desired topological spatial relation rel.
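The qualification of a relation from a DE-9IM matrix can also be expressed as direct matching of the Boolean codes listed in Table 2. The sketch below is our own stdlib-only illustration (helper names are ours), assuming the matrix is the 9-character string produced by GIS relate operations, with characters in {'F', '0', '1', '2'} and code semantics T: non-empty, F: empty, *: free value:

```python
# Boolean codes per predicate, taken verbatim from Table 2.
CODES = {
    "equals":   ["T*F**FFF*"],
    "within":   ["T*F**F***"],
    "contains": ["T*****FF*"],
    "overlaps": ["T*T***T**"],
    "touches":  ["FT*******", "F***T****"],
    "crosses":  ["T*T******"],
    "disjoint": ["FF*FF****"],
}

def matches(matrix: str, code: str) -> bool:
    """Check a 9-char DE-9IM matrix against a Boolean code with wildcards."""
    for m, c in zip(matrix, code):
        if c == "*":
            continue                      # free value: always matches
        if c == "T" and m == "F":
            return False                  # required non-empty intersection is empty
        if c == "F" and m != "F":
            return False                  # required empty intersection is non-empty
    return True

def qualify(matrix: str) -> list[str]:
    """All predicates whose Boolean code matches the given matrix."""
    return [p for p, codes in CODES.items()
            if any(matches(matrix, c) for c in codes)]

# A Point inside a Polygon yields the DE-9IM matrix "0FFFFF212".
print(qualify("0FFFFF212"))  # → ['within']
```

Note that a pair of equal polygons (matrix "2FFF1FFF2") also satisfies the within and contains codes, which is precisely why this work interprets “within” and “contains” as excluding “equals” and resolves such overlaps with the modified decision tree in Figure 3.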
Several visual examples of the topological spatial relations between two geometries can be seen in Figure 10.

Figure 3.: The decision tree for the topological spatial relations.

Table 2.: The named topological spatial predicates with the 9-intersection Boolean code (T: true; F: false; *: free value) and corresponding applicable geometry type combinations of a predicate.
equals (T*F**FFF*): Point/Point, LineString/LineString, Polygon/Polygon
within (T*F**F***): Point/LineString, Point/Polygon, LineString/LineString, LineString/Polygon, Polygon/Polygon
contains (T*****FF*): LineString/Point, LineString/LineString, Polygon/Point, Polygon/LineString, Polygon/Polygon
overlaps (T*T***T**): LineString/LineString, Polygon/Polygon
touches (FT******* or F***T****): Point/LineString, Point/Polygon, LineString/Point, LineString/LineString, LineString/Polygon, Polygon/Point, Polygon/LineString, Polygon/Polygon
crosses (T*T******): LineString/LineString, LineString/Polygon, Polygon/LineString
disjoint (FF*FF****): applicable to ALL geometry type combinations

3.3. Representing geospatial data as text
An embedding is a multi-dimensional numeric vector representation of objects to capture the complex patterns and relationships in the data. While researchers have explored different approaches to embed geometries using spatially explicit models (Mai et al., 2022b; Yan et al., 2017; Zhu et al., 2022), this study presents a novel perspective by hypothesizing that LLMs can effectively encode the WKT format of geospatial vector data (points, polylines, and polygons) and preserve crucial geometric information. We adopt sentence embedding models (Logeswaran and Lee, 2018; Neelakantan et al., 2022; Reimers and Gurevych, 2019) to generate neural embeddings of
the input geometry WKT strings, which allows for the comparison and retrieval of spatial information through semantic search (Hu et al., 2015; Muennighoff, 2022).
3.4. Evaluation Tasks
3.4.1. Topological spatial relation qualification
In Wolter and Wallgrün (2012), spatial relation qualification is defined as the process of inferring qualitative spatial relations from quantitative data. The first task aims to leverage LLMs to classify the topological spatial relationship between subject entity A and object entity B into one of seven predefined topological predicates (see Section 3.2), combined with their geometry types. The input and output of Task 1 are described as follows:
Input: The input for this task is the WKT representations of geometries A and B, denoted as WKT(A) and WKT(B). Example inputs:
• WKT(A): POINT (-89.3551 43.123)
• WKT(B): POLYGON ((-89.3552 43.124, -89.355 43.124, -89.355 43.122, -89.3552 43.122, -89.3552 43.124))
Output: The output is a tuple that describes the topological spatial relationship between the two geometries, in the format of (GeomType(A), predicate, GeomType(B)). Given the example inputs, the expected output of a correct classification would be:
• (Point, within, Polygon)
Use Case: Task 1 is relevant to linking geometries that occur in the same spatial context. For example, suppose one document already provides location, geometry, and attribute information on housing resources and public transportation facilities. In that case, the LLM may directly use the geographic information and other context to suggest affordable housing accessible by public transportation.
Figure 4.: The workflow for the topological spatial relation qualification task.
The workflow of Task 1 is shown in Figure 4. Given an input triplet that describes the topological spatial relation between subject A and object B, i.e., (subject, predicate, object), we first retrieve the WKT strings and geometry types of A and B.
We then adopt two approaches (embedding-based and prompt-based) to perform the task, utilizing an appropriate LLM to function as either a text encoder or a reasoner. For encoding, a pre-trained sentence embedding model generates the embeddings of the geometries of A and B. The embeddings are concatenated as the input for a random forest classifier (Breiman, 2001). For reasoning, a more powerful generative model, such as GPT-4 or DeepSeek-R1, is employed to perform the task defined in the prompt. Four prompt engineering techniques are adopted to potentially guide the LLMs towards producing a more valid and accurate output of the topological spatial relation: standard zero-shot learning, standard few-shot learning (Radford et al., 2019), few-shot chain-of-thought (CoT) prompting (Wei et al., 2022), and zero-shot CoT prompting (Kojima et al., 2022). In few-shot CoT, we follow the decision tree in Figure 3 to generate the intermediate steps for determining the topological spatial relations as examples. While the identification of topological spatial relations might appear straightforward to the human brain, it involves multi-step reasoning. The DE-9IM framework (Clementini et al., 1993) decomposes the problem into intersections of the boundaries, interiors, and exteriors of two geographic entities, with dimensional requirements that map to topological predicates intuitive to users. We hypothesized that
few-shot prompting and explicit reasoning steps, guided by CoT, could improve the model’s performance on this qualification task. The example inputs and outputs of the topological spatial relation qualification task using the above-mentioned prompt engineering techniques are illustrated in Figure 5.
Figure 5.: Topological spatial relation qualification example inputs and outputs with different prompt techniques.
The metrics for evaluating the topological spatial relation qualification task are as follows.
(1) Validity
a. Valid format of the output: LLMs should follow the instructions to use the given output format (Geometry Type A, Predicate, Geometry Type B).
b. Valid geometry types: LLMs should preserve Geometry Type A and Geometry Type B from the given WKT format of the geometries.
c. Valid combinations of geometry types for the topological predicates, as shown in Table 2.
(2) Accuracy
For valid outputs, we compute the accuracy as how often the output topological spatial predicate matches the ground truth.
(3) Topological distance in the conceptual neighborhood graph
In this work, we use the shortest path distance between two topological predicates in the conceptual neighborhood graph (see Figure 1), where the distance of each edge equals 1. Since Figure 1 was originally proposed for region-to-region (Polygon/Polygon) relations in 9-IM, we mapped its topological predicates to the seven DE-9IM predicates that we use. For other geometry type combinations, we refer to Mark and Egenhofer (1994) and Reis et al. (2008) to extract the conceptual neighborhood graphs. With the topological distance measurement, we can further analyze which pairs of predicates easily confuse LLMs and whether such confusion is directed, by comparing the false-negative and false-positive results.
3.4.2.
Spatial query processing
In Sack and Urrutia (1999), a generic spatial query is defined as the retrieval of subjects from a set of candidate geometric entities that are in a specific relation rel with the query object B on the basis of geometric information only. Our second task aims to evaluate whether LLMs can jointly encode a topological relation and one geometry to capture the feasible geometries that meet the query requirement. The input and output of Task 2 are as follows:
Input: The input for this task is the WKT representation of geometry B, denoted as WKT(B), and a given predicate of topological spatial relations rel. Example input:
• Predicate: within
• WKT(B): POLYGON ((-89.3552 43.124, -89.355 43.124, -89.355 43.122, -89.3552 43.122, -89.3552 43.124))
Output: The output is the identifier of a subject entity A whose topological spatial relationship with B is described by the predicate.
Use Case: Task 2 is valuable for retrieving textual reports that involve locations, spatial layouts, and geospatial semantics. Such analysis relies on accurate queries using spatial predicates. For instance, it would be beneficial to retrieve the site report of a nearby competitor when considering opening a business in the same neighborhood.
Figure 6.: The workflow for the spatial query processing task.
The evaluation workflow of Task 2 is shown in Figure 6. Given a query specifying the topological spatial relation rel with the query object B (WKT(B)), we first retrieve the subjects
from the study area spatial database as ground truth. We format the query as the input to an LLM using two approaches. First, the query can be directly formulated as a sentence, such as “Retrieve a geometry within POLYGON ((-89.3552 43.124, -89.355 43.124, -89.355 43.122, -89.3552 43.122, -89.3552 43.124)).” Alternatively, synthetic geometries can be created using a generative model to expand the query, connecting the query with the search space. The (expanded) query text is input into the sentence embedding model to generate the embeddings. The geometries in WKT format for the spatial entities are also processed by the same embedding model to generate the embeddings (Enc(g1), Enc(g2), ..., Enc(gn)). The most relevant subject geometries are retrieved based on the cosine similarity of their geometry embeddings and the query embeddings. We perform the evaluation as a link prediction task in the “filtered” setting (Bordes et al., 2013), which excludes other subjects related to B by the topological predicate rel from the database and concentrates on the retrieval of the subject in the triplet. This approach addresses the biases introduced by the significant differences in the number of spatially related subjects across predicates and objects. Finally, the retrieved subjects are evaluated by their actual topological spatial relation to the reference object.
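The retrieval-and-scoring step can be sketched with a minimal, stdlib-only illustration (not the paper's implementation; the toy vectors stand in for the sentence-embedding outputs, and the "filtered" exclusion of other valid subjects is omitted for brevity):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank(query_emb, candidates):
    """Candidate ids sorted by descending cosine similarity to the query."""
    return sorted(candidates,
                  key=lambda cid: cosine(query_emb, candidates[cid]),
                  reverse=True)

def mrr_and_hits(ranked_ids, gold_id, k=3):
    """Reciprocal rank of the true subject, and whether it is in the top k."""
    r = ranked_ids.index(gold_id) + 1  # 1-based rank
    return 1.0 / r, int(r <= k)

# Toy candidate geometry embeddings and a toy query embedding.
candidates = {"g1": [1.0, 0.0], "g2": [0.9, 0.1], "g3": [0.0, 1.0]}
ranked = rank([1.0, 0.05], candidates)
print(ranked, mrr_and_hits(ranked, "g2", k=1))
```

Averaging the reciprocal ranks (and the top-k hits) over all test queries yields the MRR and Hits@K metrics used for this task.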
In the following, we introduce in detail how to format the direct query and the expanded query with LLM-generated geometries:
(1) Direct Query
Given the WKT format of the geometry of a known reference object (e.g., LINESTRING (-89.4534 43.035, -89.454 43.0351)) and a designated topological spatial relation (e.g., “crosses”), the query formulation is as follows: “Retrieve a geometry that crosses the LINESTRING (-89.4534 43.035, -89.454 43.0351).” If the search focuses on a specific geometry type, the query can be articulated as “Retrieve a LINESTRING geometry that crosses the LINESTRING (-89.4534 43.035, -89.454 43.0351).”
(2) Expanded query with LLM-generated geometries
In Carpineto and Romano (2012) and Hu et al. (2015), (geospatial) query expansion is used to augment the user’s original query with new features (e.g., geographic or thematic characteristics) that share a similar meaning with the expected output of semantic search. The method can address the lack of semantic similarity between the query and the desired geometry. We extend the Query2Doc model (Wang et al., 2023) to spatial query expansion, where we leverage an LLM to generate a synthetic geometry that could plausibly be the response to the query. The prompt template for the generation of geometric objects or subjects is listed in Figure 7. We adopt the following prompting approaches for geometry generation.
• Zero-shot: LLMs generate geometries directly from the given spatial query.
• Zero-shot + Self-check: LLMs are asked to verify the spatial relations before generating the output.
• Few-shot: Provide a few pairs of example queries and corresponding subjects while maintaining the spatial relations and object geometry type.
• Few-shot + Negative examples: Apart from the plausible examples, we also include negative examples that are not correct responses for the given query. The examples are formatted as “Retrieve a Geometry Type which ... Good Response:...
Bad Response: ...”
We further incorporate the LLM-generated geometries into the spatial queries to assess the usefulness of the expanded queries. The evaluation includes two parts: first, the LLMs’ ability to generate valid synthetic geometries as a basis for the expanded queries; second, query processing performance through semantic search using both direct queries and expanded queries.
(1) Validity of the LLM-generated geometries
a. Valid WKT format of geometries that can be successfully parsed by the GIS tool for creating geometry instances.
b. Correct topological spatial relation rel with the query object B.
(2) Mean Reciprocal Rank (MRR) and Hits@K of the retrieval performance
We employ two commonly used metrics in geographic information retrieval, Mean Reciprocal Rank (MRR) (Yan et al., 2017) and Hits@K, in the “filtered” setting (Bordes et al., 2013). A desirable model is characterized by higher MRR and Hits@K values.
Figure 7.: Prompt template used for geometry generation in the spatial query processing task.
3.4.3. Conversion of vernacular relation descriptions
In Chen et al. (2018), a vernacular description of spatial relations between places is an alternative to formal spatial relations in metric space, occurring in everyday communication in the flexible format of a preposition, verb, phrase, or even an implicit text description. The third task aims to evaluate how well LLMs can convert a vernacular description (i.e., everyday language) of a topological relationship between two geographic entities into one of the seven predefined topological predicates based on the given context. This task draws on an LLM’s commonsense model of the world and naive geographical knowledge about space, together with domain-specific knowledge of the formal relation calculi, to bridge the gap between vernacular (narrative) descriptions and formalized topological predicates.
For example, ChatGPT is able to provide the rationale behind the statement “When an island is in the middle of a lake, the island touches the lake if the lake is considered as a separate region (not fully containing the island)” by identifying, using commonsense knowledge reasoning (rather than precise geometries), that the lake in this scenario is a double-border object. It then maps this understanding to the “touches” topological relation, applying expertise in the GIS domain. The input and output of Task 3 are as follows:
Input: The input for this task includes a sentence that describes the topological relationship between two places in everyday language, along with contextual information about the two places. Example input:
• Sentence: Place A is home to Place B
• Context: Place A is a city. Place B is a university
Output: The output rephrases the sentence using the formalized topological predicates.
• Answer: Place A contains Place B
Use Case: Parsing vernacular descriptions of spatial relations between places into formal ones can better support users interacting in natural language and the use of spatial analysis tools that rely on formal topological relations. Examples include interpreting vague terms in travel reports to determine whether cross-border human behavior exists, and interpreting territorial changes and the alignment of contemporary boundaries in a historical context. We adopt the workflow in Figure 8 to evaluate the capability of LLMs in Task 3. The
workflow begins with collecting geographic entities from a Web document knowledge database (DBpedia, structured knowledge based on Wikipedia), where named entity recognition is used to extract place names and vernacular spatial relation descriptions. These place names, such as “UW-Madison” and “Madison, Wisconsin”, are then used to retrieve relevant geographic data and corresponding attributes from a spatial database to provide context such as geometry type and place type, with their topological spatial relations identified through GIS tools. The collected spatial relations between two places are formatted as “A {vernacular topological relation} B” (e.g., A is home to B) for evaluation, where A and B are symbolic placeholders representing two places. The specific locations in the geometries are not disclosed, allowing for a generalized discussion of topological spatial relations without actual geographic context. The context is provided at the end of the text input to support in-context reasoning. The contexts evaluated in our experiments are listed in Table 3. The prompts are crafted with the template shown in Figure 9 and fed into an LLM (e.g., GPT-4) to convert vernacular descriptions into topological spatial relations. We run the model multiple times to identify the possible converted topological predicates and the preference of an LLM. The output topological predicates are then compared with the ground truth predicates calculated by the GIS tool for evaluation. We also compare the performance when no contextual information is provided. This workflow allows us to evaluate the effectiveness of LLMs in analyzing informal topological relations between two entities and to assess the impact of contextual information on performance.
Figure 8.: The workflow for the vernacular relations conversion task.
Figure 9.: Prompt template used in the vernacular relation conversion task.
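Since the actual template in Figure 9 is not reproduced in the text, the following is only a hypothetical sketch of how such a prompt might be assembled from the task input (sentence plus optional context); the wording and function name are illustrative, not the paper's:

```python
# The predicate set defined by OGC and used throughout this work.
PREDICATES = "equals, disjoint, crosses, touches, contains, within, overlaps"

def build_conversion_prompt(sentence: str, context: str = "") -> str:
    """Assemble a hypothetical Task 3 prompt; the template wording is ours."""
    prompt = (
        f"Rephrase the sentence using exactly one of the topological "
        f"predicates: {PREDICATES}.\n"
        f"Sentence: {sentence}\n"
    )
    if context:  # context is appended at the end to support in-context reasoning
        prompt += f"Context: {context}\n"
    prompt += "Answer:"
    return prompt

p = build_conversion_prompt("Place A is home to Place B",
                            "Place A is a city. Place B is a university.")
print(p)
```

Running the model multiple times on such a prompt, with and without the context line, yields the output distributions scored by the frequency, accuracy, and entropy metrics below.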
14 Table 3.: The textual description from DBPedia and topological predicate conversion examples. Description Context Predicate Example bordered by No context touchesGlendora is bordered by Azusa. →A is bordered by B. along Geometry type crossesLuling is along the San Marcos River. →A is along B. A is Polygon, and B is LineString. located on Place type crossesLocated on Interstate 10, Weimar is a small community. →A is located on B. A is city, B is highway. on the shore of Place name overlapsRacine is located on the shore of Lake Michigan. →A is on the shore of B. A is Racine in Wisconsin, B is lake Michigan. The evaluation metrics for the vernacular relation conversion task are as follows. (1) Frequency: The count of correctly returned predicates across all experiments. (2) Accuracy: The ratio of the frequency of correctly returned predicates to the total number of generated outputs for each conversion pair. (3) Entropy: The information entropy (Shannon, 1948) of the returned predicates assesses the level of randomness in converting vernacular descriptions into topological predicates. Smaller entropy values indicate a higher likelihood of certain predicates being preferred over others. The metric is computed as: H=−X rel∈Rprellog(prel), where prelrepresents the probability of a specific topological predicate relappearing in the outputs for the given context-conditioned conversion pair. 4. Data and Experiments 4.1. Data processing 4.1.1. Extracting topological spatial relations from spatial
https://arxiv.org/abs/2505.17136v1
database

We construct real-world multi-sourced geospatial datasets for our study. The study area for Task 1 and Task 2 is the city of Madison, Wisconsin, United States. The following datasets are collected:
• OpenStreetMap road network data (including links and intersections) using OSMnx.[1]
• Points of interest (POIs) categorized by SLIPO.[2]
• Land parcels from the Wisconsin Statewide Parcel Map Initiative.[3]
• Census block groups from the U.S. Census Bureau.[4]

Our evaluation tasks focus on spatial objects with Point, LineString, and Polygon geometry types, assessing their topological spatial relations. All computations are performed using the GeoPandas package in Python. Task 1 and Task 2 share the same dataset of triplets. For each combination of {geometry type A, predicate, geometry type B}, we obtain 200 triplets. Among these, 160 are allocated for training the random forest classification model, while the remaining 40 triplets are reserved for evaluation. Additionally, we set aside 25 extra triplets as candidate examples to facilitate few-shot learning.

[1] http://osmnx.readthedocs.io/
[2] http://slipo.eu/
[3] https://www.sco.wisc.edu/parcels/data/
[4] https://www2.census.gov/geo/tiger/TIGER2020PL/LAYER/BG/

Due to the imbalanced distribution of topological spatial relations within the real-world dataset, we employ multiple strategies for sampling a sufficient number of triplets for fair comparisons: (1) For topological spatial relations including “within”, “contains”, “overlaps”, “touches” and “crosses”, we select a subset of spatial objects and conduct spatial joins to obtain the required triplets. (2) Regarding the “equals” relationship, we manually created equivalent spatial entities to preserve the topological spatial relations while making direct identification from geometry coordinate matching challenging.
a. For Point, include only points with identical coordinates.
b. For LineString, interpolate an additional 10% of points along the lines, ensuring that the added points did not alter the original shape.
c. For Polygon, loop the origin point and interpolate additional points along the boundaries.
(3) We restrict the occurrence of “disjoint” to cases where the subject geometry does not touch or overlap the objects but lies within a small buffer of them (i.e., nearby entities), to avoid easy identification when two spatial entities are far apart, thus enhancing the evaluation of the differentiation of topological predicates. In Task 2, we further exclude the “disjoint” relation since most real-world geographic entities are disjoint from each other, yielding 40 × 26 = 1040 triplets for retrieving the subject or object geographic entities.

4.1.2. Topological spatial relations from DBpedia/Wikipedia

For Task 3, we gathered a total of 1078 unique triplets based on the recognized geographic entities from DBpedia/Wikipedia documents, which we combine with everyday descriptions of topological spatial relationships. We then utilize this extracted data to evaluate GPT-4’s capabilities in Task 3 as described in Section 3.4.3. Specifically, we downloaded and refined place descriptions within the states of Wisconsin, Texas, and California from the knowledge base DBpedia[5], which is the linked-data form of Wikipedia and has previously been used in place name disambiguation tasks (Hu et al., 2014). The data extraction and processing steps are structured as follows: (1) Named entity recognition: From each administrative region’s abstract (“dbo:abstract”), we extract all place names that can be found in OpenStreetMap, forming the
basis for subsequent topological spatial relation identification. (2) Textual spatial relation extraction: For each pair of place names within a DBpedia abstract, we use GPT-4 to extract the topological spatial relation terms found between the entities in the text. When hierarchical place relationships are described, our approach only captures direct relations between a subject and each individual object, omitting implicit transitive relations among the objects themselves. For instance, from the sentence “a city A in a County B, State C,” we extract (A, in, B) but skip (A, in, C) and (B, in, C). (3) Manual verification: We manually review all the extracted spatial relation descriptions to ensure that they indicate topological relations and that the use of the two place names as subjects or objects in the sentence is semantically correct. (4) Description unification: The text descriptions on DBpedia are standardized for consistency. For example, phrases like “is home to”, “home to” or “home of” are unified as “is home to”. (5) Context-conditioned conversion pair extraction: We identify how vernacular descriptions depend on the following contexts to convert them to formal topological predicates.
a. Invariant to context: If a vernacular description consistently corresponds to the same topological predicate, we create a context-conditioned conversion pair (description, predicate, N/A). For descriptions that can be converted to multiple formal topological predicates, we associate them with specific contexts for one-to-one conversion.
b. Place types as context: If grouping by description, place type A, and place type B results in a unique topological predicate, we create the pair (description, predicate, place type A/place type B). Place types are extracted from OpenStreetMap data tags.
c. Geometry types as context: If grouping by description, geometry type A, and geometry type B results in a unique predicate, we create the pair (description, predicate, geometry type A/geometry type B).
d. Place names as context: Each pair of places can have a unique topological relationship. We create the pair (description, predicate, place name A/place name B), assuming the LLMs have some knowledge about place names.
It is possible that more than one context can assist with one-to-one mapping from a vernacular description to a formal topological predicate. We may retain multiple contexts to compare their effectiveness. For example, the conversion between “is bordered by” and “touches” can be identified using place types (is bordered by, town/city, touches), geometry types (is bordered by, Polygon/MultiPolygon, touches), and place names (is bordered by, Aliso Viejo, California/Laguna Beach, California, touches). (6) Data filtering: Only frequently observed context-conditioned conversion pairs are retained for evaluation.
a. In the cases of invariant to context, place types as context, and geometry types as context, we retain pairs that occur at least 5 times.
b. In the case of place names as context, we first filter (description, predicate) pairs that occur at least 5 times, and then sample 5 pairs for each combination.

[5] https://www.dbpedia.org/

Among the 1078 records extracted from DBpedia abstracts of places in the states of Wisconsin, Texas, and California, 212 explicitly refer to directional and distance spatial relations and were thus removed, as this research focuses on topological
relations. The analytical results of Task 3 using the remaining records are presented in Section 5.3.

4.2. Experiment Models

In this research, we perform the evaluation tasks with the following models.

4.2.1. Embedding models We encode WKT geometries into embeddings and process spatial queries using “text-embedding-ada-002” and “text-embedding-3-large” provided by OpenAI[6], with output embedding dimensions of 1536 and 3072 respectively.

4.2.2. Reasoning models In our evaluation tasks, we employ GPT-3.5-turbo, GPT-4, and DeepSeek-R1-14B as the LLM-based reasoning models. While performance varies by task, these models have demonstrated potential in commonsense reasoning and in-context learning on certain benchmarks. GPT-3.5-turbo and GPT-4 are primarily optimized for few-shot learning, whereas DeepSeek-R1-14B emphasizes zero-shot capabilities and may experience a decline in performance when few-shot prompting is applied (Guo et al., 2025).

[6] https://openai.com/blog/new-embedding-models-and-api-updates

4.2.3. Model settings (1) Random Forest classifier: The number of estimators (trees) in the Task 1 classifier is set to 100. (2) Temperature settings for GPT-3.5-turbo and GPT-4: For the topological relation qualification task, we set the temperature to 0 to encourage more deterministic outputs. However, achieving full reproducibility remains challenging even with a temperature of 0, as discussed by Blackwell et al. (2024). Conversely, generating synthetic geometries to support semantic search in Task 2 employs a higher temperature of 0.7 for greater creativity. (3) Temperature settings for DeepSeek-R1-14B: The temperature for the topological relation qualification task is set to 0.6 to better exploit the reasoning ability of DeepSeek, given its emphasis on deeper, more deliberate thinking.

5. Results

5.1. Topological spatial relation qualification

5.1.1. Validity of the output Before diving into the effectiveness of using LLMs to qualify spatial relationships, a validity check is necessary because of the inherent nondeterministic nature of generative AI models. Furthermore, beyond validating the output as a valid instance of the format {Geometry type A, predicate, Geometry type B}, it is essential to ground the qualitative spatial reasoning in the matched geometry types and topological relations. The validity results of the output are shown in Table 4. The random forest classifier using the LLM-generated embeddings consistently produced valid output on the test dataset. This highlights that the sentence embedding models can effectively preserve geometry types in the WKT format of geometries, aligning with previous research which encoded WKT by aggregating the token embeddings from GPT-2 and BERT (Ji and Gao, 2023). While GPT-4 and GPT-3.5-turbo largely adhere to the instructions in the desired format, even with CoT generation, it is more challenging for DeepSeek-R1-14B to strictly output the desired format (though it still achieved over 0.9 validity accuracy). When tested with few-shot prompting, DeepSeek-R1-14B largely ignored the provided examples and adhered to its typical reasoning patterns; as a result, we did not include these results in our evaluation. The highest validity of the GPT-4 model suggests that a language model characterized by a larger number of parameters, broader training data, and stronger alignment with human instructions may also possess a better understanding of the definitions of the DE-9IM topological predicates.

5.1.2. Classification metrics Table 5 presents the results of
the topological predicate classification task. Both the embedding-based random forest and geospatial question answering with GPT models can achieve an accuracy of over 0.6. This suggests that identifying topological spatial relationships from the WKT format of geometries with LLMs is promising but remains challenging. Failure to recover the topological spatial relations from embeddings suggests a potential information loss through text tokenization. Incorrectly classified topological relations often cluster within the conceptual neighborhoods or resemble each other (with a small distance), while confusion may also arise from the diverse semantics of topological spatial predicates. For GPT-3.5-turbo and GPT-4, among the four types of prompts (introduced in Section 3.4.1), few-shot learning achieved the best performance, with pairs of geometries and their topological relationships for the LLMs to learn in context. GPT-4 with few-shot prompting achieved 0.66 accuracy. The findings highlight the importance of prompt engineering in the use of LLMs and the critical role of understanding spatial contexts in improving geospatial query processing accuracy and reliability.

Table 4.: The validity accuracy of the outputs (N/A: not available).

Approach | LLM | Prompt | Format | Geometry type | Predicate
Random Forest | text-embedding-ada-002 | N/A | 1 | 1 | 1
Random Forest | text-embedding-3-large | N/A | 1 | 1 | 1
Question answering | GPT-3.5-turbo | Zero-shot | 0.959 | 1 | 0.911
Question answering | GPT-3.5-turbo | Zero-shot-dim | 0.999 | 1 | 0.927
Question answering | GPT-3.5-turbo | Few-shot | 1 | 1 | 0.901
Question answering | GPT-3.5-turbo | Zero-shot-CoT | 0.944 | 1 | 0.944
Question answering | GPT-3.5-turbo | Few-shot-CoT | 0.998 | 1 | 0.894
Question answering | GPT-4 | Zero-shot | 1 | 0.996 | 0.997
Question answering | GPT-4 | Zero-shot-dim | 1 | 0.999 | 0.999
Question answering | GPT-4 | Few-shot | 1 | 0.999 | 0.992
Question answering | GPT-4 | Zero-shot-CoT | 0.984 | 0.990 | 0.968
Question answering | GPT-4 | Few-shot-CoT | 1 | 0.999 | 0.999
Question answering | DeepSeek-R1-14B | Zero-shot | 0.936 | 0.996 | 0.913
Question answering | DeepSeek-R1-14B | Zero-shot-dim | 0.919 | 0.998 | 0.913

Table 5.: The topological predicate classification metrics.

Approach | LLM | Prompt | Accuracy | Dist(Incorrect)
Random Forest | text-embedding-ada-002 | N/A | 0.633 | 1.449
Random Forest | text-embedding-3-large | N/A | 0.632 | 1.419
Question answering | GPT-3.5-turbo | Zero-shot | 0.423 | 1.331
Question answering | GPT-3.5-turbo | Zero-shot-dim | 0.408 | 1.360
Question answering | GPT-3.5-turbo | Few-shot | 0.479 | 1.595
Question answering | GPT-3.5-turbo | Zero-shot-CoT | 0.443 | 1.370
Question answering | GPT-3.5-turbo | Few-shot-CoT | 0.465 | 1.174
Question answering | GPT-4 | Zero-shot | 0.632 | 1.238
Question answering | GPT-4 | Zero-shot-dim | 0.635 | 1.212
Question answering | GPT-4 | Few-shot | 0.666 | 1.272
Question answering | GPT-4 | Zero-shot-CoT | 0.610 | 1.256
Question answering | GPT-4 | Few-shot-CoT | 0.627 | 1.225
Question answering | DeepSeek-R1-14B | Zero-shot | 0.534 | 1.257
Question answering | DeepSeek-R1-14B | Zero-shot-dim | 0.557 | 1.260

However, chain-of-thought (CoT) prompts, which have demonstrated improvements in many other tasks (Wei et al., 2022), did not yield the expected benefit in our spatial reasoning evaluation experiments. As mentioned by Yang et al. (2024), CoT reasoning can sometimes induce unreliable or counterproductive outputs in spatial reasoning tasks. Upon analyzing the generated rationale, we observed that when LLMs are prompted with “Let’s think step by step”, they attempt to check the topological spatial predicates one by one based on their respective definitions from the OGC standard. Few-shot-CoT prompts, on the other hand, were explicitly designed with examples grounded in the scientific definitions and logical decision processes proposed in Clementini et al. (1993), aiming to “teach” the models to reason about topological spatial relations through analysis of interiors, boundaries, and exteriors. Despite this structured approach, accuracy declined due to cascading errors in intermediate steps, such as failing to determine whether the interiors of two geometries intersect at the very beginning. With an explicit reasoning process, DeepSeek-R1-14B outperformed GPT-3.5-turbo (20B parameters).
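The predicate-by-predicate checking that CoT prompting attempts is what a GIS library does deterministically. The ground-truth labels described in Section 3 can be reproduced with a sketch like the following, using Shapely (the library underlying GeoPandas); the function name and the check order are our own:

```python
from shapely import wkt

# Classify the named DE-9IM predicate between two WKT geometries.
# "equals" is tested first because equal geometries also satisfy
# contains/within; the remaining named predicates are mutually exclusive.
def classify_predicate(wkt_a, wkt_b):
    a, b = wkt.loads(wkt_a), wkt.loads(wkt_b)
    for name in ("equals", "contains", "within", "crosses", "touches", "overlaps"):
        if getattr(a, name)(b):
            return name
    return "disjoint"

# Two line segments intersecting at an interior point
print(classify_predicate("LINESTRING (0 0, 2 2)", "LINESTRING (0 2, 2 0)"))  # crosses
```

Comparing such a deterministic label against an LLM's answer is how the accuracy figures in Table 5 are obtained.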
Analysis of its thought generation reveals that, rather than always iteratively checking candidate answers, the model often employed more intuitive reasoning strategies, such as mental mapping (e.g., “Let me plot them mentally”) and self-verification (e.g., “In WKT, a LINESTRING is just a sequence
of points connected by straight lines. If it starts and ends at the same point, it doesn’t automatically become a polygon”). Although these reasoning patterns may appear convincing when interpreting individual geometries, they often fall short when reasoning about spatial relations between two geometries. This is mainly due to an overreliance on superficial, linear interpretations of coordinate information, rather than a holistic understanding of topological spatial relationships across the plane.

5.2. Spatial query processing

Based on its superior performance in Task 1, the experiments in Task 2 only used GPT-4 as the geometry generator and text-embedding-3-large as the embedding model.

5.2.1. Direct query We first identified an effective query format for geospatial semantic search (Hu et al., 2015), which is the foundation for applying query expansion in understanding geospatial semantics. As shown in Table 6, specifying the subject geometry type achieved higher performance due to a narrowed mapping space within the same geometry type. In the following experiments, we assumed that the user query specifies the geometry type (e.g., retrieving a street from the spatial database implies LineString), and further investigated the factors that may impact the effectiveness of query expansion using LLMs.

Table 6.: Spatial query performance comparison results.

Target | Query | Format | MRR | Hits@5 | Hits@10 | Hits@20
Subject | Direct query | Abstract as “geometry” | 0.081 | 0.131 | 0.161 | 0.194
Subject | Direct query | Specify the subject geometry type | 0.152 | 0.212 | 0.26 | 0.29
Subject | Expanded query | Direct query + one LLM-generated geometry | 0.18 | 0.238 | 0.278 | 0.328
Subject | Expanded query | Direct query + three LLM-generated geometries | 0.169 | 0.232 | 0.28 | 0.32
Object | Direct query | Original predicate | 0.105 | 0.131 | 0.17 | 0.211
Object | Direct query | Reversed predicate | 0.152 | 0.219 | 0.256 | 0.297
Object | Expanded query | Original predicate + one LLM-generated geometry | 0.15 | 0.215 | 0.261 | 0.302
Object | Expanded query | Reversed predicate + one LLM-generated geometry | 0.179 | 0.248 | 0.294 | 0.333

5.2.2. Synthetic geometry generation An effective LLM-generated geometry is expected to maintain the same topological spatial relation with the given object as the subject entity while being close to the subject entity in the embedding space. Table 7 compares the validity of the LLM-generated geometries produced by different prompting approaches. We find that 1) GPT-4 effectively comprehends spatial queries and generates geometries in a valid WKT format, and 2) GPT-4 demonstrates a notable level of spatial reasoning regarding the reference object, even in the zero-shot setting, as indicated by the high relation-preserving accuracy (over 0.72) and the low topological distance in the conceptual neighborhood graph (Reis et al., 2008). Figure 10 presents examples of the geometries generated by GPT-4. In the following section, we check the usefulness of such synthetic geometries generated by zero-shot prompts in enriching spatial query processing.

Table 7.: Validity of LLM-generated geometries using different prompts.

Prompt | Valid WKT | Geometry Type | Predicate | Topological Distance
Zero-shot | 0.999 | 1 | 0.763 | 1.142
Zero-shot-Check | 0.998 | 1 | 0.755 | 1.075
Few-shot | 0.996 | 1 | 0.728 | 1.177
Few-shot-Negative | 0.997 | 1 | 0.754 | 1.212

Figure 10.: Synthetic geometries generated by GPT-4 for LineString/LineString relations.

5.2.3. Expanded query on subject retrieval As shown in Table 6, retrieving a subject based on the embeddings encoded from the expanded spatial queries remains challenging. However, including an LLM-generated geometry enhanced the probability of ranking the target subject higher among all candidates. Over 23%
of the subjects were ranked within the top 5 candidates. However, adding additional synthetic geometries did not appear to provide further improvements.

5.2.4. Performance comparison on object retrieval While the above experiment primarily focuses on retrieving the subject in a triplet, we proceed to evaluate the performance of object retrieval. For a given triplet, we tested queries formulated with either the original predicate describing the spatial relationship from the subject to the object (e.g., “Retrieve a Point which A contains”), or the reversed predicate referring to the subject (e.g., “Retrieve a Point which is within A”). The results of object retrieval are also summarized in Table 6. Among the object-retrieval query formats, the queries with the original predicate that maintained the subject-to-object directionality yielded worse performance. When we manually reversed the topological spatial relation and treated the object as the subject, the performance matched its counterpart for subject retrieval in Table 6, highlighting the importance of structuring spatial queries to align with everyday language patterns commonly used for spatial reference.

5.3. Vernacular topological relation conversion

In Task 3, we collected textual descriptions of topological spatial relations between two places and attempted to identify mapping patterns between these descriptions and the corresponding context-conditioned topological spatial relations. These mappings were then used as input to GPT-4 (given its superior performance in the previous tasks) to evaluate its ability to convert textual descriptions into formal topological spatial relations.

5.3.1. Conversion pairs invariant to context Table 8 lists the six descriptions that consistently map to the same topological relationship in our dataset. However, the results show varying levels of conversion accuracy from vernacular descriptions to preferred formal topological predicates.
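The frequency, accuracy, and entropy metrics defined earlier and reported in this subsection can be sketched as below; this is a minimal illustration assuming one predicate per model output, and the function name is ours:

```python
import math
from collections import Counter

# Metrics for one context-conditioned conversion pair, given the list of
# predicates an LLM returned across repeated runs.
def conversion_metrics(outputs, ground_truth):
    counts = Counter(outputs)
    frequency = counts[ground_truth]        # count of correctly returned predicates
    accuracy = frequency / len(outputs)     # correct / total generated outputs
    # H = -sum_{rel in R} p_rel * log(p_rel); lower entropy indicates a
    # stronger preference for particular predicates.
    entropy = sum(-c / len(outputs) * math.log(c / len(outputs))
                  for c in counts.values())
    return frequency, accuracy, entropy

# e.g., "share border with" -> touches in all 10 runs: accuracy 1.0, entropy 0.0
print(conversion_metrics(["touches"] * 10, "touches"))
```

A uniform split between two predicates would instead yield accuracy 0.5 and the maximum two-outcome entropy log 2.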
While the ground-truth topological relationships were reliably implied from “share border with” and “is the location of”, “is an enclave of” was interpreted as within instead of touches or disjoint. Even though GPT-4 could infer an overlaps relation from “has part of the population in” in all ten experiments, the model might be unsure about its answer and would provide multiple topological predicate alternatives. Despite the subtle difference between “midway” and “halfway”, the higher entropy of “halfway” indicates greater randomness in its conversion.

Table 8.: Result of topological relation conversion pairs invariant to context.

Description | Predicate | Frequency | Accuracy | Entropy
share border with | touches | 10 | 1.000 | 0.000
has part of the population in | overlaps | 10 | 0.588 | 0.435
is the location of | contains | 9 | 0.818 | 0.244
is midway between C and | disjoint | 6 | 0.600 | 0.560
is halfway between C and | disjoint | 6 | 0.500 | 0.817
is an enclave of | touches | 1 | 0.100 | 0.167

5.3.2. Conversion pairs conditioned on place types or geometry types The comparisons between scenarios with and without place-type/geometry-type context are illustrated in Table 9 and Table 10 respectively. Our initial hypothesis was that including contextual information in the prompt would reduce ambiguity, resulting in a higher frequency of correct predicate predictions, improved accuracy, and lower entropy. However, these improvements were highly instance-dependent and not consistently observed across all the conversion pairs evaluated in our experiments, indicating GPT-4’s limitation in considering all possible interpretations of a given vernacular description. This limitation was also evident in pairs with 0 accuracy, where the model consistently output the same incorrect answer. In other instances, GPT-4 still struggled to determine the appropriate topological predicates for certain vernacular descriptions.

Table 9.: Result of topological relation conversions using place types as context.

Description | Predicate | Spatial Context | Frequency | Accuracy | Accuracy without Context | Entropy | Entropy without Context
is home to | contains | city/amenity | 10 | 1 | 1 | 0 | 0
borders | touches | city/municipality | 10 | 1 | 1 | 0 | 0
is located in | within | town/county | 10 | 1 | 1 | 0 | 0
is located in | within | city/state | 10 | 1 | 1 | 0 | 0
is bordered by | touches | town/city | 10 | 1 | 1 | 0 | 0
is adjacent to | touches | city/municipality | 10 | 1 | 0.909 | 0 | 0.157
borders | touches | city/city | 10 | 1 | 1 | 0 | 0
is in | within | village/county | 10 | 0.909 | 0.909 | 0.157 | 0.157
is located in | within | village/county | 10 | 0.909 | 1 | 0.157 | 0
is partly in | overlaps | city/county | 10 | 0.833 | 1 | 0.232 | 0
is bounded by | touches | city/city | 7 | 0.7 | 0.333 | 0.314 | 0.327
connect C and | crosses | industrial/city | 8 | 0.4 | 0.474 | 0.52 | 0.355
extend into | overlaps | city/county | 5 | 0.357 | 0.421 | 0.561 | 0.491
is surrounded by | touches | city/city | 2 | 0.167 | 0 | 0.232 | 0
is between C and | touches | town/town | 0 | 0 | 0.3 | 0.211 | 0.773
is surrounded by | touches | town/city | 0 | 0 | 0 | 0 | 0
is within | touches | city/municipality | 0 | 0 | 0 | 0 | 0

Table 10.: Result of topological relation conversions using geometry types as context.

Description | Predicate | Spatial Context | Frequency | Accuracy | Accuracy without Context | Entropy | Entropy without Context
is in | within | Polygon/MultiPolygon | 10 | 1 | 0.909 | 0 | 0.157
is neighboring | touches | Polygon/Polygon | 10 | 1 | 1 | 0 | 0
is bordered by | touches | Polygon/MultiPolygon | 10 | 1 | 1 | 0 | 0
is the county seat of | within | Polygon/Polygon | 10 | 1 | 1 | 0 | 0
extend into | overlaps | Polygon/Polygon | 8 | 0.714 | 0.421 | 0.307 | 0.491
connect C and | crosses | LineString/MultiPolygon | 9 | 0.412 | 0.474 | 0.348 | 0.355
is surrounded by | touches | Polygon/MultiPolygon | 0 | 0 | 0 | 0 | 0
is on | crosses | Polygon/LineString | 0 | 0 | 0 | 0 | 0.327

Table 11.: Accuracy and entropy changes for conversion pairs with place names (ordered by the absolute value of the change; ∗ marks pairs with entropy reduction).

Accuracy improves: 1) is bounded by → touches∗ 2) is surrounded by → touches 3) is suburb of → touches 4) is on → crosses 5) is part of → within∗ 6) is between C and → touches∗ 7) is partly in → touches 8) is suburb of → disjoint 9) is between C and → disjoint∗ 10) is near → touches
Accuracy unchanged (remains 1): 1) includes → contains 2) borders → touches 3) is bordered by → touches 4) is neighboring → touches 5) is located in → within 6) is the county seat of → within
Accuracy unchanged (remains 0): 7) is bordered by → disjoint 8) is in → overlaps∗ 9) on the shore of → overlaps 10) is within → touches
Accuracy declines: 1) is mostly in → overlaps 2) is near → disjoint 3) is along → crosses∗ 4) is partly in → overlaps 5) is situated on → overlaps 6) is adjacent to → touches 7) extend into → overlaps 8) is in → within 9) is home to → contains 10) connect C and → crosses∗

5.3.3. Conversion pairs with place names The accuracy and entropy of the topological
relation conversions with place names were also compared to the metrics obtained without the context. As shown in Table 11, mentioning place names did not necessarily improve the accuracy of the conversion or guide the LLM to a preferred answer. GPT-4’s explanations indicate that 1) it focuses on the topological relationships between general geographic locations or boundaries rather than leveraging specific knowledge about each place; and 2) the approach tends to exclude predicates associated with possibly inaccurate and abstract geometries. For instance, in analyzing “A is along B”, where A is Brazos Bend, Texas, and B is the Brazos River, the reasoning begins with “This suggests a specific geographical relationship between a place (A) and a river (B). The term ‘along’ typically indicates that A is situated in a linear arrangement adjacent to B, but not necessarily crossing it or being contained within it.”

6. Discussion

In this section, we further discuss whether the confusion between topological predicates aligns with the corresponding conceptual neighborhoods of topological spatial relations (Egenhofer and Al-Taha, 1992; Egenhofer and Mark, 1995; Formica et al., 2018), as well as the confusion in geometry generation and the confusion in vernacular description conversion.

6.1. Confusion between topological predicates in topological spatial relation qualification

When using GPT-4 (zero-shot learning) for topological spatial relation qualification, the confusion matrices for all the geometry type combinations are shown in Figure 11. We compare the topological predicate pairs that confuse the GPT models with the classic conceptual neighborhood graphs in Figure 1. The observations are twofold: 1) The most frequently confused topological spatial relation for a given predicate depends on the geometry types involved.
For example, consider the predicate “overlaps”. In a LineString/LineString relationship, it is rarely classified correctly and is often confused with “crosses”, “equals” or “touches”. However, in a Polygon/Polygon relationship, “overlaps” is more likely to be correctly identified, though it may occasionally be confused with “contains” or “disjoint”. Another illustrative example involves the predicate “touches”: a Point that “touches” a LineString or Polygon, or a LineString that “touches” a Polygon, is frequently mistaken for “within”, while such confusion is less common between two geometries of the same dimension, such as two Polygons or two LineStrings. These examples suggest the varied degree to which an LLM understands the formal geometry boundaries associated with geometry types, particularly their dimensions, which is crucial in identifying formal topological spatial relations. However, the constraints of the formal definitions may contradict common conceptual interpretations, such as excluding a polygon from containing its own boundary, leading to fewer such occurrences in GPT-4’s responses. 2) For the same geometry type combination, distinguishing certain pairs of topological spatial relations is more challenging than others. These pairs mostly fall within the conceptual neighborhood, though exceptions exist. Take LineString/LineString as an example: GPT-4 can identify “crosses”, “disjoint”, and “equals” more accurately, but it struggles with predicates like “contains” and “overlaps”, frequently confusing them with “crosses” or “touches”. These four topological spatial relations all require that the two geometries share elements such as points or line segments, and they might be used interchangeably in daily language. This challenge highlights that the ambiguous semantics of these predicates can encompass scenarios broader than their strict formal definitions. Overall, the issues observed in Task 1 reflect an alignment with everyday spatial reasoning: while formal definitions are precise and dimension-contingent, everyday language and intuitive reasoning often blur the distinctions.

Figure 11.: Confusion matrices between topological predicates in relation qualification.

6.2. Confusion between topological predicates in geometry generation

The confusion pattern changes when leveraging an LLM to generate geometries given a spatial query and the required geometry type, as shown in Figure 12. The findings can be summarized as follows: 1) Directionality in describing the topological relation between two geometry types matters. For example, generating a Polygon that “crosses” a LineString proves challenging for GPT-4, while the reversed query (generating a LineString that “crosses” a Polygon) is handled more effectively. Similarly, the model is more successful in generating a Polygon that “contains” a Point, LineString, or another Polygon, but struggles to produce a Point, LineString, or Polygon that is “within” a Polygon. This asymmetry can be attributed to the model’s approach of extracting coordinates from the query geometry to construct the second geometry, which limits its ability to conceptualize spatial relationships beyond the provided coordinates. 2) The geometry type of the reference object affects the results. Figure 13 provides examples of LineString/LineString cases where the generated topological spatial relations differed from the predicate in the spatial query. As observed from these examples, even when the spatial queries specify the reference object geometry type as LineString, the model sometimes applies definitions for Polygons when a line forms a closed shape.
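Such closed-shape cases can be screened for directly from the WKT text before recomputing the relations; below is a minimal stdlib sketch (the parser is our simplification and handles only plain LINESTRING strings):

```python
# Detect whether a simple WKT LINESTRING starts and ends at the same
# point, i.e., whether a model might treat it as a Polygon boundary.
# A closed ring needs at least 4 coordinate pairs.
def is_closed_linestring(wkt_text):
    coords = wkt_text.strip().removeprefix("LINESTRING").strip(" ()")
    points = [tuple(map(float, p.split())) for p in coords.split(",")]
    return len(points) >= 4 and points[0] == points[-1]

print(is_closed_linestring("LINESTRING (0 0, 0 1, 1 1, 0 0)"))   # True
print(is_closed_linestring("LINESTRING (0 0, 2 2)"))             # False
```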
In this case, when we manually changed the reference object geometry type into Polygon and recompute its topological relations with the LLM-generated geometry, 223 out of 391 queries (across all prompts) with closed geometries were found to exhibit the desired topological relationship. This observation suggests GPT-4’s perception based on the provided coordinates over the geometry type specified in the text, inspiring us to further explore the cognition potential of the LLMs. Figure 12.: Confusion matrices between spatial predicates in geometry generation. 6.3. Confusion between topological spatial relations in conversion While GPT-4 can accurately convert several vernacular descriptions to corresponding formal predicates, there are instances where its performance falls short. This can be attributed to the mismatch between crispy geometry and the vague human perception of place boundaries. We can further divide it into three categories. 1) The abstraction of spatial entities’ shapes in the spatial 25 Figure 13.: Invalid synthetic geometries generated by GPT-4 for LineString/LineString relations with close-shape objects. dataset may differ from those used in descriptions. For example, when converting “is along” for Brazos Bend, Texas, and Brazos River, Texas, GPT-4 returned “touches” when considering Brazos Bend as a region (Polygon) and “within” when considering Brazos Bend as a Point. 2) The computed topological spatial relations can be sensitive to the marked shape points, while human perception can tolerate such systematic errors, yielding the description of the relations in the conceptual neighborhood of the ground truth. A typical example is when the ground truth label
of “is suburb of” is “disjoint”, but the two cities look like they “touch” each other on the map. 3) Official geographic boundaries might differ from people’s perception of a place (Gao et al., 2017). In our dataset, “is within” and “is an enclave of” can map to “touches”, but the LLM would consistently output “within”. For instance, the City of Shullsburg, Wisconsin, and the Town of Shullsburg, Wisconsin, illustrate this discrepancy7. Although the City of Shullsburg is enclosed by the Town of Shullsburg, the city boundary is separated from the town boundary, creating a hole in the town boundary. In summary, even though GPT-4’s responses can be partly interpreted through the conceptual neighborhood of topological spatial relations, challenges remain due to the vagueness of real-world geographic entity boundaries and human perception of shapes and places.

7. Conclusion and future work

This study evaluates the ability of LLMs, including GPT-3.5, GPT-4, and DeepSeek-R1-14B, to process, represent, and reason with topological spatial relations. We designed a workflow to assess the efficacy of LLMs in addressing three typical problems involving topological spatial relations. The core idea involves converting geometric objects into textual strings (WKT), which can then be decoded and utilized for spatial reasoning. The first task, topological spatial relation qualification, focuses on determining whether such textual representation retains the necessary geometric information for deriving named topological predicates. The second task explores the feasibility of conducting geospatial queries through semantic search, where LLMs can generate a geometry to augment the query and also generate embeddings. The third task presents an everyday scenario where an LLM serves as a translator to convert vernacular descriptions of spatial relations into formalized topological predicates based on its capability to understand linguistic patterns.
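The WKT-centered workflow can be made concrete. As a hedged illustration with invented coordinates (assuming the shapely library, which the paper does not explicitly name as its geometry engine), the enclave configuration discussed in Section 6.3 looks as follows once both boundaries are encoded as WKT:

```python
# Sketch of the enclave case with invented coordinates (assumes shapely):
# the "town" polygon has a hole exactly where the "city" polygon sits.
from shapely import wkt

town = wkt.loads(
    "POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0), "
    "(3 3, 7 3, 7 7, 3 7, 3 3))")
city = wkt.loads("POLYGON ((3 3, 7 3, 7 7, 3 7, 3 3))")

# The formal predicates disagree with the vernacular "is within":
print(city.within(town))   # False: the hole excludes the city's interior
print(city.touches(town))  # True: only the boundaries coincide
print(city.relate(town))   # DE-9IM matrix; interiors do not intersect ('FF...')
```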
From the multi-source geospatial datasets, we extract triplets to represent topological spatial relations in real-world spatial configurations. Using the triplets as input, we have compared the performance on the three evaluation tasks against ground truth data. In Task 1, both the random forest and GPT-based reasoning models can identify most relations correctly (over 0.6 accuracy on average), while some relations can be confounding. For GPT-3.5-turbo and GPT-4, few-shot prompt engineering is essential to improve performance, while the CoT prompting strategy had a negative impact on our topological spatial relation inference task. The thought-generation process and self-verification allow DeepSeek-R1-14B to perform spatial reasoning more intuitively and outperform GPT-3.5-turbo in accuracy. Further comparison with the conceptual neighborhood allows for a more quantitative understanding of the errors. Task 2 further verifies the challenge of replacing spatial queries with semantic search; however, improvements can be observed when we customize the query and augment it with LLM-generated geometries. The LLM-generated geometries are not only valid WKT but also achieve high accuracy (up to 0.76) in preserving topological spatial relations (or falling within their conceptual neighbors). In Task 3, the ability of LLMs to reduce ambiguity in spatial queries is relatively limited. However, in most cases, the generated outputs fall into the conceptual neighborhood of the ground-truth topological predicate. Moreover,

7 https://en.wikipedia.org/wiki/Shullsburg_(town),_Wisconsin
given various contexts, the changes in the preferred response show the ability of the LLM to reason using commonsense knowledge and typical spatial configurations. Interestingly, adding the geometry-type context in prompts yields greater performance improvement than adding the place-type context, but the benefit of adding context versus providing no context varies by instance. In conclusion, through the three tasks and intensive experiments, we systematically approach the overarching question of LLMs’ ability to understand geometry information and topological spatial relations, moving from the broader challenge to more targeted strategies involving spatial context, tailored prompting techniques, and specialized domain knowledge in GIScience. However, it is essential to acknowledge the limitations of our work. First, our focus was primarily on in-context learning, and we did not explore fine-tuning approaches, which could potentially yield further performance improvements. Retrieval-Augmented Generation (RAG) (Lewis et al., 2020) presents a promising approach for enhancing the qualitative spatial reasoning capabilities of LLMs by integrating external spatial databases, GIS tools, and domain-specific knowledge from GIScience. Unlike in-context learning, which allows for intuitive qualitative spatial reasoning, the effective implementation of RAG relies on the precise generation of formalism-based spatial queries from natural language input along with reliable high-resolution datasets. Improving the translation from natural language into symbolic form and logic also opens the door to neurosymbolic approaches (Sheth et al., 2023), in which LLMs serve as translators that convert user text input into symbolic representations, which are then processed by symbolic engines with strong capabilities in explanation, verification, and formal reasoning.
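This neurosymbolic division of labor can be sketched as follows. The pipeline below is hypothetical and for illustration only: a hand-written vernacular-to-predicate table stands in for the LLM translator, and shapely (an assumed toolkit) stands in for the symbolic engine.

```python
# Hypothetical neurosymbolic pipeline sketch (not the paper's system):
# a translator maps a vernacular phrase to a formal predicate, and a
# symbolic geometry engine (shapely here) performs the verifiable check.
from shapely import wkt

VERNACULAR_TO_PREDICATE = {  # stand-in for the LLM translation step
    "is inside": "within",
    "is next to": "touches",
    "is separate from": "disjoint",
}

def verify(description: str, wkt_a: str, wkt_b: str) -> bool:
    predicate = VERNACULAR_TO_PREDICATE[description]
    a, b = wkt.loads(wkt_a), wkt.loads(wkt_b)
    return getattr(a, predicate)(b)  # symbolic-engine evaluation

print(verify("is inside", "POINT (1 1)",
             "POLYGON ((0 0, 2 0, 2 2, 0 2, 0 0))"))  # True
```

In a full system the dictionary lookup would be replaced by an LLM call, while the geometry check remains exact and auditable.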
Realizing this potential requires addressing the challenges identified in this work, such as ambiguities in linguistic spatial descriptions, conceptual neighborhood relationships in topological reasoning, and the correct use of available GIS functions and analytical workflows. Additionally, our dataset is currently limited to the city and state levels, and further investigation into multi-scale spatial relations is still needed to fully capture the complexity of spatial interactions across different geographical scales and heterogeneous datasets. Moreover, the scope of this work is limited to topological relations in natural language. While we can handcraft datasets with ground-truth labels of formalized predicates for evaluation, the required mathematical computation makes LLMs less competent than spatial databases. Directional and distance relations introduce more vagueness in language use due to factors such as shape and scale, especially when two places cannot be viewed as points. In future work, we plan to explore other spatial relations using datasets such as the Geograph project8, which provides rich expressions of various spatial relations associated with geometries, text descriptions, and photos (M Hall et al., 2011). This will enable us to evaluate the capabilities of LLMs or multi-modal foundation models (e.g., vision-language geo-foundation models) in geospatial reasoning from a more comprehensive perspective (Mai et al., 2024). Lastly, we rephrased our own text instead of directly using paragraphs from DBpedia to allow for flexibility in introducing different context information. However, this approach may result in some loss of authenticity in language use, such as anaphora, which is prevalent in original text documents and worth exploring in future research.

8 http://www.geograph.org.uk

In summary, this research demonstrates the promise and limitations of using state-of-the-art LLMs to analyze topological spatial relations, while offering insights for future research on advancing LLMs with geographical knowledge, aiming to develop GeoAI foundation models capable of qualitative spatial reasoning and other spatial intelligence tasks.

Disclosure statement

The authors report there are no competing interests to declare.

Data and Codes Availability Statement

The data and codes supporting the main findings of this study are available at Figshare: https://doi.org/10.6084/m9.figshare.25127135.v1 and the GitHub repository at https://github.com/GeoDS/GeoFM-TopologicalRelations.

Acknowledgments

Song Gao acknowledges the funding support from the National Science Foundation funded AI institute [Grant No. 2112606] for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funder(s).

Notes on contributors

Yuhan Ji: Yuhan Ji is a PhD student in GIScience at the Department of Geography, University of Wisconsin-Madison. Her main research interests include transportation, geospatial data science, and GeoAI approaches to human mobility.

Song Gao: Dr. Song Gao is an associate professor in GIScience at the Department of Geography, University of Wisconsin-Madison. He holds a Ph.D. in Geography from the University of California, Santa Barbara. His main research interests include GeoAI, geospatial data science, spatial networks, human mobility, and social sensing.

Ying Nie: Ying Nie is an undergraduate student at the Department of Computer Sciences, University of Wisconsin-Madison. Her main research interests include geospatial data science and GeoAI.

Krzysztof Janowicz: Dr.
Krzysztof Janowicz is a Professor in Geoinformatics at the Department of Geography and Regional Research, University of Vienna. His research interests are knowledge representation and reasoning as they apply to spatial and geographic data, e.g. in the form of knowledge graphs.

Ivan Majic: Dr. Ivan Majic is a Postdoc researcher in Geoinformatics at the Department of Geography and Regional Research, University of Vienna. His research interests include Spatial Data Science, GeoAI, and Qualitative Spatial Reasoning.

References

Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774. Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al. (2022). Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736. Blackwell, R. E., Barry, J., and Cohn, A. G. (2024). Towards reproducible llm evaluation: Quantifying uncertainty in llm benchmark scores. arXiv preprint arXiv:2410.03492. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., and Yakhnenko, O. (2013). Translating embeddings for modeling multi-relational data. Advances in Neural Information Processing Systems, 26. Breiman, L. (2001). Random forests. Machine Learning, 45:5–32. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901. Carlson, L. A. and Logan, G. D. (2001).
Using spatial terms to select an object. Memory & Cognition, 29(6):883–892. Carpineto, C. and Romano, G. (2012). A survey of automatic query expansion in information retrieval. ACM Computing Surveys (CSUR), 44(1):1–50. Cervone, G., Sava, E., Huang, Q., Schnebele, E., Harrison, J., and Waters, N. (2016). Using Twitter for tasking remote-sensing data collection and damage assessment: 2013 boulder flood case study. International Journal of Remote Sensing, 37(1):100–124. Chen, H., Vasardani, M., Winter, S., and Tomko, M. (2018). A graph database model for knowledge extracted from place descriptions. ISPRS International Journal of Geo-Information, 7(6):221. Chen, J., Cohn, A. G., Liu, D., Wang, S., Ouyang, J., and Yu, Q. (2015). A survey of qualitative spatial representations. The Knowledge Engineering Review, 30(1):106–136. Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. D. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al. (2021). Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Clementini, E. and Cohn, A. G. (2014). RCC*-9 and CBM. In International Conference on Geographic Information Science, pages 349–365. Springer. Clementini, E. and Cohn, A. G. (2024). Extension of RCC*-9 to complex and three-dimensional features and its reasoning system. ISPRS International Journal of Geo-Information, 13(1):25. Clementini, E. and Di Felice, P. (1996). A model for representing topological relationships between complex geometric features in spatial databases. Information Sciences, 90(1-4):121–136. Clementini, E., Di Felice, P., and Van Oosterom, P. (1993). A small set of formal topological relationships suitable for end-user interaction. In International Symposium on Spatial Databases, pages 277–295. Springer. Clementini, E., Sharma, J., and Egenhofer, M. J. (1994). Modelling topological spatial relations: Strategies for query processing. Computers & Graphics, 18(6):815–822. Cohn, A. G. (2023).
An evaluation of ChatGPT-4’s qualitative spatial reasoning capabilities in RCC-8. arXiv preprint arXiv:2309.15577. Cohn, A. G. and Blackwell, R. E. (2024a). Can large language models reason about the region connection calculus? arXiv preprint arXiv:2411.19589. Cohn, A. G. and Blackwell, R. E. (2024b). Evaluating the Ability of Large Language Models to Reason About Cardinal Directions. In Adams, B., Griffin, A. L., Scheider, S., and McKenzie, G., editors, 16th International Conference on Spatial Information Theory (COSIT 2024), volume 315 of Leibniz International Proceedings in Informatics (LIPIcs), pages 28:1–28:9, Dagstuhl, Germany. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. Cohn, A. G. and Hazarika, S. M. (2001). Qualitative spatial representation and reasoning: An overview. Fundamenta Informaticae, 46(1-2):1–29. Cohn, A. G. and Hernandez-Orallo, J. (2023). Dialectical language model evaluation: An initial appraisal of the commonsense spatial reasoning abilities of llms. arXiv preprint arXiv:2304.11164. Cohn, A. G. and Renz, J. (2008). Qualitative spatial representation and reasoning. Foundations of Artificial Intelligence, 3:551–596. Cui, Z., Cohn, A. G., and Randell, D. A. (1993). Qualitative and topological relationships in spatial databases. In Advances in Spatial Databases: Third International Symposium, SSD’93 Singapore, June 23–25, 1993 Proceedings 3, pages 296–315. Springer. Das, S. (2023). Evaluating the capabilities of large language models for spatial and situational understanding. Thesis (MA), University of Cambridge. Devlin, J.,
Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Du, S., Qin, Q., Chen, D., and Wang, L. (2005). Spatial data query based on natural language spatial relations. In Proceedings. 2005 IEEE International Geoscience and Remote Sensing Symposium, 2005. IGARSS’05., volume 2, pages 1210–1213. IEEE. Egenhofer, M. J. and Al-Taha, K. K. (1992). Reasoning about gradual changes of topological relationships. In Goos, G., Hartmanis, J., Frank, A. U., Campari, I., and Formentini, U., editors, Theories and Methods of Spatio-Temporal Reasoning in Geographic Space, volume 639 of Lecture Notes in Computer Science, pages 196–219. Springer Berlin Heidelberg, Berlin, Heidelberg. Egenhofer, M. J. and Franzosa, R. D. (1991). Point-set topological spatial relations. International Journal of Geographical Information Systems, 5(2):161–174. Egenhofer, M. J. and Herring, J. R. (1991). Categorizing binary topological relations between regions, lines, and points in geographic databases. Technical report, Department of Surveying Engineering, University of Maine, Orono, Maine, USA. Egenhofer, M. J. and Mark, D. M. (1995). Modelling conceptual neighbourhoods of topological line-region relations. International Journal of Geographical Information Systems, 9(5):555–565. Fernandez, A. and Dube, S. (2023). Core building blocks: Next gen geo spatial gpt application. arXiv preprint arXiv:2310.11029. Formica, A., Mazzei, M., Pourabbas, E., and Rafanelli, M. (2018). Approximate answering of queries involving polyline–polyline topological relationships. Information Visualization, 17(2):128–145. Frank, A. U. (1992).
Qualitative spatial reasoning about distances and directions in geographic space. Journal of Visual Languages & Computing, 3(4):343–371. Freksa, C., Habel, C., and Wender, K. F. (1998). Spatial cognition: An interdisciplinary approach to representing and processing spatial knowledge, volume 1404. Springer Science & Business Media. Frolov, S., Hinz, T., Raue, F., Hees, J., and Dengel, A. (2021). Adversarial text-to-image synthesis: A review. Neural Networks, 144:187–209. Fulman, N., Memduhoğlu, A., and Zipf, A. (2024). Distortions in judged spatial relations in large language models. The Professional Geographer, 76(6):703–711. Gao, S. and Goodchild, M. F. (2013). Asking spatial questions to identify GIS functionality. In 2013 Fourth International Conference on Computing for Geospatial Research and Application, pages 106–110. IEEE. Gao, S., Hu, Y., and Li, W. (2023). Handbook of Geospatial Artificial Intelligence. CRC Press. Gao, S., Janowicz, K., Montello, D. R., Hu, Y., Yang, J.-A., McKenzie, G., Ju, Y., Gong, L., Adams, B., and Yan, B. (2017). A data-synthesis-driven method for detecting and extracting vague cognitive regions. International Journal of Geographical Information Science, 31(6):1245–1271. Gramacki, P., Martins, B., and Szymański, P. (2024). Evaluation of code llms on geospatial code generation. In Proceedings of the 7th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery, pages 54–62. Guo, D., Ge, S., Zhang, S., Gao, S., Tao, R., and Wang, Y. (2022). DeepSSN: A deep convolutional neural network to assess spatial scene similarity. Transactions
in GIS, 26(4):1914–1938. Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., et al. (2025). Deepseek-r1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948. Guo, R. (1998). Spatial objects and spatial relationships. Geo-spatial Information Science, 1(1):38–42. Head, C. G. (1984). The map as natural language: a paradigm for understanding. Cartographica: The International Journal for Geographic Information and Geovisualization, 21(1):1–32. Hu, Y., Goodchild, M., Zhu, A.-X., Yuan, M., Aydin, O., Bhaduri, B., Gao, S., Li, W., Lunga, D., and Newsam, S. (2024). A five-year milestone: reflections on advances and limitations in GeoAI research. Annals of GIS, 30(1):1–14. Hu, Y., Janowicz, K., and Prasad, S. (2014). Improving wikipedia-based place name disambiguation in short texts using structured data from dbpedia. In Proceedings of the 8th Workshop on Geographic Information Retrieval, pages 1–8. Hu, Y., Janowicz, K., Prasad, S., and Gao, S. (2015). Metadata topic harmonization and semantic search for linked-data-driven geoportals: A case study using ArcGIS Online. Transactions in GIS, 19(3):398–416. Hu, Y., Mai, G., Cundy, C., Choi, K., Lao, N., Liu, W., Lakhanpal, G., Zhou, R. Z., and Joseph, K. (2023). Geo-knowledge-guided gpt models improve the extraction of location descriptions from disaster-related social media messages. International Journal of Geographical Information Science, 37(11):2289–2318. Huang, W., Wang, J., and Cong, G. (2024). Zero-shot urban function inference with street view images through prompting a pretrained vision-language model. International Journal of Geographical Information Science, 38(7):1414–1442. Jakubik, J., Roy, S., Phillips, C., Fraccaro, P., Godwin, D., Zadrozny, B., Szwarcman, D., Gomes, C., Nyirjesy, G., Edwards, B., et al. (2023). Foundation models for generalist geospatial artificial intelligence.
arXiv preprint arXiv:2310.18660. Janowicz, K. (2023). Philosophical foundations of GeoAI: Exploring sustainability, diversity, and bias in GeoAI and spatial data science. arXiv preprint arXiv:2304.06508, pages 1–17. Janowicz, K., Gao, S., McKenzie, G., Hu, Y., and Bhaduri, B. (2020). GeoAI: spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. International Journal of Geographical Information Science, 34(4):625–636. Janowicz, K., Van Harmelen, F., Hendler, J. A., and Hitzler, P. (2015). Why the data train needs semantic rails. AI Magazine, 36(1):5–14. Ji, Y. and Gao, S. (2023). Evaluating the effectiveness of large language models in representing textual descriptions of geometry and spatial relations (short paper). In 12th International Conference on Geographic Information Science (GIScience 2023), volume 43, pages 1–6. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. Jones, C. B., Abdelmoty, A. I., Finch, D., Fu, G., and Vaid, S. (2004). The spirit spatial search engine: Architecture, ontologies and spatial indexing. In Geographic Information Science: Third International Conference, GIScience 2004, Adelphi, MD, USA, October 20-23, 2004. Proceedings 3, pages 125–139. Springer. Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., et al. (2023). ChatGPT for good? on opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L.,
Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y., et al. (2023). Segment anything. arXiv preprint arXiv:2304.02643. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. (2022). Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213. Kordjamshidi, P., Van Otterlo, M., and Moens, M.-F. (2011). Spatial role labeling: Towards extraction of spatial relations from natural language. ACM Transactions on Speech and Language Processing (TSLP), 8(3):1–36. Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-t., Rocktäschel, T., et al. (2020). Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474. Li, W., Lee, H., Wang, S., Hsu, C.-Y., and Arundel, S. T. (2023). Assessment of a new GeoAI foundation model for flood inundation mapping. In Proceedings of the 6th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery, pages 102–109. Li, Z. and Ning, H. (2023). Autonomous GIS: the next-generation AI-powered GIS. International Journal of Digital Earth, 16(2):4668–4686. Logeswaran, L. and Lee, H. (2018). An efficient framework for learning sentence representations. arXiv preprint arXiv:1803.02893. Loglisci, C., Ienco, D., Roche, M., Teisseire, M., and Malerba, D. (2012). An unsupervised framework for topological relations extraction from geographic documents. In Database and Expert Systems Applications: 23rd International Conference, DEXA 2012, Vienna, Austria, September 3-6, 2012. Proceedings, Part II 23, pages 48–55. Springer. M Hall, M., Smart, P. D., and Jones, C. B. (2011). Interpreting spatial language in image captions. Cognitive Processing, 12:67–94. Mai, G., Cundy, C., Choi, K., Hu, Y., Lao, N., and Ermon, S. (2022a). Towards a foundation model for geospatial artificial intelligence (vision paper).
In Proceedings of the 30th International Conference on Advances in Geographic Information Systems, pages 1–4. Mai, G., Huang, W., Sun, J., Song, S., Mishra, D., Liu, N., Gao, S., Liu, T., Cong, G., Hu, Y., et al. (2024). On the opportunities and challenges of foundation models for GeoAI (vision paper). ACM Transactions on Spatial Algorithms and Systems, 10(2):1–46. Mai, G., Janowicz, K., Cai, L., Zhu, R., Regalia, B., Yan, B., Shi, M., and Lao, N. (2020). SE-KGE: A location-aware knowledge graph embedding model for geographic question answering and spatial semantic lifting. Transactions in GIS, 24(3):623–655. Mai, G., Janowicz, K., Hu, Y., Gao, S., Yan, B., Zhu, R., Cai, L., and Lao, N. (2022b). A review of location encoding for GeoAI: methods and applications. International Journal of Geographical Information Science, 36(4):639–673. Majic, I., Naghizade, E., Winter, S., and Tomko, M. (2021). RIM: a ray intersection model for the analysis of the between relationship of spatial objects in a 2D plane. International Journal of Geographical Information Science, 35(5):893–918. Majic, I., Wang, Z., Janowicz, K., and Karimi, M. (2024). Spatial task-explicity matters in prompting large multimodal models for spatial planning. In Proceedings of the 7th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery, pages 99–105. Manvi, R., Khanna, S., Mai, G., Burke, M., Lobell, D., and Ermon, S. (2023). GeoLLM: Extracting geospatial
knowledge from large language models. arXiv preprint arXiv:2310.06213. Mark, D. M. and Egenhofer, M. J. (1994). Modeling spatial relations between lines and regions: combining formal mathematical models and human subjects testing. Cartography and Geographic Information Systems, 21(4):195–212. Mooney, P., Cui, W., Guan, B., and Juhász, L. (2023). Towards understanding the geospatial skills of ChatGPT: Taking a geographic information systems (GIS) exam. In Proceedings of the 6th ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery, pages 85–94. Muennighoff, N. (2022). Sgpt: GPT sentence embeddings for semantic search. arXiv preprint arXiv:2202.08904. Neelakantan, A., Xu, T., Puri, R., Radford, A., Han, J. M., Tworek, J., Yuan, Q., Tezak, N., Kim, J. W., Hallacy, C., et al. (2022). Text and code embeddings by contrastive pre-training. arXiv preprint arXiv:2201.10005. OpenAI (2022). Introducing ChatGPT. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al. (2018). Improving language understanding by generative pre-training. OpenAI blog, 0(1):12. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8):24. Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021). DALL·E: Creating images from text. Randell, D. A., Cui, Z., and Cohn, A. G. (1992). A spatial logic based on regions and connection. Principles of Knowledge Representation and Reasoning, 92:165–176. Rao, J., Gao, S., Mai, G., and Janowicz, K. (2023). Building privacy-preserving and secure geospatial artificial intelligence foundation models (vision paper). In Proceedings of the 31st ACM International Conference on Advances in Geographic Information Systems, pages 1–4. Reimers, N. and Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using siamese BERT-networks. arXiv preprint arXiv:1908.10084. Reis, R.
M., Egenhofer, M. J., and Matos, J. L. (2008). Conceptual neighborhoods of topological relations between lines. In Headway in Spatial Data Handling: 13th International Symposium on Spatial Data Handling, pages 557–574. Springer. Renz, J. and Nebel, B. (1998). Spatial reasoning with topological information. In Spatial Cognition: An Interdisciplinary Approach to Representing and Processing Spatial Knowledge, pages 351–371. Springer. Sack, J.-R. and Urrutia, J. (1999). Handbook of Computational Geometry. Elsevier. Scheider, S., Nyamsuren, E., Kruiger, H., and Xu, H. (2021). Geo-analytical question-answering with GIS. International Journal of Digital Earth, 14(1):1–14. Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423. Sheth, A., Roy, K., and Gaur, M. (2023). Neurosymbolic artificial intelligence (why, what, and how). IEEE Intelligent Systems, 38(3):56–62. Skoumas, G., Pfoser, D., Kyrillidis, A., and Sellis, T. (2016). Location estimation using crowdsourced spatial relations. ACM Transactions on Spatial Algorithms and Systems (TSAS), 2(2):1–23. Tao, R. and Xu, J. (2023). Mapping with ChatGPT. ISPRS International Journal of Geo-Information, 12(7):284. Tellex, S., Kollar, T., Dickerson, S., Walter, M., Banerjee, A., Teller, S., and Roy, N. (2011). Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 25, pages 1507–1514. Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng,
H.-T., Jin, A., Bos, T., Baker, L., Du, Y., et al. (2022). LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239. Tucker, S. (2024). A systematic review of geospatial location embedding approaches in large language models: A path to spatial ai systems. arXiv preprint arXiv:2401.10279. Wallgrün, J. O., Klippel, A., and Baldwin, T. (2014). Building a corpus of spatial relational expressions extracted from web documents. In Proceedings of the 8th Workshop on Geographic Information Retrieval, pages 1–8. Wallgrün, J. O., Klippel, A., and Karimzadeh, M. (2015). Towards contextualized models of spatial relations. In Proceedings of the 9th Workshop on Geographic Information Retrieval, pages 1–2. Wang, F. (2000). A fuzzy grammar and possibility theory-based natural language user interface for spatial queries. Fuzzy Sets and Systems, 113(1):147–159. Wang, L., Yang, N., and Wei, F. (2023). Query2doc: Query expansion with large language models. arXiv preprint arXiv:2303.07678. Wang, Z., Ye, X., and Tsou, M.-H. (2016). Spatial, temporal, and content analysis of Twitter for wildfire hazards. Natural Hazards, 83:523–540. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837. Wolter, D. and Wallgrün, J. O. (2012). Qualitative spatial reasoning for applications: New challenges and the sparq toolbox. Qualitative Spatio-Temporal Representation and Reasoning: Trends and Future Directions, page 336. Wu, K., Zhang, X., Dang, Y., and Ye, P. (2023a). Deep learning models for spatial relation extraction in text. Geo-spatial Information Science, 26(1):58–70. Wu, M., Huang, Q., Gao, S., and Zhang, Z. (2023b).
Mixed land use measurement and mapping with street view images and spatial context-aware prompts via zero-shot multimodal learning. International Journal of Applied Earth Observation and Geoinformation, 125:103591. Xie, Y., Wang, Z., Mai, G., Li, Y., Jia, X., Gao, S., and Wang, S. (2023). Geo-foundation models: Reality, gaps and opportunities. In Proceedings of the 31st ACM International Conference on Advances in Geographic Information Systems, pages 1–4. Yamada, Y., Bao, Y., Lampinen, A. K., Kasai, J., and Yildirim, I. (2023). Evaluating spatial understanding of large language models. arXiv preprint arXiv:2310.14540. Yan, B., Janowicz, K., Mai, G., and Gao, S. (2017). From ITDL to Place2Vec: Reasoning about place type similarity and relatedness by learning embeddings from augmented spatial contexts. In Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 1–10. Yang, J., Yang, S., Gupta, A. W., Han, R., Fei-Fei, L., and Xie, S. (2024). Thinking in space: How multimodal large language models see, remember, and recall spaces. arXiv preprint arXiv:2412.14171. Yang, X., Chen, A., PourNejatian, N., Shin, H. C., Smith, K. E., Parisien, C., Compas, C., Martin, C., Costa, A. B., Flores, M. G., et al. (2022). A large language model for electronic health records. npj Digital Medicine, 5(1):194. Yuan, Y. (2011). Extracting spatial relations from document for geographic information retrieval. In 2011 19th International Conference on Geoinformatics, pages 1–5. IEEE.
https://arxiv.org/abs/2505.17136v1
Zhang, Q., Kang, Y., and Roth, R. (2023). The Ethics of AI-Generated Maps: DALLE-2 and AI’s Implications for Cartography (Short Paper). In 12th International conference on geographic information science (GIScience 2023) , pages 93–98. Schloss Dagstuhl–Leibniz-Zentrum f¨ ur Informatik. Zhang, Y., Wei, C., He, Z., and Yu, W. (2024). GeoGPT: An assistant for understanding and processing geospatial tasks. International Journal of Applied Earth Observation and Geoinformation , 131:103976. Zheng, O., Abdel-Aty, M., Wang, D., Wang, Z., and Ding, S. (2023). ChatGPT is on the horizon: Could a large language model be all we need for intelligent transportation? arXiv preprint arXiv:2303.05382 . Zhu, R., Janowicz, K., Cai, L., and Mai, G. (2022). Reasoning over higher-order qualitative spatial relations via spatially explicit neural networks. International Journal of Geographical Information Science , 36(11):2194–2225. 33
https://arxiv.org/abs/2505.17136v1
Cog-TiPRO: Iterative Prompt Refinement with LLMs to Detect Cognitive Decline via Longitudinal Voice Assistant Commands

Kristin Qi1, Youxiang Zhu1, Caroline Summerour2, John A. Batsis2, Xiaohui Liang1
1Computer Science, University of Massachusetts, Boston, MA, USA
2School of Medicine, University of North Carolina, Chapel Hill, NC, USA

Abstract—Early detection of cognitive decline is crucial for enabling interventions that can slow neurodegenerative disease progression. Traditional diagnostic approaches rely on labor-intensive clinical assessments, which are impractical for frequent monitoring. Our pilot study investigates voice assistant systems (VAS) as non-invasive tools for detecting cognitive decline through longitudinal analysis of speech patterns in voice commands. Over an 18-month period, we collected voice commands from 35 older adults, with 15 participants providing daily at-home VAS interactions. To address the challenges of analyzing these short, unstructured, and noisy commands, we propose Cog-TiPRO, a framework that combines (1) LLM-driven iterative prompt refinement for linguistic feature extraction, (2) HuBERT-based acoustic feature extraction, and (3) transformer-based temporal modeling. Using iTransformer, our approach achieves 73.80% accuracy and 72.67% F1-score in detecting MCI, outperforming its baseline by 27.13%. Through our LLM approach, we identify linguistic features that uniquely characterize everyday command usage patterns in individuals experiencing cognitive decline.

Index Terms—Cognitive decline detection, LLMs, time-series

I. INTRODUCTION

Mild Cognitive Impairment (MCI), a precursor to dementia, is characterized by subtle declines in cognitive functions such as attention, memory, and language. Early detection of these changes enables timely intervention to mitigate progression to dementia. However, detection and continuous monitoring of MCI remain challenging due to the absence of gold-standard biomarkers.
Additionally, determining whether cognitive changes represent pathological decline requires complex clinical judgment. Currently, cognitive impairment often remains undetected until significant progression to dementia has occurred, leaving patients and families unprepared for managing its cognitive and functional impacts [1].

Traditional diagnostics for MCI rely on clinical assessments, including neuroimaging, laboratory tests, and standardized neuropsychological examinations such as the Montreal Cognitive Assessment (MoCA) [2]. While effective for diagnosis, these methods have limitations: they require administration by trained healthcare professionals in clinical settings, making them labor-intensive and expensive for frequent monitoring. Since speech impairments such as increased pauses and language disfluency are among the earliest symptoms of cognitive decline, speech processing represents a promising approach for non-invasive and cost-effective MCI detection that could potentially be implemented in everyday environments [3].

Prior research in cognitive monitoring through speech technology has predominantly utilized structured speech tasks (e.g., picture description), self-reported questionnaires, and controlled protocol-based interactions [4]. Recent studies have shown potential in analyzing voice interactions with smart speakers [5] and phone conversations [6], demonstrating associations with neuropsychological test scores. However, these approaches may not capture the unstructured free-speech patterns that characterize everyday voice interactions. Furthermore, existing methods often analyze cross-sectional rather than longitudinal data, which is crucial for detecting the subtle progression of cognitive decline.

To address these limitations of traditional assessment, we leverage Voice Assistant Systems (VAS) such as Amazon Alexa for passive data collection and continuous monitoring. VAS devices are increasingly present in homes and provide
https://arxiv.org/abs/2505.17137v1
advantages to assist older adults through voice-based, low-cost, and home-accessible features. VAS interactions capture real-world command usage that potentially provides more observational data than clinical assessments. Furthermore, VAS enables continuous data collection through both designed speech tasks and natural voice interactions during daily activities, allowing for the detection of pattern changes over time.

Analyzing VAS interaction data for cognitive decline detection presents several challenges. VAS commands are typically short and unstructured, unlike the spontaneous speech traditionally used in cognitive assessments. Daily VAS usage produces diverse speech samples without clinical standardization, often containing irrelevant information and noise that complicate analysis. Additionally, extracting features from everyday, at-home interactions requires identifying aspects of behavioral patterns relevant to cognitive decline while maintaining interpretability for healthcare professionals.

To address these challenges, we propose Cog-TiPRO, a framework combining: (1) large language model (LLM)-driven iterative prompt refinement for linguistic feature extraction, (2) multimodal fusion of acoustic and linguistic features, and (3) transformer-based temporal modeling. Our approach leverages LLMs to summarize linguistic features from voice commands and identify cognitive decline indicators. The iterative prompt refinement improves the quality of linguistic features relevant to MCI detection. By integrating multimodal features with temporal modeling, Cog-TiPRO effectively captures the subtle progression of cognitive decline through everyday voice interactions.

Our contributions can be summarized as:
• We conducted an 18-month longitudinal study collecting VAS interactions for cognitive monitoring through daily voice commands from 15 older adults in their homes.
• We propose Cog-TiPRO, a framework combining LLM-driven iterative prompt refinement, multimodal fusion, and time-series modeling to detect cognitive decline.
• Our iterative prompt optimization effectively identifies interpretable linguistic and behavioral features relevant to cognitive decline from everyday voice interactions.

II. RELATED WORKS

A. Detecting Cognitive Decline of Dementia

Cognitive decline detection research has primarily utilized the DementiaBank corpus, which contains speech recordings from structured clinical interviews such as picture description and fluency tasks [4]. These datasets present limitations: most contain cross-sectional data rather than longitudinal change tracking. Traditional approaches focus on identifying paralinguistic and linguistic markers indicative of dementia, including reduced word choices, increased pauses, reduced lexical diversity, and syntactic simplification. Recent natural language processing technologies enable automatic extraction of these features from speech transcripts, with transformer-based models such as BERT [7] demonstrating effectiveness in capturing semantic and lexical changes associated with dementia. However, these studies target the onset of dementia and Alzheimer's disease rather than MCI, presenting challenges in early-stage detection.

B. Large Language Model Application in Dementia Detection

LLMs such as GPT models have transformed automatic information extraction by leveraging their extensive knowledge and in-context learning capabilities to identify linguistic patterns, even with limited training data. Recent studies by Chen et al. [8] and Du et al. [9] show that LLMs excel at filtering irrelevant information while focusing on domain-specific knowledge indicative of dementia, improving classification accuracy and interpretability in dementia detection.

III. VOICE COMMANDS DATA COLLECTION

A. Data Collection Details

We conducted an
18-month pilot study (2022–2024) with 35 older adults aged ≥65 (16 females, 19 males; 61–80 years old, mean: 72, SD: 5.1) to collect VAS voice interactions for cognitive decline detection. The study comprised seven session-based assessments and daily in-home voice interactions. A subset of 15 participants (7 females, 8 males; mean age: 74) consented to daily in-home voice command collection. Participants were classified as MCI or HC based on MoCA scores (≥26 for HC, <26 for MCI). In the initial assessment, 8 were MCI and 7 were HC (labels). Ten participants had prior VAS experience. All participants had an average MDPQ-16 (Mobile Device Proficiency Questionnaire) [10] score above 4.5/5, indicating adequate technical proficiency.

The study combined two data collection elements:

1) Session-Based Assessments: Participants completed seven quarterly virtual sessions via Zoom. The initial session focused on setup and training, with subsequent sessions (1–7) including MoCA scoring, cognitive assessments, and interviews about VAS usefulness, feedback, and privacy concerns. A research assistant (RA) administered all activities.

2) At-Home Voice Commands: The 15 participants interacted with Alexa daily, with recordings and transcripts stored in Amazon's secure portal. All other data, such as consents, interviews, demographics, and MoCA scores, are stored in the HIPAA-compliant REDCap database. In the initial session, the RA provided setup training. The RA monitored participants' device usage and offered email support when participation significantly decreased. On average, each participant generated approximately 47 commands weekly.

B. Data Preprocessing

We clean both the audio recordings and the transcripts. For audio, we remove background noise and resample all files to a 16 kHz mono channel.
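The paper does not name a resampling tool, so the following is only a minimal self-contained sketch of the downmix-and-resample step, using linear interpolation rather than a production resampler (e.g., librosa or sox, which apply proper anti-aliasing):

```python
import numpy as np

def to_mono_16k(samples: np.ndarray, orig_sr: int, target_sr: int = 16000) -> np.ndarray:
    """Downmix a (num_samples, num_channels) array to mono and resample.

    Linear interpolation is a simplification for illustration; a real
    pipeline would use a polyphase or sinc resampler to avoid aliasing.
    """
    if samples.ndim == 2:                 # multi-channel -> average to mono
        samples = samples.mean(axis=1)
    duration = len(samples) / orig_sr     # seconds of audio
    n_out = int(round(duration * target_sr))
    t_in = np.linspace(0.0, duration, num=len(samples), endpoint=False)
    t_out = np.linspace(0.0, duration, num=n_out, endpoint=False)
    return np.interp(t_out, t_in, samples)

# one second of 44.1 kHz stereo noise -> one second of 16 kHz mono audio
stereo = np.random.randn(44100, 2)
mono = to_mono_16k(stereo, orig_sr=44100)
```

One second of input yields exactly 16000 mono samples, matching the target rate.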
For transcripts, we discard error cases, including system errors ("audio could not be understood"), multiple devices in the user's home ("audio was not intended for this device"), and empty commands. To analyze command semantics, we remove the wake-up phrase "Alexa". One participant used three alternatives: "computer", "echo", and "ziggy". We identify the first words of commands, rank them by frequency, and remove the top four high-ranked words that are determined to be wake-up phrases.

IV. PRELIMINARIES

A. Problem Formulation and Notation

We study cognitive status assessment using a longitudinal dataset of N = 15 participants, each participant i with a cognitive label y^{(i)} ∈ {HC, MCI} determined by clinical MoCA evaluation. Our data comprise voice interactions collected over 18 months, indexed as t ∈ {1, 2, ..., 18}. For each participant i at month t, we collect voice commands A_t^{(i)} and their corresponding transcripts X_t^{(i)}, from which we derive acoustic features v_t^{(i)} ∈ R^d and linguistic features u_t^{(i)} ∈ R^e. These are concatenated to form multimodal representations:

z_t^{(i)} = [v_t^{(i)}; u_t^{(i)}] ∈ R^{d+e}    (1)

The complete time-series sequence of multimodal features for participant i is denoted Z^{(i)} = {z_1^{(i)}, z_2^{(i)}, ..., z_t^{(i)}} ∈ R^{t×(d+e)}, where each z_t^{(i)} is month t's multimodal feature vector.

B. Cognitive Status Detection

We formulate cognitive status detection as a sequence classification problem over the temporal progression of voice commands. Given a participant's sequential inputs Z^{(i)} up to month t, our objective is to predict the participant's cognitive status at t+1 using a time-series model:
ŷ_{t+1}^{(i)} = T_Θ(z_1^{(i)}, z_2^{(i)}, ..., z_t^{(i)})    (2)

where T_Θ denotes a transformer-based time-series model with parameters Θ. The core challenge is to capture patterns in command usage over time that effectively indicate MCI.

Fig. 1. The Cog-TiPRO framework combines (1) iterative prompt refinement optimization for linguistic feature extraction, (2) multimodal fusion of acoustic and linguistic features, and (3) transformer-based temporal modeling to detect cognitive status of MCI vs. HC.

V. METHOD

In this section, we present Cog-TiPRO, which integrates three components: (1) iterative prompt refinement optimization for enhanced linguistic feature extraction, (2) multimodal fusion of acoustic and linguistic features, and (3) transformer-based time-series modeling for longitudinal analysis. Figure 1 illustrates our proposed framework. Algorithm 1 describes the full Cog-TiPRO process.

A. LLM-Driven Linguistic Aspect and Feature Extraction

To address the challenges of unstructured, short voice commands and limited training data, we develop an LLM-driven approach that identifies aspects related to cognitive markers and command usage as linguistic features. Below, we detail our iteratively optimized prompting strategy to generate feature summaries, which are then embedded using the Sentence-BERT (SBERT) all-MiniLM-L6-v2 model [11]. This approach leverages the language understanding and summarization capabilities of LLMs to identify aspects relevant to cognitive decline from command usage. The iterative refinement of LLM prompts aims to produce increasingly discriminative features.

1) Initial Prompt Design: Our initial prompt P_init consists of three components:

P_init = {P_context, P_instruction, P_exemplars}    (3)

• P_context provides background information about MCI-related cognitive markers, including lexical complexity, syntactic structure, disfluency, and semantic patterns.
• P_instruction guides the LLM to identify relevant linguistic features by leveraging its pretrained knowledge base.
• P_exemplars includes few-shot examples of command transcripts from both cognitive status groups (MCI vs. HC). These examples aid the LLM's understanding of command patterns in the two groups.

2) Iterative Prompt Optimization Details: Our algorithm begins with an initial prompt P_init and performs automatic prompt optimization using minibatches of training (D_train) and validation data (D_val) from monthly transcripts. The iterative refinement runs for three iterations, based on evidence that 3–5 iterations can yield effective prompts [12]. The optimized prompt improves the extraction of linguistic features relevant to MCI detection. Specifically, the process consists of:

Extractor: At each iteration j, the Extractor identifies cognitive markers and command usage patterns from minibatch D_train to generate linguistic features. We use P_init in the first iteration, then P_j (the optimized prompt) in subsequent ones to extract additional features.

u_t^{(i)} = LLM_Extractor(X_t^{(i)}, P_j)    (4)

where X_t^{(i)} is the transcript for participant i at month t.

Refiner: The LLM refines prompts in three steps:
• It uses the extracted linguistic features from the Extractor along with prompt P_j to instruct the LLM to act as a classifier and make direct MCI vs. HC predictions, denoted LLM_Classifier.
• It evaluates the LLM's predictions using F1-score on a validation minibatch (D_val) and performs error analysis to identify specific cases where the LLM produces incorrect predictions with P_j.
• The LLM provides feedback by analyzing misclassified cases and explicitly suggesting edits to create P_{j+1} that help avoid similar errors.
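The Extractor–Refiner iteration, together with keeping the best-scoring prompt (Eqs. (4)–(6)), can be sketched as below. This is a minimal illustration only: the callables stand in for the actual LLM prompting and F1 evaluation, which the paper does not release, and the toy stand-ins at the end are hypothetical.

```python
def refine_prompt(p_init, extract, classify, refine, f1_score,
                  d_train, d_val, max_iter=3):
    """Iterative prompt optimization: extract features with the current
    prompt (Eq. 4), classify, score on validation data, refine the prompt
    from the errors (Eq. 5), and keep the best-scoring prompt (Eq. 6)."""
    prompt, best_prompt, best_f1 = p_init, p_init, -1.0
    for _ in range(max_iter):
        feats = [extract(x, prompt) for x, _ in d_train]          # Eq. (4)
        preds = [classify(f, prompt) for f in feats]
        errors = [i for i, ((_, y), p) in enumerate(zip(d_train, preds)) if p != y]
        f1 = f1_score(prompt, d_val)
        if f1 > best_f1:                                          # Selector, Eq. (6)
            best_prompt, best_f1 = prompt, f1
        prompt = refine(prompt, errors)                           # Refiner, Eq. (5)
    return best_prompt, best_f1

# toy stand-ins: each refinement appends a hint; validation F1 peaks at one hint
toy_train = [("play music", "HC"), ("that thing on", "MCI")]
extract  = lambda x, p: x
classify = lambda f, p: "MCI" if "thing" in f else "HC"
refine   = lambda p, errs: p + " +hint"
f1       = lambda p, d_val: [0.5, 0.8, 0.6][p.count("+hint")]
best_prompt, best_f1 = refine_prompt("base prompt", extract, classify, refine,
                                     f1, toy_train, None)
```

With the toy scorer, the once-refined prompt wins, so the loop returns it rather than the final (over-refined) one, which is exactly the role of the Selector.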
After these steps, the current prompt P_j is updated to P_{j+1} to better capture additional linguistic features:

P_{j+1} = LLM_Refiner(P_j, AnalyzeErrors(errors_j))    (5)

Selector: After three iterations, we select the best prompt P* that achieves the highest F1-score on the validation minibatch for generating linguistic feature summaries:

P* = arg max_{j∈{1,2,3}} μ_j(D_val)    (6)

where μ denotes the F1-score evaluation metric. The optimized prompt P* is used to extract linguistic features and generate summaries that are then fed into SBERT.

B. Multimodal Feature Embeddings

1) Speech Feature Embeddings: We employ the pre-trained HuBERT model [13] to extract acoustic feature embeddings from monthly command audio. HuBERT is trained via self-supervision on large-scale speech corpora to capture speaker characteristics, including prosody (pitch, rhythm), spectral patterns, and articulation relevant to MCI detection. Formally, let H(·) be the HuBERT feature extractor. For each audio command A_t^{(i)}, we obtain an acoustic embedding v_t^{(i)} = H(A_t^{(i)}), v_t^{(i)} ∈ R^768.

2) Linguistic Feature Embeddings: Using the optimized prompt P*, we obtain LLM-summarized linguistic features from monthly command transcripts. We use SBERT to embed these summaries as linguistic feature embeddings:

u_t^{(i)} = SBERT(LLM_Extractor(X_t^{(i)}, P*)), u_t^{(i)} ∈ R^384    (7)

3) Temporal Sequence Modeling with Transformer: We use a time-series model to capture temporal patterns in the sequence of historical data from months 1 to t (t = 18). The model takes as input the sequential multimodal embeddings Z^{(i)} = {z_1^{(i)}, z_2^{(i)}, ..., z_t^{(i)}} ∈ R^{t×1152}, where each z_t^{(i)} = [v_t^{(i)}; u_t^{(i)}] represents the multimodal features from month t. We implement and compare transformer-based models such as PatchTST [14] and iTransformer [15], which have shown strong performance in modeling time-series data across several domains.
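With the dimensions above (HuBERT v_t ∈ R^768, SBERT u_t ∈ R^384), the per-month fusion z_t = [v_t; u_t] of Eq. (1) and the stacked input Z^(i) ∈ R^(18×1152) amount to plain concatenation and stacking. A sketch with random placeholder embeddings in place of the real extractor outputs:

```python
import numpy as np

d, e = 768, 384          # HuBERT and SBERT embedding sizes (Sec. V-B)
months = 18              # one embedding pair per month of the study

# hypothetical per-month acoustic and linguistic embeddings for one participant
v = [np.random.randn(d) for _ in range(months)]
u = [np.random.randn(e) for _ in range(months)]

# Eq. (1): z_t = [v_t; u_t], then stack months into Z in R^{t x (d+e)}
Z = np.stack([np.concatenate([v_t, u_t]) for v_t, u_t in zip(v, u)])
```

The resulting Z has shape (18, 1152), which is the sequence the time-series transformer consumes.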
The model takes Z^{(i)} as input and outputs a cognitive status prediction: ŷ^{(i)} = T(Z^{(i)}; Θ), where Θ denotes the learnable parameters of the transformer.

Algorithm 1: Cog-TiPRO
Require: A_t^{(i)}, X_t^{(i)}, P_init, y^{(i)}, MaxIter = 3, μ
1:  P ← P_init, P* ← P_init, μ_best ← 0
2:  for j = 1 to MaxIter do
3:      u_t^{(i)} ← LLM_Extractor(X_t^{(i)}, P)
4:      Θ_j ← LLM_Classifier(u_t^{(i)}, y^{(i)} | D_train)
5:      μ_j ← μ(Θ_j | D_val)
6:      if μ_j > μ_best then
7:          P* ← P; μ_best ← μ_j
8:      end if
9:      errors_j ← {(i, t) | Θ_j(u_t^{(i)}) ≠ y^{(i)}}
10:     P ← LLM_Refiner(P, AnalyzeErrors(Θ_j, errors_j))
11: end for
12: for each i, t do
13:     v_t^{(i)} ← H(A_t^{(i)}); summary_t^{(i)} ← LLM_Extractor(X_t^{(i)}, P*)
14:     u_t^{(i)} ← SBERT(summary_t^{(i)}); z_t^{(i)} ← [v_t^{(i)}; u_t^{(i)}]
15: end for
16: Z^{(i)} ← {z_1^{(i)}, ..., z_T^{(i)}}
17: Θ ← T(Z^{(i)}, y^{(i)})
18: return P*, Θ

VI. EXPERIMENTAL SETUP

Given the limited sample size, we use leave-one-subject-out (LOSO) cross-validation for robust evaluation. Each experiment is run 5 times with different random seeds, and we report the average results. We evaluate performance using classification metrics: accuracy and F1-score. For prompt optimization, we use a minibatch size of 64 (51 train, 13 validation). Transformer models are trained using the AdamW optimizer with a learning rate of 1e-4, a cosine annealing schedule with 100 warm-up steps, and a batch size of 8 until reaching the early-stopping criterion (no drop in validation loss for 10 epochs). Maximum training is 50 epochs. For LLMs, we obtain results by directly prompting without fine-tuning. Baseline comparisons include BERT-series models and LLMs such as Qwen2 [16], Flan-T5 [17], GLM-4 [18], Llama-3
[19], and GPT-series models (3.5-turbo, 4o-mini, and 4.1-mini) [20]. Training uses binary cross-entropy loss on an NVIDIA A100 GPU. All pretrained models use their default dimensional settings.

VII. RESULTS AND DISCUSSION

A. Comparison with Baseline Models

Table I compares our Cog-TiPRO framework with various baseline models for MCI detection. Each approach is evaluated using both PatchTST and iTransformer. The results reveal the performance of different models and the impact of LLMs in identifying linguistic features.

TABLE I
COMPARISON OF COG-TIPRO PERFORMANCE USING PATCHTST AND ITRANSFORMER FOR ACCURACY (ACC) AND F1-SCORE.

                                     PatchTST           iTransformer
Methods                 LLM     Acc (%)  F1 (%)     Acc (%)  F1 (%)
Acoustic
  HuBERT                 -       66.67    68.00      66.67    68.00
Linguistic
  BERT-Base              -       60.00    63.33      49.99    53.26
  RoBERTa-Base           -       66.67    67.57      63.33    56.67
  MedBERT                -       60.00    63.33      53.33    60.00
  BioBERT                -       53.33    60.00      46.67    53.38
  Qwen2-1.5B-Instruct    ✓       43.33    46.38      50.20    51.82
  Qwen2-7B-Instruct      ✓       66.67    65.61      63.32    66.00
  Flan-T5-Base           ✓       60.00    63.33      53.33    60.00
  GLM-4-9B-Chat          ✓       53.33    46.38      53.33    51.85
  Llama-3.2-1B           ✓       56.67    56.00      53.33    60.00
  Llama-3.1-8B-Instruct  ✓       60.55    50.99      66.67    65.61
  GPT3.5                 ✓       61.42    60.63      60.00    50.60
  GPT4o-mini             ✓       53.93    58.00      53.33    56.38
  GPT4.1-mini            ✓       51.50    46.70      53.33    48.83
Multimodal
  Ours (GPT3.5)          ✓       63.33    66.20      70.00    66.90
  Ours (GPT4o-mini)      ✓       66.67    63.33      73.80    72.67
  Ours (GPT4.1-mini)     ✓       62.00    61.33      71.33    70.67

We observe that HuBERT-based acoustic features achieve better results than many linguistic-only models. This finding suggests that HuBERT effectively extracts acoustic features that can serve as markers of cognitive function. Among non-LLM linguistic models, RoBERTa-Base achieves the strongest performance (66.67% accuracy, 67.57% F1), followed by BERT-Base and MedBERT (both 60.00% accuracy). BioBERT shows the lowest performance (53.33% accuracy), indicating that domain-specific pretraining on medical data is more beneficial than on biomedical data for cognitive assessment tasks.
For LLM-based models, we evaluate their capabilities without fine-tuning to assess their base effectiveness in identifying cognitive markers from voice commands. The results show notable performance variation. Larger models such as Qwen2-7B (66.67% accuracy, 65.61% F1) significantly outperform their smaller variants such as Qwen2-1.5B (43.33% accuracy, 46.38% F1). This difference indicates that higher model capacity enables better handling of complex tasks that require contextual understanding of unstructured voice commands. However, model architecture also affects performance. Flan-T5-Base (60.00% accuracy) outperforms the larger GLM-4-9B (53.33% accuracy), indicating that Flan-T5's bidirectional context modeling more efficiently captures linguistic patterns in voice commands.

Llama-3-series and GPT-series models show comparable and sometimes better performance than other models. GPT3.5 achieves strong results (61.42% accuracy, 60.63% F1). Notably, GPT4.1-mini shows less stable results, potentially due to its optimization for different types of text generation tasks rather than analytical classification.

Effectiveness of Cog-TiPRO: Cog-TiPRO consistently improves performance across most LLM variants. When combined with GPT4o-mini and iTransformer, it achieves the highest performance (73.80% accuracy, 72.67% F1), a 19.87% improvement over its baseline (53.93% accuracy). GPT3.5 with iTransformer ranks second (70.00% accuracy, 66.90% F1). These results indicate that advanced LLMs are more effective at identifying MCI-relevant aspects and patterns in command usage when guided by our iterative prompt refinement. This demonstrates the strength of our LLM-driven strategy.
Additionally, multimodal fusion outperforms single-modality baselines, suggesting that feature fusion provides complementary information that enhances feature quality and detection accuracy.

When comparing transformer architectures, iTransformer consistently outperforms PatchTST. This highlights that effective cognitive monitoring via speech requires more than just analyzing linguistic content. The integration of multimodal features and temporal patterns better captures the progression of cognitive decline. With GPT4o-mini, iTransformer achieves 73.80% accuracy versus PatchTST's 66.67% (a 7.13% increase). iTransformer better captures temporal patterns in our data, supporting its selection in our ablation study.

B. Ablation Study of Main Components

TABLE II
ABLATION STUDY OF COG-TIPRO COMPONENTS USING GPT4O-MINI WITH ITRANSFORMER.

Metric    Full    w/o Prompt    w/o Temporal    w/o Acoustic
Acc (%)   73.80      46.67         60.00           64.39
F1 (%)    72.67      43.33         62.92           69.06

Table II presents the ablation study of Cog-TiPRO components using GPT4o-mini with iTransformer. The baseline model without prompt refinement achieves the lowest performance (46.67% accuracy and 43.33% F1). This substantial performance degradation (a 27.13% decrease) underscores the importance of iterative prompt optimization in generating enriched linguistic features. When prompt optimization is included, the additional incorporation of time-series transformers yields an improvement of 13.80%. This highlights the value of sequential temporal modeling in longitudinal analysis for more accurate prediction. Although the inclusion of acoustic features shows a smaller improvement of 9.41%, acoustic characteristics provide essential complementary information not captured by transcripts alone. Combining all three components yields the best prediction performance.
This finding indicates that integrating LLM-driven linguistic features, acoustic features, and temporal modeling delivers optimal identification of aspects and usage patterns in voice commands for detecting cognitive decline.

C. Analysis of LLM-extracted Linguistic Features

Table III categorizes LLM-extracted features from voice commands into seven types, aligned with existing literature on cognitive decline markers. These indicators reflect distinct command usage behaviors between MCI and HC. Our analysis reveals that MCI participants exhibit difficulty increasing command variety, often repeat identical queries within short timeframes, and demonstrate degradation in organizing thoughts and language. LLM-extracted command usage patterns also reveal that MCI participants are less able to respond to unsuccessful attempts, engage in information-seeking, and perform conversational follow-ups despite their awareness of device functionality. In contrast, HC participants demonstrate greater flexibility and exploratory behavior in language and topic selection, more diverse word choices, and enhanced information-seeking engagement. This comprehensive feature set presents a profile of linguistic changes in MCI that can be passively monitored through everyday voice interactions. Importantly, these early language changes are detectable in unstructured voice command settings, yet may not be captured in traditional clinical assessments.

TABLE III
LLM-EXTRACTED LINGUISTIC FEATURES ASSOCIATED WITH MCI

(1) Reduced lexical diversity and word-finding difficulty:
• Impaired word retrieval with vague placeholders ("that thing"), re-use of identical noun phrases.
• Commands that don't match context ("turn off the rock"), inability to access specific nouns.

(2) Weak semantic/thematic coherence:
• Abrupt topic jumps (<10 s: music → math → lights), disconnected context ("play thunderstorm" right after "divide 57.5 by 16.9").
• Random insertions of irrelevant words without context ("Lent"), garbled tokens, failure to maintain
thematic coherence.

(3) Difficulty in self-correction:
• Starts correction but reverts to error ("wake me at 7... uh 8... no, 7"), abandoned corrections ("never mind").
• Persisting with incorrect commands despite system feedback, limited recognition of unsuccessful attempts and inability to request new ones.

(4) Weaker grammatical structures:
• Missing tense/number agreement, incomplete clauses, omission of necessary elements, subject-verb disagreement.
• Imperative-only style on "play/stop/volume" with rare "WH"-clauses, rare conversational or informational queries.

(5) Repetitive command patterns and actions:
• Bursts of ≥3 identical requests within <30 s, re-asking identical information despite an immediate answer, no or unrelated follow-ups.
• Rapid on/off toggles of devices, unnecessary multiple timer/alarm resets, alternating command patterns.

(6) Pragmatic function limitations:
• Day-long logs limited to alarm/light/volume, fewer exploratory queries.
• Very short commands when the task normally elicits longer phrasing; in HC, the same tasks use detailed phrasing.

(7) Disfluency overload:
• Filler pauses before the command ("um/uh... stop alarm"), long stretches of hedging before action.

D. Visualization of Prompt Optimization and Error Analysis

Figure 2 illustrates our iterative prompt refinement process across three stages. The left panel shows the basic initial prompt P_init. The middle panel displays the intermediate iteration, incorporating feedback from the error analysis shown at the bottom. The right panel presents the final optimized prompt P*, which achieves the highest validation performance.

Fig. 2. Prompt optimization process: initial template (left), intermediate template (middle), and final optimized prompt (right). The bottom panel shows an example of error analysis feedback for edits.

VIII. LIMITATIONS

Our study has several limitations. First, despite 18 months of longitudinal data, our pilot study includes a small sample size.
We plan to expand the number of participants in future work to apply Cog-TiPRO to broader and more diverse cohorts. Second, variability in home environments and reliance on voice commands may impact the quality of features related to cognitive decline and affect model performance. Our future studies will incorporate task designs and data collection that better capture established cognitive decline markers to improve prediction accuracy. Furthermore, we will address participants' concerns regarding data privacy, usage, and bias in future studies. Lastly, our iterative prompt optimization increases computational overhead and inference time, which is a challenge for deployment in low-resource settings and real-time monitoring. We will study this problem in future work.

IX. CONCLUSION

In this paper, we demonstrate the viability of using VAS for the longitudinal collection and analysis of voice commands to detect cognitive decline. By collecting daily interactions from 15 older adults over 18 months in their homes, we capture speech patterns that reflect cognitive functioning through our Cog-TiPRO framework. Cog-TiPRO leverages LLMs to extract linguistic features indicative of MCI from unstructured, short, and noisy voice commands. The integration of LLM-driven prompt refinement, acoustic feature extraction, and temporal modeling enables more accurate detection of MCI. Using iTransformer, our approach achieves 73.80% accuracy and 72.67% F1-score, outperforming its baseline by 27.13%. The extracted linguistic features not only reveal patterns specific to everyday VAS interactions but also align with established clinical markers of MCI, demonstrating the potential of passive monitoring for early
cognitive decline detection.

REFERENCES
[1] R. C. Petersen, "Mild cognitive impairment as a diagnostic entity," Journal of Internal Medicine, vol. 256, no. 3, pp. 183–194, 2004.
[2] J. Hobson, "The Montreal Cognitive Assessment (MoCA)," Occupational Medicine, vol. 65, no. 9, pp. 764–765, 2015.
[3] A. König, A. Satt, A. Sorin, R. Hoory, O. Toledo-Ronen, A. Derreumaux, V. Manera, F. Verhey, P. Aalten, P. H. Robert et al., "Automatic speech analysis for the assessment of patients with predementia and Alzheimer's disease," Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring, vol. 1, no. 1, pp. 112–124, 2015.
[4] J. T. Becker, F. Boller, O. L. Lopez, J. Saxton, and K. L. McGonigle, "The Pittsburgh aphasia corpus: A resource for the study of aphasia and dementia," Aphasiology, vol. 8, no. 6, pp. 511–524, 1994.
[5] K. Qi, J. Shi, C. Summerour, J. A. Batsis, and X. Liang, "Exploiting longitudinal speech sessions via voice assistant systems for early detection of cognitive decline," in HealthCom. IEEE, 2024, pp. 1–6.
[6] J. Robin, Y. Jia, T. Shirai, M. Thurmon, J. Du, A. Sanseau, G. Hristu, M. Dabbah, V. Rao, S. Kinoshita et al., "Smartphone-based digital phenotyping for dry eye toward point-of-care diagnosis," The Ocular Surface, vol. 18, no. 2, pp. 286–294, 2020.
[7] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, 2019, pp. 4171–4186.
[8] C.-P. Chen and J.-L. Li, "Profiling patient transcript using large language model reasoning augmentation for Alzheimer's disease detection," in 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2024, pp. 1–4.
[9] P.
Du, Novoa-Laurentiev et al. , “Enhancing early detection of cognitive decline in the elderly: a comparative study utilizing large language models in clinical notes,” EBioMedicine , vol. 109, 2024. [10] N. A. Roque and W. R. Boot, “A new tool for assessing mobile device proficiency in older adults: the mobile device proficiency questionnaire,” Journal of Applied Gerontology , vol. 37, no. 2, pp. 131–156, 2018. [11] N. Reimers and I. Gurevych, “Sentence-bert: Sentence embeddings using siamese bert-networks,” arXiv preprint arXiv:1908.10084 , 2019. [12] K. Opsahl-Ong, M. J. Ryan, J. Purtell, D. Broman, C. Potts, M. Zaharia, and O. Khattab, “Optimizing instructions and demonstrations for multi- stage language model programs,” EMNLP , 2024. [13] W.-N. Hsu, B. Bolte, Y .-H. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed, “Hubert: Self-supervised speech representation learning by masked prediction of hidden units,” IEEE/ACM transactions on audio, speech, and language processing , vol. 29, pp. 3451–3460, 2021. [14] M. Nie, D. Xue, R. Gao, X. Xu, W. Nie, Z. Tasneem, L. Yu, Y . Li, J. Song, A. Z. Qin et al. , “Time series is a special sequence: Forecasting with sample convolution and interaction,” arXiv preprint arXiv:2106.09305 , 2022. [15]
https://arxiv.org/abs/2505.17137v1
Y . Liu, H. Hu, S. Jiang, H. Xie, X. Yu, and other, “itransformer: Inverted transformers are effective for time series forecasting,” in Advances in Neural Information Processing Systems , vol. 36, 2023. [16] J. Bai, S. Lv, H. Qin, Z. Liu, S. Liu, Z. Lin et al. , “Qwen technical report,” arXiv preprint arXiv:2309.16609 , 2023. [17] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y . Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma et al. , “Scaling instruction-finetuned language models,” arXiv preprint arXiv:2210.11416 , 2022. [18] T. GLM, A. Zeng, B. Xu, B. Wang, C. Zhang, D. Yin et al. , “Chatglm: A family of large language models from glm-130b to glm-4 all tools,” arXiv preprint arXiv:2406.12793 , 2024. [19] A. Grattafiori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan et al. , “The llama 3 herd of models,” arXiv preprint arXiv:2407.21783 , 2024. [20] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat et al. , “Gpt-4 technical report,” arXiv preprint arXiv:2303.08774 , 2023.
https://arxiv.org/abs/2505.17137v1
arXiv:2505.17139v1 [cs.CL] 22 May 2025

EarthSE: A Benchmark Evaluating Earth Scientific Exploration Capability for Large Language Models

OpenEarthTeam∗
Shanghai Artificial Intelligence Laboratory

Figure 1: Cross-domain Evaluation of Mainstream LLMs on EarthSE. (a) EarthSE evaluates the capabilities of different LLMs in Earth's five spheres. (b) The multi-task evaluation demonstrates pronounced capability limitations in tasks such as calculation and term explanation.

Abstract

Advancements in Large Language Models (LLMs) drive interest in scientific applications, necessitating specialized benchmarks for domains such as Earth science. Existing benchmarks either present a general science focus devoid of Earth science specificity or cover isolated subdomains, lacking holistic evaluation. Furthermore, current benchmarks typically neglect the assessment of LLMs' capabilities in open-ended scientific exploration. In this paper, we present a comprehensive and professional benchmark for the Earth sciences, designed to evaluate the capabilities of LLMs in scientific exploration within this domain, spanning from fundamental to advanced levels. Leveraging a corpus of 100,000 research papers, we first construct two Question Answering (QA) datasets: Earth-Iron, which offers extensive question coverage for broad assessment, and Earth-Silver, which features a higher level of difficulty to evaluate professional depth. These datasets encompass five Earth spheres, 114 disciplines, and 11 task categories, assessing foundational knowledge crucial for scientific exploration. Most notably, we introduce Earth-Gold with new metrics, a dataset comprising open-ended multi-turn dialogues specifically designed to evaluate the advanced capabilities of LLMs in scientific exploration, including methodology induction, limitation analysis, and concept proposal.
Extensive experiments reveal limitations in 11 leading LLMs across different domains and tasks, highlighting considerable room for improvement in their scientific exploration capabilities. The benchmark is available on HuggingFace.

∗For detailed information about the authors, see Appendix A.

1 Introduction

In recent years, the rapid development of large language models (LLMs) has continuously pushed the boundaries of their capabilities, demonstrating remarkable performance in general knowledge question-answering tasks [23]. Against this backdrop, researchers are actively expanding the applications of LLMs to specialized and challenging scientific subfields [53, 2]. This trend not only enhances the reasoning ability of LLMs for complex scientific problems but also holds potential for AI-assisted scientific discovery [26, 47]. To systematically evaluate the performance of mainstream LLMs in scientific tasks, multiple science-oriented benchmarks [41, 33, 32] have been proposed. However, a comprehensive benchmark for evaluating LLMs in the critical field of Earth science remains conspicuously absent. Existing benchmarks predominantly fall into two categories: (a) general science benchmarks (e.g., ScienceQA [27], SciBench [41]), which lack the necessary specificity and depth in Earth science, often featuring questions of a common-sense nature; (b) single-subdomain benchmarks (e.g., ClimaQA [29], OceanBench [2]), which concentrate on particular areas such as climate or ocean science, thus failing to encompass the broad and interdisciplinary spectrum of Earth science. Furthermore, the prevalent question-answering (QA) format in most benchmarks overlooks the evaluation of LLMs in open-ended scientific exploration tasks. Consequently, the construction of a comprehensive and specialized Earth science benchmark that incorporates assessments of scientific exploration capabilities represents an underexplored yet vital area of research.
In this paper, we present EarthSE, a comprehensive and
https://arxiv.org/abs/2505.17139v1
specialized benchmark dataset for Earth science that uniquely incorporates evaluations of scientific exploration capabilities. To ensure both sufficient scale and high quality, we curate a corpus of over 100,000 Earth science academic papers as our primary data source. Through semantic analysis of titles and keywords, we categorize these papers into five major spheres [28] and 114 sub-disciplines. Leveraging publication venue and citation counts, we further stratify a subset of 10,000 papers into three distinct levels. The first-level paper collection constitutes the largest portion, emphasizing comprehensiveness. The second level focuses on papers from high-impact journals, prioritizing specialized knowledge. The third level comprises highly cited papers, specifically designed for evaluating scientific exploration. Scientific exploration demands multi-level competencies, spanning fundamental domain knowledge mastery to advanced critical reflection and innovative improvements. The former, amenable to explicit evaluation criteria, suits question-answering formats. The latter, lacking unified standards, is better assessed through open-ended multi-turn dialogues. Correspondingly, we develop two pipelines: one for QA data, using predefined task formulations to directly generate high-quality pairs from papers, and another for dialogue data, structuring papers to extract scientific inquiry workflows for multi-turn dialogues. Both pipelines include automated cleaning and human verification for quality. Consequently, we release three distinct datasets, each with unique characteristics. Earth-Iron (QA) encompasses 4133 questions across 11 tasks and 114 sub-disciplines (Figure 2), emphasizing foundational and broad scientific exploration capabilities. Earth-Silver (QA) prioritizes high-difficulty, specialized knowledge.
Earth-Gold (dialogue) focuses on evaluating advanced scientific exploration capabilities such as summarization, reflection, and innovation within open-ended dialogues. We summarize the contributions of this paper as follows:
- We built two QA datasets to evaluate the fundamental capabilities of scientific exploration: Earth-Iron, containing 4133 questions that span 114 subfields for broad assessment, and Earth-Silver, presenting more challenging and professional inquiries for deeper evaluation.
- We innovatively propose Earth-Gold and a new metric (i.e., SES) to assess advanced capabilities of scientific exploration (e.g., methodology induction, limitation analysis, and concept proposal) through open-ended multi-turn dialogue.
- Our systematic evaluation of 11 leading LLMs across these datasets reveals significant deficiencies in complex Earth science reasoning and open-ended scientific exploration.

Figure 2: EarthSE Covers 114 Disciplines in Earth Sciences.

2 Related Work

General Science Benchmark. In recent years, LLMs have increasingly focused on scientific reasoning [23], leading to the development of a range of benchmarks designed to assess model performance across diverse scientific domains [38, 51]. General-purpose science benchmarks such as ScienceQA [27], SciBench [41], and MMLU-Pro [42] cover disciplines from elementary to graduate levels. For instance, ScienceQA includes over 21,000 multi-modal questions spanning natural, social, and language sciences. MMLU-Pro extends MMLU [16] with broader college-level subjects to evaluate deeper reasoning. SciBench emphasizes college-level problem solving, drawing from canonical textbooks. Expert-curated datasets like GPQA [33], SuperGPQA [10], and HLE [32] offer high-quality, graduate-level questions to assess fine-grained domain expertise. However, these benchmarks primarily prioritize breadth over depth, often underrepresenting Earth sciences.
As a unique and complex discipline covering five spheres, Earth science requires more targeted evaluation. Specialized Subject Benchmark. In addition
to general benchmarks, several efforts have focused on subdomains within Earth sciences. For example, OceanGPT [2] introduces OceanBench, a benchmark for oceanographic tasks such as QA, fact verification, and applied writing. ClimaQA [29] and ClimateWatch [19] assess reasoning over climate data and models. ClimateBERT [43] and ClimaText [39] support stance detection and document classification in climate discourse. Geological tasks are addressed by GeoBench [8], which supports entity recognition and relation extraction, and SeafloorAI [30], which evaluates multimodal reasoning over sedimentary and geomorphological features. While these benchmarks are domain-relevant, each focuses on a narrow component of the Earth system, lacking a unified framework for evaluating LLMs across the full Earth science spectrum. Moreover, the majority of current benchmarks emphasize question answering, focusing on factual recall and reasoning over established knowledge. Yet a critical frontier lies in enabling LLMs to support scientific exploration and discovery, a more open-ended capability involving summarization and limitation analysis of existing methods, and hypothesis generation.

Table 1: Comparison between Existing Benchmarks and EarthSE. Comparison across (1) question volume, (2) data sources, (3) Earth sphere coverage, (4) graduate-level difficulty, (5) multiple subsets, and (6) scientific exploration assessment. EarthSE uniquely offers comprehensive domain coverage, professional difficulty, and evaluation of scientific exploration in Earth sciences.

Benchmark | #Ques. | Source | Earth Cover | Grad-Diff | Multi-Sub | Scientific Exploration
ScienceQA [27] | 21,208 | High School Courses | × | × | × | ×
MMLU-Pro [42] | 12,032 | Exam Questions | × | × | × | ×
SciBench [41] | 869 | 10 Textbooks | × | × | × | ×
GPQA [33] | 448 | Expert Curated | × | ✓ | ✓ | ×
OceanBench [2] | 12,426 | Domain Texts | × | ✓ | × | ×
ClimaQA [29] | 3502 | 18 Textbooks | × | ✓ | ✓ | ×
GeoBench [8] | 2439 | Exam Questions | × | ✓ | × | ×
EarthSE | 4133 | 100,000 Earth Science Papers | ✓ | ✓ | ✓ | ✓

3 EarthSE: Earth Science Exploration Benchmark

Overview. Scientific exploration requires capabilities ranging from basic Earth science knowledge to advanced reflection and the proposal of new methods. Starting from 100,000 papers in the Earth sciences, we constructed three datasets. Earth-Iron (QA) contains 4133 questions covering 114 sub-disciplines, focusing on a comprehensive evaluation of fundamental scientific exploration abilities. Earth-Silver (QA) features more difficult and challenging questions, focusing on more specialized Earth science knowledge. Earth-Gold (dialogue) evaluates advanced scientific exploration abilities through open-ended dialogue, focusing on reflecting on existing problems and proposing new approaches.

Figure 3: Construction Process of EarthSE. Automated construction of three-tiered benchmarks (Earth-Iron, Earth-Silver, Earth-Gold) from 100K papers, enabling comprehensive evaluation of knowledge coverage, professional proficiency, and scientific exploration capabilities.

3.1 Task Definition from Foundational to Advanced

Foundational Science Task Definition. To comprehensively evaluate the capabilities of LLMs in Earth exploration tasks, spanning from foundational to advanced levels, we defined 11 fundamental research tasks, as detailed in Table 2. These tasks cover a broad spectrum, ranging from basic understanding and reasoning to advanced research skills. These capabilities form the foundation for LLMs to conduct scientific exploration. For example, the "Dataset" task specifically focuses on datasets within the Earth sciences.
Given the significant diversity of data (such as one-dimensional seismic wave data and two-dimensional remote sensing images) across the numerous sub-disciplines of Earth science, enhancing LLMs' understanding of these varied datasets is crucial.

Scientific Exploration Task Definition. Beyond these 11 fundamental scientific exploration tasks, we also define a scientific discovery task formulated as open-ended dialogue. Through systematic analysis of these papers, we identify a recurrent research pattern: "analyzing limitations of existing work → proposing novel methods" [49]. This inspires our formalization of scientific exploration as an iterative self-negation process, mathematically expressed as (M_{i+1}, L_{i+1}) = LLM(M_i, L_i), where M denotes methodology, L represents the limitation analysis of M, and i indicates the dialogue turn. This recursive framework simulates the human scientific process of critically examining prior work's constraints and progressively improving upon them. Consequently, models with genuine scientific discovery potential must demonstrate robust self-critique and self-improvement capabilities [21].

3.2 Paper Corpus Collection

The study uses Earth science academic papers as data because: (a) their dense professional knowledge facilitates the creation of high-quality data; (b) their structured format aligns with general scientific discovery processes. During collection, we obtain and convert 100,000 PDFs to structured JSON using MinerU [40]. Semantic similarity [9] between abstracts and Earth sphere keywords (see Figure 7 for details) accurately classifies papers into five Earth spheres, as detailed in Table 3. We define the initial collection of 100,000 papers as the base dataset P_base. From this collection, we first select a subset P_hj comprising 10,000 papers published in high-impact Earth science journals (see Table 7 for details). We then extract the top 10% most cited papers from P_hj to form the high-citation core dataset P_hc with 1,000 papers. The entire selection process maintains strict balance across all five spheres, with detailed distributions provided in Table 3. Figure 4 further illustrates the citation distribution patterns across different spheres in the high-citation dataset P_hc. P_base, P_hj, and P_hc are used to construct Earth-Iron, Earth-Silver, and Earth-Gold respectively, as depicted in Figure 3.

Table 2: Foundational Task Categories and Descriptions.

Understanding
- Term Explanation [15]: Requires defining technical concepts (e.g., "Define 'medium-range weather forecast' and explain its significance").
- Knowledge QA [46]: Requires factual explanations with detailed descriptions of distributions or characteristics (e.g., "Describe the distribution of fishery resources in the East China Sea").
- Fact Verification [52]: Involves validating claims' accuracy (e.g., "Verify whether the reported sea level rise data is correct").

Reasoning
- Analysis [5]: Demands logical reasoning to draw conclusions from data (e.g., "Determine which evidence supports a given conclusion").
- Relation Extraction [22]: Needs analysis of interconnections between entities (e.g., "Summarize ecological relationships between butterflies and plants").
- Calculation [34]: Involves multi-step mathematical operations with numerical answers (e.g., "Calculate mean annual precipitation from the dataset").

Research
- Tool Utilization [50]: Involves recommending domain-specific methodologies (e.g., "Suggest atmospheric models for weather prediction").
- Literature Citation [4]: Demands proper academic references (e.g., "List key publications on ocean circulation including citations like (Ravuri et al., 2021)").
- Dataset [35]: Requires introducing or recommending research datasets (e.g., "Recommend ERA5 [17] for medium-range weather forecasting").
- Experimental Design [6]: Needs detailed methodological planning (e.g., "Design an experiment to investigate soil moisture's impact on photosynthesis").
- Code Generation [13]: Involves writing functional code (e.g., "Visualize rainfall data using Python's Matplotlib library [3]").

Table 3: Number of Papers on the Five Spheres of Earth.

Earth Sub-domain | P_base | P_hj | P_hc
Biosphere | 21,248 | 1,554 | 201
Lithosphere | 22,820 | 2,357 | 236
Atmosphere | 24,213 | 2,401 | 240
Hydrosphere | 23,425 | 2,254 | 226
Cryosphere | 11,402 | 1,969 | 217
Total | 103,108 | 10,535 | 1,120

Figure 4: Citations of Papers in P_hc.

3.3 Earth-Iron/Silver: QA Benchmark for Foundational Tasks

Earth-Iron and Earth-Silver are constructed from P_base and P_hj respectively, using the same pipeline shown in the first row of Figure 3. Characteristically, Earth-Iron offers a large and comprehensive set of questions, while Earth-Silver features more difficult and specialized questions.

Task Selection. Before QA construction, a small LLM analyzes each paper's abstract to determine the most suitable task type, from the 11 foundational tasks shown in Table 2, for QA generation. This content-aware assignment enhances question relevance. For instance, papers containing substantial numerical results preferentially generate computation questions.

QA Construction. In the question generation phase, paper content and task prompts are fed into GPT-4 [1] to produce QA pairs. To ensure answer accuracy, the model is required to provide complete Chain-of-Thought (CoT) [44] reasoning that includes both the final answer and its derivation process. These CoTs not only form integral parts of the questions but also serve as critical references for subsequent data cleaning, effectively mitigating errors caused by LLM hallucinations [45].

Data Cleaning. The data cleaning process employs a dual-phase strategy: (a) rule-based primary cleaning for formatting issues (missing options, irregular answers, improper references); (b) semantics-based advanced cleaning using LLMs to detect deeper problems (multiple correct options, irrelevant/incorrect answers). Throughout this process, CoT reasoning plays a pivotal role, as its explicit step-by-step derivation provides reliable evidence for error detection.
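The rule-based primary cleaning stage can be illustrated with a minimal sketch. The field names and concrete checks below are hypothetical illustrations of the kinds of rules described (missing options, irregular answers, improper references), not the authors' actual schema:

```python
import re

def passes_rule_based_cleaning(qa: dict) -> bool:
    """Hypothetical first-pass filter for generated QA items.

    Mirrors the kinds of rule-based checks described in the text:
    missing options, irregular answers, and improper references.
    The dict keys ("type", "options", "answer", "cot") are assumptions.
    """
    # Multiple-choice items must carry a non-empty option list.
    if qa.get("type") == "MC" and not qa.get("options"):
        return False
    # The answer must exist and, for MC, be a single letter A-D.
    answer = qa.get("answer", "")
    if not answer:
        return False
    if qa.get("type") == "MC" and not re.fullmatch(r"[A-D]", answer.strip()):
        return False
    # Reject answers that leak improper references to the source paper.
    if re.search(r"\b(Figure|Table|Section)\s+\d+", answer):
        return False
    # A CoT rationale must accompany the answer for the semantic phase.
    if not qa.get("cot"):
        return False
    return True
```

Items surviving this phase would then pass to the semantics-based cleaning stage, where an LLM checks for multiple correct options and irrelevant or incorrect answers.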
After data cleaning, we performed difficulty screening using mainstream LLMs. Based on testing results from both proprietary and open-source models, we removed questions with accuracy rates exceeding 80% to ensure sufficient overall challenge. For questions with accuracy rates between 60% and 80%, human experts determined their retention based on the question's value.

3.4 Earth-Gold: Open-ended Dialogue Benchmark for the Exploration Task

Most existing benchmarks predominantly employ question-answering formats, which, while effective for knowledge assessment, fail to capture the open-ended exploration paradigm characteristic of genuine scientific research. To evaluate LLMs' capabilities in open scientific exploration, we construct the Earth-Gold dataset derived from P_hc, a collection of 1,000 highly cited Earth science papers, as shown in the second row of Figure 3. These papers exemplify superior scientific reasoning patterns, making them ideal prototypes for studying scientific exploration behaviors.

Paper Structurization. Following the scientific exploration task definition in Section 3.1, we decompose each paper in P_hc into four structured components:
- Existing Method Summary (M_0): Extracted from related work sections, providing a comprehensive synthesis of current methodologies.
- Limitation Analysis (L_0): Derived from motivation sections, identifying precise shortcomings that constitute the starting point for new research.
- Novel Method Proposal (M_1): Abstracted from methods sections, capturing the core innovations.
- New Method Limitations (L_1): Distilled from discussion sections, anticipating potential constraints of proposed solutions.

Dialogue Construction. With M_0, L_0, M_1, and L_1 extracted, we employ GPT-4 to generate two-turn dialogues simulating human-AI collaborative scientific exploration. The first turn requires the AI assistant
to summarize existing methods and critically analyze their limitations given a research direction; the second turn directs the assistant to propose improved methods addressing these limitations while objectively assessing the new methods' potential constraints. This dialogue structure authentically replicates human ideation processes assisted by AI, demanding both extensive scientific knowledge and critical thinking abilities from LLMs.

Human Expert Validation. Following dialogue generation, domain experts conduct rigorous quality control using multi-criteria evaluation: a) Information Density Scoring: higher scores are assigned to dialogues containing specific analytical methods, datasets, or well-defined events (e.g., earthquakes, typhoons), while vague discussions receive lower scores. b) Methodological Quality Assessment: proposed methods receive higher scores if they are systematically enumerated and concretely implementable. Non-specific proposals are filtered out. Through this stringent validation process, we curate the final Earth-Gold dataset, which exclusively comprises high-quality scientific exploration dialogues, thus establishing a new benchmark for evaluating LLMs' research capabilities.

A New Metric: Scientific Exploration Score (SES). Earth-Gold evaluates open-ended scientific dialogue, where standard correctness metrics are inadequate due to the inherent diversity of valid scientific exploration beyond definitive answers. To address this, we innovatively measure performance using retention rate and diversity. For each user question in the dialogues, the LLM generates M diverse responses (temperature = 0.6) [31], and then we compute:
- Retention Rate (r): GPT-4 ranks the M generated answers and the reference answer by reflective depth and innovation. The retention rate r = (i − 1)/M, where i is the reference answer's rank, quantifies the model's preservation of high-quality scientific reasoning.
If r = 0, all of the LLM's answers are worse than the reference answer.
- Diversity (d): We compute an embedding v_i for each response using sentence-transformers, then derive the mean vector v̄. The average cosine similarity s̄ between v̄ and the v_i is calculated as in Equation 1. Since lower similarity indicates higher diversity, we define d = 1/s̄.

s̄ = (1/M) Σ_{i=1}^{M} cos(v_i, v̄), where v̄ = (1/M) Σ_{i=1}^{M} v_i.  (1)

Table 4: Evaluation Metrics for Leading Models on the Earth-Iron and Earth-Silver Benchmarks. For MC (Multiple Choice), TF (True/False), and FIB (Fill-in-the-Blank) questions, Accuracy (ACC) is used as the evaluation metric, reported in percentage (%). For FR (Free Response) questions, Win Rate (WR) and Semantic Similarity (SS) are employed as evaluation metrics. The first five metric columns report Earth-Iron (more comprehensive); the last five report Earth-Silver (more challenging).

Model | MC↑ | TF↑ | FIB↑ | FR-WR↑ | FR-SS↑ | MC↑ | TF↑ | FIB↑ | FR-WR↑ | FR-SS↑
Llama-3.1-8B [12] | 59.41 | 74.36 | 2.52 | 13.70 | 0.76 | 36.00 | 54.00 | 2.02 | 4.40 | 0.72
Llama-3.1-70B [12] | 91.56 | 87.91 | 6.63 | 61.85 | 0.80 | 56.00 | 63.60 | 4.00 | 18.40 | 0.80
Qwen-2.5-72B [48] | 92.42 | 86.26 | 11.96 | 92.05 | 0.79 | 53.60 | 64.40 | 9.20 | 44.40 | 0.78
DeepSeek-V3 [25] | 93.40 | 81.14 | 18.99 | 97.60 | 0.81 | 58.00 | 56.40 | 12.80 | 75.20 | 0.81
GPT-4o [18] | 93.28 | 88.28 | 19.12 | 82.00 | 0.81 | 55.60 | 69.60 | 18.40 | 22.00 | 0.80
Gemini-1.5 [37] | 90.83 | 75.82 | 13.65 | 95.60 | 0.79 | 54.40 | 44.80 | 8.00 | 62.80 | 0.78
Gemini-2.0 [36] | 92.67 | 87.55 | 14.69 | 77.10 | 0.77 | 54.40 | 72.40 | 11.60 | 34.80 | 0.75
Gemini-2.5 [36] | 93.15 | 77.84 | 17.02 | 95.81 | 0.75 | 58.00 | 55.60 | 13.65 | 74.30 | 0.75
Claude-3.5 [20] | 91.08 | 83.52 | 12.48 | 12.05 | 0.79 | 56.80 | 60.80 | 9.60 | 4.40 | 0.77
Claude-3.7 [24] | 94.01 | 61.90 | 20.68 | 75.00 | 0.80 | 62.40 | 41.20 | 17.20 | 28.40 | 0.79
Grok-3 [7] | 93.03 | 88.64 | 21.85 | 98.70 | 0.81 | 53.20 | 70.40 | 15.20 | 83.60 | 0.81
Mean | 89.53 | 81.20 | 14.50 | 72.86 | 0.78 | 54.40 | 59.38 | 11.06 | 41.15 | 0.77

The Scientific Exploration Score (SES) combines retention (r) and diversity (d) multiplicatively: SES = r × d. Since s̄ is relatively close to 1, we normalize s̄ to [0.9, 1] in actual calculations for better comparison, that is, SES = r / (10 × (s̄ − 0.9)). Higher SES values indicate superior open-ended scientific thinking, reflecting both answer quality (retention) and conceptual breadth (diversity).

4 Experiment

4.1 Experimental Setup

This paper introduces three evaluation datasets: Earth-Iron and Earth-Silver as question answering (QA) datasets, and Earth-Gold as an open-ended dialogue dataset. The QA datasets incorporate four question formats: multiple-choice (MC), fill-in-the-blank (FIB), true/false (TF), and free-response (FR) questions. For MC, FIB, and TF questions, we use accuracy (ACC) as the metric; for FR questions, we employ the win rate (WR) against the reference answer, evaluated by GPT-4, and semantic similarity (SS) as metrics. These metrics are detailed in Appendix E. For Earth-Gold, we use the SES defined in Section 3.4 to evaluate the performance of scientific exploration dialogues.

4.2 Earth-Iron/Silver: Assessing Broad Foundational Capabilities

Earth-Iron is a comprehensive QA benchmark consisting of 4133 questions spanning the Earth sciences domain. Table 4 presents comparative performance metrics across these question formats for various LLMs. Most models perform well on multiple-choice questions but struggle with fill-in-the-blank tasks. In free-response questions, performance varies significantly; some models (e.g., Claude-3.5) provide overly general answers, resulting in a poor Win Rate against the reference answers. We compute each model's overall competency across Earth's five spheres, as detailed in Figure 1. This figure visually illustrates the capability distribution of different models across the five spheres. Notably, Grok-3 achieves state-of-the-art (SOTA) performance across all spheres.
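The SES metric used in these experiments (Section 3.4) can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' code: plain lists of floats stand in for sentence-transformers embeddings, and the epsilon guard on the denominator is our addition for the case where s̄ falls outside the normalized range:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def ses(embeddings, reference_rank):
    """Scientific Exploration Score from M response embeddings.

    reference_rank is the rank i (1 = best) that the judge assigns the
    reference answer when ordering it together with the M responses.
    Returns (r, d, SES) using the paper's normalized form
    SES = r / (10 * (s_bar - 0.9)); the 1e-6 guard is an assumption.
    """
    M = len(embeddings)
    # Retention rate: fraction of generated answers ranked above the reference.
    r = (reference_rank - 1) / M
    # Mean embedding v_bar over the M responses.
    dim = len(embeddings[0])
    v_bar = [sum(e[k] for e in embeddings) / M for k in range(dim)]
    # Equation 1: average cosine similarity between each response and v_bar.
    s_bar = sum(cosine(e, v_bar) for e in embeddings) / M
    # Normalized diversity; s_bar is assumed to lie in (0.9, 1].
    d = 1.0 / max(10 * (s_bar - 0.9), 1e-6)
    return r, d, r * d
```

As a sanity check, three identical response embeddings give s̄ = 1 and d ≈ 1, so a model whose reference outranks every response (i = 1) scores r = 0 and SES = 0, the worst case.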
While some models, such as DeepSeek-V3 and Gemini-2.5, exhibit a relatively balanced capability distribution, others show specific weaknesses, for example GPT-4o in the cryosphere. The right panel of Figure 1 illustrates the capabilities of different models across the 11 fundamental scientific tasks. Most models exhibit a similar capability distribution, particularly the SOTA LLMs, likely due to similar training paradigms. Notably, most models perform relatively poorly on the calculation task. Additionally, the term explanation metric is generally low, indicating a potential weakness in LLMs' understanding of specialized Earth science terminology.

Figure 5: Case Study from Earth-Silver. Some models make mistakes when applying formulas in multiple steps, resulting in low accuracy on calculation questions.

Compared to Earth-Iron, Earth-Silver is constructed from a higher-quality corpus of original papers, resulting in a significantly more challenging dataset. The experimental results on the right side of Table 4 show that most LLMs achieve only around 54% accuracy on multiple-choice questions (compared to a 25% random-chance rate). This not only reflects the high difficulty of Earth-Silver but also indicates considerable room for improvement in the performance of current LLMs within the Earth sciences. On fill-in-the-blank tasks, the average performance of LLMs is notably low, at just 11%. Figure 5 presents representative examples from these challenging tasks alongside model responses. The case study analysis reveals frequent formula misuse and computational errors in Earth-science-specific calculations, indicating critical knowledge gaps in domain-specific reasoning.

4.3 Earth-Gold: Evaluating Open-ended Scientific Exploration Capability

Table 5: Scientific Exploration
Capabilities of Mainstream Models on Earth-Gold. Earth-Gold assesses LLMs' research potential through open-ended scientific dialogues, employing three metrics: Retention (proportion of responses outperforming references), Diversity (measure of divergent thinking), and their composite Scientific Exploration Score (SES).

Model | Retention (%)↑ | Diversity↑ | SES↑
Llama-3.1-8B | 8.00 | 3.9813 | 0.3301
Llama-3.1-70B | 11.78 | 1.4891 | 0.2453
Qwen-2.5-72B | 7.11 | 1.7158 | 0.1375
DeepSeek-V3 | 38.00 | 1.6942 | 0.6599
GPT-4o | 9.44 | 1.0347 | 0.0981
Gemini-1.5 | 19.67 | 1.4437 | 0.1989
Gemini-2.0 | 18.22 | 2.6290 | 0.6505
Gemini-2.5 | 50.56 | 2.7016 | 1.3710
Claude-3.5 | 14.67 | 1.5517 | 0.2396
Claude-3.7 | 31.89 | 1.7130 | 0.5465
Grok-3 | 17.22 | 1.5284 | 0.2727

Earth-Gold, a core innovation, evaluates LLMs in open scientific exploration dialogues using our novel Scientific Exploration Score (SES), which assesses divergent thinking (diversity d) and answer quality (retention rate r). Table 5 shows performance with each model generating M = 3 responses for the r and d calculation. The results indicate that most models achieve a retention rate of less than 50%, suggesting that over half of the generated responses underperform the reference answers. Regarding the diversity metric, most models exhibit low diversity scores. This indicates that when responding to open-ended questions, LLMs tend to generate similar answers across multiple attempts, which contrasts sharply with the divergent thinking characteristic of human scientists during scientific exploration. Our analysis identifies three primary issues in these subpar responses: (1) overly generic content lacking specific details, (2) non-specific analyses of limitations, and (3) excessively broad proposed solutions. Figure 6 provides a visual comparison between high-quality and low-quality responses.

Figure 6: Case Study from Earth-Gold. Earth-Gold is used to evaluate the performance of LLMs in open scientific exploration.
For the same user questions, the reference answers contain more details (such as the text highlighted in yellow), while the bad answers are very general.

4.4 CoT Guidance Enhances Performance on Challenging Questions

Table 6: The Impact of Using CoT Guidance on FIB ACC.

Model | Inspired CoT Steps | FIB (%)↑
DeepSeek-V3 | 0 (Baseline) | 12.80
DeepSeek-V3 | 1 | 21.60
DeepSeek-V3 | 2 | 29.60
DeepSeek-V3 | 3 | 45.60
GPT-4o | 0 (Baseline) | 18.40
GPT-4o | 1 | 25.60
GPT-4o | 2 | 38.80
GPT-4o | 3 | 55.60

The FIB accuracies in Table 4 are lower than those of other formats, likely due to the format's inherent difficulty compared to MC/TF. Since our question construction generates step-by-step explanations resembling CoT reasoning, we investigate whether providing LLMs with initial CoT steps during inference improves performance on these challenging questions. Table 6 reports the models' accuracy with varying numbers of provided CoT steps. As is evident, increasing the number of CoT steps offered as hints significantly improves accuracy. This suggests that for more challenging questions, even without modifying the model's fundamental capabilities, leveraging CoT-augmented reasoning at inference time can substantially boost performance. This finding provides a promising foundation for inference-time scaling strategies.

5 Conclusion

The increasing interest in leveraging Large Language Models (LLMs) for scientific applications underscores the need for specialized benchmarks, particularly in fundamental yet multifaceted domains like Earth science, which current general or fragmented benchmarks inadequately address, especially regarding open-ended scientific exploration. To this end, this paper introduces a comprehensive and professional Earth science benchmark comprising two QA datasets, Earth-Iron and Earth-Silver, and a novel open-ended dialogue dataset, Earth-Gold, built upon a 100,000-paper corpus. Earth-Iron provides
https://arxiv.org/abs/2505.17139v1
broad foundational assessment across five Earth spheres, 114 disciplines, and 11 tasks, while Earth-Silver offers challenging, professional-level questions. Notably, Earth-Gold evaluates advanced scientific exploration abilities through multi-turn dialogues. Experiments show LLMs perform reasonably on basic Earth-Iron QA but significantly worse on the challenging Earth-Silver. Earth-Gold reveals low retention and diversity in open-ended dialogues. These quantitative results highlight current LLMs' limitations in Earth science knowledge depth and genuine scientific exploration, indicating significant room for improvement. A limitation of this work is that it does not integrate the 11 tasks into a more complex task chain, highlighting a gap in achieving automated scientific discovery.

References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] Zhen Bi, Ningyu Zhang, Yida Xue, Yixin Ou, Daxiong Ji, Guozhou Zheng, and Huajun Chen. Oceangpt: A large language model for ocean science tasks. arXiv preprint arXiv:2310.02031, 2023.
[3] Ekaba Bisong. Matplotlib and seaborn. In Building machine learning and deep learning models on google cloud platform: A comprehensive guide for beginners, pages 151–165. Springer, 2019.
[4] Courtni Byun, Piper Vasicek, and Kevin Seppi. This reference does not exist: an exploration of llm citation accuracy and relevance. In Proceedings of the Third Workshop on Bridging Human–Computer Interaction and Natural Language Processing, pages 28–39, 2024.
[5] Hengxing Cai, Xiaochen Cai, Junhan Chang, Sihang Li, Lin Yao, Changxin Wang, Zhifeng Gao, Hongshuai Wang, Yongge Li, Mujie Lin, et al. Sciassess: Benchmarking llm proficiency in scientific literature analysis. arXiv preprint arXiv:2403.01976, 2024.
[6] Yanxi Chen, Yaliang Li, Bolin Ding, and Jingren Zhou. On the design and analysis of llm-based algorithms. arXiv preprint arXiv:2407.14788, 2024.
[7] Murillo Edson de Carvalho Souza and Li Weigang. Grok, gemini, chatgpt and deepseek: Comparison and applications in conversational artificial intelligence. INTELIGENCIA ARTIFICIAL, 2(1), 2025.
[8] Cheng Deng, Tianhang Zhang, Zhongmou He, Qiyuan Chen, Yuanyuan Shi, Yi Xu, Luoyi Fu, Weinan Zhang, Xinbing Wang, Chenghu Zhou, et al. K2: A foundation language model for geoscience knowledge understanding and utilization. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, pages 161–170, 2024.
[9] R Devika, Subramaniyaswamy Vairavasundaram, C Sakthi Jay Mahenthar, Vijayakumar Varadarajan, and Ketan Kotecha. A deep learning model based on bert and sentence transformer for semantic keyphrase extraction on big social data. IEEE Access, 9:165252–165261, 2021.
[10] Xinrun Du, Yifan Yao, Kaijing Ma, Bingli Wang, Tianyu Zheng, King Zhu, Minghao Liu, Yiming Liang, Xiaolong Jin, Zhenlin Wei, et al. Supergpqa: Scaling llm evaluation across 285 graduate disciplines. arXiv preprint arXiv:2502.14739, 2025.
[11] Eugene Garfield et al. The impact factor. Current contents, 25(20):3–7, 1994.
[12] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[13] Qiuhan Gu. Llm-based code generation method for golang compiler testing. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the
Foundations of Software Engineering, pages 2201–2203, 2023.
[14] Conghui He, Wei Li, Zhenjiang Jin, Chao Xu, Bin Wang, and Dahua Lin. Opendatalab: Empowering general artificial intelligence with open datasets. arXiv preprint arXiv:2407.13773, 2024.
[15] Alice Heiman. The accuracy, robustness, and readability of llm-generated sustainability-related word definitions. arXiv preprint arXiv:2502.00916, 2025.
[16] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
[17] Hans Hersbach, Bill Bell, Paul Berrisford, Shoji Hirahara, András Horányi, Joaquín Muñoz-Sabater, Julien Nicolas, Carole Peubey, Raluca Radu, Dinand Schepers, et al. The era5 global reanalysis. Quarterly journal of the royal meteorological society, 146(730):1999–2049, 2020.
[18] Raisa Islam and Owana Marzia Moushi. Gpt-4o: The cutting-edge advancement in multimodal llm. Authorea Preprints, 2024.
[19] Mathias Kraus, Julia Anna Bingler, Markus Leippold, Tobias Schimanski, Chiara Colesanti Senni, Dominik Stammbach, Saeid Ashraf Vaghefi, and Nicolas Webersinke. Enhancing large language models with climate resources. arXiv preprint arXiv:2304.00116, 2023.
[20] Ryo Kurokawa, Yuji Ohizumi, Jun Kanzawa, Mariko Kurokawa, Yuki Sonoda, Yuta Nakamura, Takao Kiguchi, Wataru Gonoi, and Osamu Abe. Diagnostic performances of claude 3 opus and claude 3.5 sonnet from patient history and key images in radiology's "diagnosis please" cases. Japanese Journal of Radiology, pages 1–4, 2024.
[21] Qianxi Li. Iterative large language models evolution through self-critique. 2024.
[22] Xingzuo Li, Kehai Chen, Yunfei Long, and Min Zhang. Llm with relation classifier for document-level relation extraction. arXiv preprint arXiv:2408.13889, 2024.
[23] Yunxin Li, Zhenyu Liu, Zitao Li, Xuanyu Zhang, Zhenran Xu, Xinyu Chen, Haoyuan Shi, Shenyuan Jiang, Xintong Wang, Jifang Wang, et al.
Perception, reason, think, and plan: A survey on large multimodal reasoning models. arXiv preprint arXiv:2505.04921, 2025.
[24] Bryan Lim, Ishith Seth, Molly Maxwell, Roberto Cuomo, Richard J Ross, and Warren M Rozen. Evaluating the efficacy of large language models in generating medical documentation: A comparative study of chatgpt-4, chatgpt-4o, and claude. Aesthetic Plastic Surgery, pages 1–12, 2025.
[25] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437, 2024.
[26] Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The ai scientist: Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292, 2024.
[27] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022.
[28] Stanley E Manahan. The five environmental spheres. In Environmental Science and Technology, pages 59–84. CRC Press, 2006.
[29] Veeramakali Vignesh Manivannan, Yasaman Jafari, Srikar Eranky, Spencer Ho, Rose Yu, Duncan Watson-Parris, Yian Ma, Leon Bergen, and Taylor Berg-Kirkpatrick. Climaqa: An automated evaluation framework for climate foundation models. arXiv preprint arXiv:2410.16701, 2024.
[30] Kien Nguyen, Fengchun Qiao, Arthur Trembanis, and Xi Peng. Seafloorai: A large-scale vision-language dataset for seafloor geological survey. Advances
in Neural Information Processing Systems, 37:22107–22123, 2024.
[31] Max Peeperkorn, Tom Kouwenhoven, Dan Brown, and Anna Jordanous. Is temperature the creativity parameter of large language models? arXiv preprint arXiv:2405.00492, 2024.
[32] Long Phan, Alice Gatti, Ziwen Han, Nathaniel Li, Josephina Hu, Hugh Zhang, Chen Bo Calvin Zhang, Mohamed Shaaban, John Ling, Sean Shi, et al. Humanity's last exam. arXiv preprint arXiv:2501.14249, 2025.
[33] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024.
[34] Andreas Stephan, Dawei Zhu, Matthias Aßenmacher, Xiaoyu Shen, and Benjamin Roth. From calculation to adjudication: Examining llm judges on mathematical reasoning tasks. arXiv preprint arXiv:2409.04168, 2024.
[35] Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. arXiv preprint arXiv:2211.09085, 2022.
[36] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
[37] Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.
[38] Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. Large language models in medicine. Nature medicine, 29(8):1930–1940, 2023.
[39] Francesco S Varini, Jordan Boyd-Graber, Massimiliano Ciaramita, and Markus Leippold.
Climatext: A dataset for climate change topic detection. arXiv preprint arXiv:2012.00483, 2020.
[40] Bin Wang, Chao Xu, Xiaomeng Zhao, Linke Ouyang, Fan Wu, Zhiyuan Zhao, Rui Xu, Kaiwen Liu, Yuan Qu, Fukai Shang, et al. Mineru: An open-source solution for precise document content extraction. arXiv preprint arXiv:2409.18839, 2024.
[41] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635, 2023.
[42] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024.
[43] Nicolas Webersinke, Mathias Kraus, Julia Anna Bingler, and Markus Leippold. Climatebert: A pretrained language model for climate-related text. arXiv preprint arXiv:2110.12010, 2021.
[44] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022.
[45] Jiaheng Wei, Yuanshun Yao, Jean-Francois Ton, Hongyi Guo, Andrew Estornell, and Yang Liu. Measuring and reducing llm hallucination without gold-standard answers. arXiv preprint arXiv:2402.10412, 2024.
[46] Yao Xu, Shizhu He, Jiabei Chen, Zihao
Wang, Yangqiu Song, Hanghang Tong, Guang Liu, Kang Liu, and Jun Zhao. Generate-on-graph: Treat llm as both agent and kg in incomplete knowledge graph question answering. arXiv preprint arXiv:2404.14741, 2024.
[47] Yutaro Yamada, Robert Tjarko Lange, Cong Lu, Shengran Hu, Chris Lu, Jakob Foerster, Jeff Clune, and David Ha. The ai scientist-v2: Workshop-level automated scientific discovery via agentic tree search. arXiv preprint arXiv:2504.08066, 2025.
[48] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[49] Zonglin Yang, Wanhao Liu, Ben Gao, Tong Xie, Yuqiang Li, Wanli Ouyang, Soujanya Poria, Erik Cambria, and Dongzhan Zhou. Moose-chem: Large language models for rediscovering unseen chemistry scientific hypotheses. arXiv preprint arXiv:2410.07076, 2024.
[50] Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Yongliang Shen, Ren Kan, Dongsheng Li, and Deqing Yang. Easytool: Enhancing llm-based agents with concise tool instruction. arXiv preprint arXiv:2401.06201, 2024.
[51] Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, et al. Chemllm: A chemical large language model. arXiv preprint arXiv:2402.06852, 2024.
[52] Xuan Zhang and Wei Gao. Towards llm-based fact verification on news claims with a hierarchical step-by-step prompting method. arXiv preprint arXiv:2310.00305, 2023.
[53] Yifan Zhang, Cheng Wei, Shangyou Wu, Zhengting He, and Wenhao Yu. Geogpt: Understanding and processing geospatial tasks through an autonomous gpt. arXiv preprint arXiv:2307.07930, 2023.

A Authors

The members of the OpenEarthTeam involved in this work are as follows:
Wanghan Xu∗, Shanghai Jiao Tong University, Shanghai Artificial Intelligence Laboratory, xu_wanghan@sjtu.edu.cn.
Xiangyu Zhao, Shanghai Artificial Intelligence Laboratory, zhaoxiangyu1@pjlab.org.cn.
Yuhao Zhou, Shanghai Artificial Intelligence Laboratory, zhouyuhao@pjlab.org.cn.
Xiaoyu Yue, Shanghai Artificial Intelligence Laboratory, yuexiaoyu@pjlab.org.cn.
Ben Fei, Shanghai Artificial Intelligence Laboratory, feiben@pjlab.org.cn.
Fenghua Ling, Shanghai Artificial Intelligence Laboratory, lingfenghua@pjlab.org.cn.
Wenlong Zhang†, Shanghai Artificial Intelligence Laboratory, zhangwenlong@pjlab.org.cn.
Lei Bai†, Shanghai Artificial Intelligence Laboratory, bailei@pjlab.org.cn.
∗This work was primarily conducted during the author's internship at the Shanghai Artificial Intelligence Laboratory.
†Corresponding author.

B Paper Selection and Tiered Classification

The proposed benchmark derives from 100,000 papers downloaded from OpenDataLab [14], which undergo a tiered classification process resulting in three distinct paper collections:
• Base paper collection (Pbase)
• High-journal-impact papers (Phj)
• High-citation papers (Phc)

The Pbase collection is constructed through semantic similarity analysis of paper titles and keywords. Our methodology employs both positive and negative keywords to enhance selection precision. Negative keywords effectively filter out semantically related but thematically irrelevant papers. For instance, when using "earth" as a positive keyword, astronomy-related papers may incorrectly appear relevant. To address this, we introduce "cosmos" as a negative keyword, requiring papers to maintain a similarity score below a specified threshold (0.1) with negative keywords while exceeding a minimum threshold (0.2) with positive keywords. This dual-threshold approach yields the final Pbase collection of 100,000 papers. Using an analogous positive and negative keyword approach, we accurately classify papers into five Earth science subdomains (hydrosphere, biosphere, lithosphere, atmosphere,
and cryosphere). The complete domain-specific keyword system is presented in Figure 7.

Positive and Negative Keywords

Overall Positive Keywords: Earth, Earth system, hydrosphere, biosphere, lithosphere, atmosphere, cryosphere.
Overall Negative Keywords: cell biology, virus, pharmaceuticals, chemistry, physics, astronomy, food science, proteins, microbiology.
Hydrosphere Positive Keywords: water cycle, ocean, rivers, lakes, groundwater, ice caps, aquifers, precipitation, evaporation, humidity.
Hydrosphere Negative Keywords: chemistry, universe, planets, astronomy, astrophysics, space, stars, galaxy, cosmology.
Biosphere Positive Keywords: ecosystem, biodiversity, habitat, species, biomes, ecological balance, carbon cycle.
Biosphere Negative Keywords: cell biology, chemistry, medicine, pharmacology, microbiology, biochemistry, toxicology, pathology, clinical.
Lithosphere Positive Keywords: earthquake, tectonic plates, earth's crust, minerals, rocks, soil, sediments, mountains, volcanoes, landforms, geological processes.
Lithosphere Negative Keywords: ancient texts, archaeology, culture, history, artifacts, civilization, prehistoric, mythology, anthropology.
Atmosphere Positive Keywords: stratosphere, troposphere, weather, climate, greenhouse gases, ozone layer, air pressure, humidity, winds, carbon dioxide, temperature.
Atmosphere Negative Keywords: universe, galaxy, astronomy, astrophysics, space, stars, planets, cosmology, black holes, nebula, solar system.
Cryosphere Positive Keywords: glaciers, ice sheets, sea ice, permafrost, snowpack, icebergs, frozen ground, climate change, albedo effect, polar regions.
Cryosphere Negative Keywords: frozen food, ice cream, refrigeration, freezing, cold storage, ice cubes, food preservation, chilling, frost.

Figure 7: Positive and Negative Keywords.
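The dual-threshold selection rule described above can be sketched as follows. This is a minimal illustration rather than the released implementation: the embedding vectors are assumed to come from some sentence-embedding model, and aggregating positive-keyword similarity with a maximum (while requiring every negative-keyword similarity to stay below its threshold) is one plausible reading of the described rule.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def passes_dual_threshold(paper_vec, pos_vecs, neg_vecs,
                          pos_thresh=0.2, neg_thresh=0.1):
    """Keep a paper only if it exceeds the positive-keyword threshold (0.2)
    and stays below the negative-keyword threshold (0.1) for every
    negative keyword."""
    pos_ok = max(cosine(paper_vec, v) for v in pos_vecs) > pos_thresh
    neg_ok = all(cosine(paper_vec, v) < neg_thresh for v in neg_vecs)
    return pos_ok and neg_ok
```

With "earth" as a positive keyword and "cosmos" as a negative one, an astronomy paper whose embedding is close to "cosmos" would be filtered out even if it also resembles "earth".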
From the initial paper collection Pbase, we select a subset of papers Phj published in high-quality journals based on multiple criteria including impact factor (IF) [11], disciplinary ranking, and expert assessment. Representative journals are listed in Table 7, with the complete journal inventory provided in Supplementary Materials.

Table 7: Partial List of Selected Journal Titles.
CELL, NATURE, SCIENCE, NATURE BIOTECHNOLOGY, NATURE CELL BIOLOGY, NATURE CHEMICAL BIOLOGY, NATURE CHEMISTRY, NATURE CLIMATE CHANGE, NATURE COMMUNICATIONS, NATURE DIGEST, NATURE GENETICS, NATURE GEOSCIENCE, NATURE IMMUNOLOGY, NATURE MATERIALS, NATURE MEDICINE, NATURE METHODS, NATURE NANOTECHNOLOGY, NATURE NEUROSCIENCE, NATURE PHOTONICS, NATURE PHYSICS, NATURE REVIEWS CANCER, NATURE REVIEWS CARDIOLOGY, NATURE REVIEWS CLINICAL ONCOLOGY, ...

Finally, we select the 1000 papers Phc with the highest number of citations from Phj to form a collection. The distribution of papers in each collection is shown in Table 3.

C QA Construction

When we construct QA questions, the prompts we use vary depending on the format and task of the question. For questions of different formats, we use the following prompts.

Prompt for Different Question Formats

Free Form QA
Free-form questions and answers allow for open-ended responses. These questions typically require detailed explanations, recommendations, or descriptions. The answers can vary in length and structure, depending on the complexity of the question. For example, a question about improving the marine environment may require a list of measures with explanations.
Format Rules: Answers can be in any format, including sentences, lists, or paragraphs. There are no strict restrictions on length or structure.

Multiple Choice
Multiple-choice questions provide a question followed by several answer choices, typically labeled with letters (e.g., A, B, C, D). Only one choice is correct. The task is to select the correct choice by providing the corresponding letter.
Format Rules: The question requires a prompt along with four options, one of which is the correct answer. The answer must be a single letter
corresponding to the correct choice (e.g., 'A', 'B', 'C', or 'D'). The answer should not include the full text of the choice.

True False
True/false questions require determining whether a given statement is correct or incorrect. The answer must be either 'True' or 'False', depending on the accuracy of the statement.
Format Rules: The answer must be exactly 'True' or 'False'. No additional text or explanations are allowed.

Fill in the Blank
Fill-in-the-blank questions provide a sentence or statement with one or more missing words, phrases, or numbers. The task is to complete the sentence by filling in the blank(s) with the correct word(s).
Format Rules: The answer must be the exact word or phrase that fits the blank. No additional text or explanations are allowed unless explicitly requested.

Figure 8: Prompt for Different Question Formats.

The prompts for different tasks are shown in Table 2. The complete QA prompt is shown in Figure 9.

Complete QA Construction Prompt

System Prompt
You are a highly skilled scientist with a deep expertise in reading and analyzing scientific literature. Below is a research paper that you will carefully examine. Your task is to generate a well-structured question and answer based on the provided instructions. Ensure that your output is precise, relevant, and adheres to the specified guidelines.

Instructions
1. Format: <Format Prompt>
2. Task: <Task Prompt>
3. Answer Explanation: Each question must be accompanied by a corresponding answer explanation.
4. Output Format: The output should be structured as a dictionary, including the following keys:
- question: The generated question.
- answer: The correct answer to the question.
- explanation: A list containing the explanation(s) for the answer.
5. Select One Specific Detail: Do not generalize the entire paper into a question.
Instead, carefully select the most relevant and specific part of the paper (e.g., a key finding, methodological detail, or discussion point) and craft a detailed question around it. The question should be highly specific and require a nuanced understanding of the paper to answer. Avoid generating general or overly simplistic questions.
6. Content Relevance and Accuracy: Ensure that the questions are directly derived from the content of the paper. Avoid generating questions that are irrelevant or based on incorrect facts. The questions should accurately reflect the findings, methodologies, or discussions presented in the paper.
7. Independent and Complete Question, Answer, and Explanation:
- Self-contained: The question, answer, and explanation must be self-contained and complete. Do not use phrases such as "this article", "this paper", "according to the article", "according to the paper", or similar references to the paper.
- Avoid Personal Pronouns: Additionally, avoid using personal pronouns like "we" or "our." The question, answer, and explanation must stand alone and be understandable without additional context.
- Avoid Analysis of the Paper: Do not include any analysis of the paper in the questions, answers, or explanations. Instead, transform the analysis into independent statements that are self-sufficient and do not rely on the paper for context.

Example
{
"question": "The generated question based on the paper.",
"answer": "The corresponding answer to the question.",
"explanation": [
"Step 1: Explanation
for the first step.",
"Step 2: Explanation for the second step.",
"Step 3: Explanation for the third step."
]
}

Figure 9: Complete QA Construction Prompt.

After constructing the QA pair, we also refine the QA. The main purpose is to remove extra references that may exist in the QA, such as mentions of the paper or of non-existent figures in the question. The refine prompt is as follows.

QA Refine Prompt

System Prompt
Please modify the following question, answer, and explanation to remove any expressions related to "paper," "article," "study," or similar references. Ensure that the question is complete and can be answered directly based on the explanation provided, without requiring any additional context or knowledge of the paper. After modification, maintain the original dictionary format and ensure that the meaning of the questions, answers, and explanations remains unchanged.

Instructions
1. Remove References to the Paper:
- Eliminate any phrases such as "this paper," "the article," "according to the study," or similar references.
- Ensure the question, answer, and explanation are self-contained and do not rely on external sources for understanding.
2. Maintain Clarity and Completeness:
- The question should be clear, specific, and able to stand alone, without referencing the paper.
- The answer should directly address the question without referencing the paper.
- The explanation should provide sufficient detail to justify the answer, using independent and self-sufficient statements, without referencing the paper.
3. Preserve the Dictionary Format:
- Keep the output in the original dictionary format, including the keys 'question', 'answer', and 'explanation'.
- Ensure that the meaning of the questions, answers, and explanations remains unchanged.

Figure 10: QA Refine Prompt.
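A lightweight post-check of the refined output can be sketched as below. The required keys come from the construction prompt and the banned self-references from the refine prompt; the helper function and its exact phrase list are illustrative, not part of the released pipeline.

```python
# Phrases the refine prompt asks to eliminate; this list is illustrative.
BANNED_PHRASES = ("this paper", "this article", "the study",
                  "according to the article", "according to the paper")

def validate_qa(item: dict) -> list:
    """Return a list of problems found in a constructed/refined QA dict."""
    problems = []
    # The construction prompt requires exactly these keys.
    for key in ("question", "answer", "explanation"):
        if key not in item:
            problems.append(f"missing key: {key}")
    # The explanation must be a list of step strings.
    if not isinstance(item.get("explanation"), list):
        problems.append("explanation must be a list of steps")
    # Scan all text fields for leftover self-references.
    text = " ".join([str(item.get("question", "")), str(item.get("answer", ""))]
                    + [str(s) for s in (item.get("explanation") or [])]).lower()
    for phrase in BANNED_PHRASES:
        if phrase in text:
            problems.append(f"self-reference to remove: {phrase!r}")
    return problems
```

An item that still says "according to the paper", or lacks the explanation list, would be sent back through the refine step.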
D Scientific Exploration Dialogue Construction

The Earth-Gold dataset construction from the high-citation paper collection Phc involves a two-phase processing pipeline:
1. Paper Structuring Phase: Decompose each paper into four core components:
• Summary of existing methods (M0)
• Limitations of existing methods (L0)
• Proposed new methods (M1)
• Potential limitations of new methods (L1)
2. Dialogue Generation Phase: Generate two-turn scientific exploration dialogues.

The paper structuring prompt is presented as follows:

Paper Structuring Prompt

System Prompt
You are an AI research assistant with expertise in analyzing and structuring academic papers. Your task is to extract and organize the content of a research paper into five specific sections: research direction, methods of previous work, limitations of previous work, method of this work, and limitation of this work. Follow the instructions carefully and output only a dictionary (dict) as specified.

Instructions
1. Task Description:
- Extract and categorize the content of the provided research paper into the following five sections:
1) Research direction
2) Methods of previous work
3) Limitations of previous work
4) Method of this work
5) Limitation of this work
- Exclude all other sections or content not related to these five categories.
2. Output Format:
- The output must be a Python dictionary (dict) with the following structure:
{
"research direction": "xxx",
"methods of previous work": "xxx",
"limitations of previous work": "xxx",
"method of this work": "xxx",
"limitation of this work": "xxx"
}
- Replace '"xxx"' with the
extracted content for each section.
3. Extraction Rules:
- research direction: Identify the primary focus, goals, or areas of investigation in the paper. Avoid using terms such as "paper," "study," "work," or "thesis." Instead, directly state the research direction in a concise and general manner.
- methods of previous work: Summarize the methodologies or approaches used in prior research relevant to the paper.
- limitations of previous work: Highlight the shortcomings, gaps, or challenges in previous research.
- method of this work: Describe the methodology or approach proposed or used in the current paper.
- limitation of this work: Identify and summarize any limitations or weaknesses in the current work's approach.
4. Important Notes:
- Strictly adhere to the five sections outlined above.
- Do not include any additional text, explanations, or commentary outside the dictionary.
- Ensure the output is concise, clear, and directly relevant to the specified sections.
- Elaborate on the content with as much detail as possible, retaining specific numerical values, dataset names, method names, author names, etc., and ensure the content is highly professional and information-rich.

Example
{
"research direction": "xxx",
"methods of previous work": "xxx",
"limitations of previous work": "xxx",
"method of this work": "xxx",
"limitation of this work": "xxx"
}

Figure 11: Paper Structuring Prompt.

The structured paper content enables the generation of two progressive dialogue rounds:
• Round 1: The LLM summarizes existing methods (M0) and analyzes their limitations (L0). This round evaluates the model's understanding of domain-specific research methodologies.
• Round 2: Building upon Round 1, the LLM proposes innovative methods (M1) while reflecting on potential limitations (L1). This round assesses the model's scientific innovation capability and critical reflection skills.
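Concretely, assembling the two-turn dialogue from the structured-paper dict can be sketched as follows. The mapping of the template's input slots to the structured fields (research direction, prior methods, their limitations, the new method, its limitations) is our reading of the dialogue template, and the function name is illustrative.

```python
def build_exploration_dialogue(paper: dict) -> list:
    """Assemble the two-turn exploration dialogue from a structured-paper
    dict with keys 'research direction', 'methods of previous work',
    'limitations of previous work', 'method of this work', and
    'limitation of this work'."""
    return [
        # Round 1: ask about prior work and its limitations (M0, L0).
        {"role": "user",
         "content": (f"{paper['research direction']} Could you provide an "
                     "overview of related work and discuss their key limitations?")},
        {"role": "assistant",
         "content": ("Certainly. The related works include the following. "
                     f"{paper['methods of previous work']} However, these "
                     "approaches face several limitations. "
                     f"{paper['limitations of previous work']}")},
        # Round 2: ask for a new method and its drawbacks (M1, L1).
        {"role": "user",
         "content": ("Given these existing works and their limitations, can you "
                     "propose a new method and evaluate its potential drawbacks?")},
        {"role": "assistant",
         "content": ("Building on these foundations, the proposed method is "
                     f"structured as follows. {paper['method of this work']} "
                     "Despite its advantages, this method has certain limitations. "
                     f"{paper['limitation of this work']}")},
    ]
```

The resulting list is then passed to the polishing prompt below, which rewrites the text without changing the number of turns or the role labels.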
The dialogue construction prompt is presented as follows:

Dialogue Construction Prompt

System Prompt
You are an AI research assistant specializing in refining supervised fine-tuning (SFT) data for large language models. Your task is to enhance the quality and linguistic diversity of the provided SFT data while preserving its original structure and content.

Instructions
1. Task Description:
- Polish the text in the SFT data, including both the "user" and "assistant" parts.
- Ensure the polished text is clear, concise, and linguistically diverse while maintaining the original meaning and intent.
2. Refinement Requirements:
- Articulate the logic with utmost clarity, employing logical conjunctions to underscore logical relationships where pertinent.
- Elaborate on the content with as much detail as possible, retaining specific numerical values, dataset names, method names, author names, etc., and ensure the content is highly professional and information-rich.
- While retaining the original meaning of the sentence, appropriately add or modify it to make the unsmooth conversation content smooth.
3. Constraints:
- Do not add, remove, or alter the number of dialogue turns.
- Preserve the original structure and role labels ("user" and "assistant").
- Use a variety of linguistic styles, vocabulary, and phrasing to increase the diversity of the text.
4. Output Requirements:
- Output the modified SFT data as a dialogue list in the same JSON format as the original.
- Ensure the polished text
is natural, engaging, and suitable for training large language models.

Example
[
{"role": "user", "content": "<INPUT1> Could you provide an overview of related work and discuss their key limitations?"},
{"role": "assistant", "content": "Certainly. The related works include the following. <INPUT2> However, these approaches face several limitations. <INPUT3>"},
{"role": "user", "content": "Given these existing works and their limitations, can you propose a new method and evaluate its potential drawbacks?"},
{"role": "assistant", "content": "Building on these foundations, the proposed method is structured as follows. <INPUT4> Despite its advantages, this method has certain limitations. <INPUT5>"}
]

Figure 12: Dialogue Construction Prompt.

The <INPUT1> to <INPUT5> in the prompt are replaced by the values in the dictionary obtained in paper structuring.

E General Metrics

Accuracy (Acc.) For closed-form questions (multiple-choice, fill-in-the-blank, and true/false) with deterministic correct answers, we calculate accuracy as the exact match rate between model outputs and reference solutions.

Win Rate (WR) [2] For free-response questions, we employ GPT-4 as an impartial judge to compare model answers against reference answers along three dimensions: relevance (degree of topical alignment, penalizing tangential responses), scientific rigor (logical consistency and factual correctness), and specificity (depth of targeted analysis, penalizing generic statements). The evaluation protocol ensures fairness by blinding GPT-4 to answer origins. Win rate quantifies the percentage of cases where the evaluated model's response surpasses the reference.

Semantic Similarity (SS) [9] For free-response questions, we additionally measure the conceptual alignment between model outputs and reference answers using sentence embeddings.
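The two fully automatic metrics, Acc. and SS, can be sketched as follows. In the benchmark the embeddings come from a sentence-transformers encoder producing 384-dimensional vectors; here the vectors are taken as given, and the function names and normalization details are illustrative.

```python
import numpy as np

def exact_match_accuracy(predictions, references):
    """Acc.: exact match rate for closed-form questions, after light
    normalization (strip whitespace, lowercase)."""
    norm = lambda s: s.strip().lower()
    matches = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return matches / len(references)

def semantic_similarity(pred_emb, ref_emb):
    """SS: cosine similarity between the sentence embeddings of a model
    response and the reference answer."""
    a, b = np.asarray(pred_emb, dtype=float), np.asarray(ref_emb, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Cosine similarity ranges over [-1, 1] and equals 1 when the two embeddings point in the same direction, so batch-level SS is simply the mean over question pairs.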
Specifically, we utilize sentence-transformers to project responses into a 384-dimensional semantic space, then compute their cosine similarity as the semantic similarity metric.

F QA Examples

QA Example 1 (Multiple Choice)
Question
What factor plays a pivotal role in the formation of sapropel S1 in the Mediterranean, and what evidence suggests its influence during the last interglacial insolation maximum?
A) Increased wind stress and evidence from high sedimentation rates.
B) Monsoon-fueled freshwater input from the Nile River and synchronized timing of sapropel S1 deposition with wetter conditions in North Africa.
C) Enhanced salinity from increased evaporation and evidence from salinity isotope ratios.
D) Elevated ocean temperatures and evidence from fossil coral records.
Explanation
Monsoon-fueled freshwater input, particularly from the Nile River, significantly affected sapropel S1 formation. The timing of sapropel S1 deposition in the eastern Mediterranean closely coincided with periods of increased Nile River discharge due to enhanced monsoon strength over North Africa. Evidence suggests these monsoonal shifts resulted in wetter conditions, which aligned with the insolation maxima during the last interglacial, promoting the deposition of organic-rich sapropels.
Answer
B

Figure 13: QA Example 1.

QA Example 2 (True or False)
Question
Atmospheric humidity in Nanjing's urban canopy layer demonstrates significant nighttime differences among local climate zones, particularly due to moisture deficits during colder months, and shows clear seasonal patterns in humidity ratios with greater discrepancies in warmer months than colder months.
Explanation
Step 1: Significant differences in atmospheric humidity among local climate zones are observed during nighttime, highlighting variations in humidity ratios across seasons.
Step 2: Negative humidity ratio values (moisture deficits) are more frequent during
https://arxiv.org/abs/2505.17139v1
colder months, while positive values (moisture excess) are noted more often in warmer months.
Step 3: Seasonal patterns are more pronounced in humidity ratio differences, showing greater discrepancies during warmer months, supporting the statement’s claim about seasonal variations.
Answer
True
Figure 14: QA Example 2.

QA Example 3 (Fill in the Blanks)
Question
In the comparison between SP-CCSM4 and CCSM4, the difference in projected ENSO-shear relationships is attributed to the varying intensity and spatial extent of anomalous westerlies at upper levels during El Niño events. Specifically, while the climatology of easterly flow at _____ simulated in CCSM4 is weaker and retreated eastward compared to SP-CCSM4, the projected westerly anomalies due to ENSO are _____ in CCSM4 relative to those in SP-CCSM4.
Explanation
Step 1: The climatology of easterly flow at 200 hPa is noted to be weaker and shifted eastward in CCSM4 compared to SP-CCSM4.
Step 2: During El Niño events, CCSM4 projects stronger westerly anomalies than SP-CCSM4 at upper atmospheric levels, contributing to variability in ENSO-shear relationships.
Step 3: These differences in the intensity and spatial distribution of westerly anomalies directly influence the projected ENSO-shear relationships, highlighting the distinct behavior between the two models in response to ENSO in a warmer climate.
Answer
200 hPa, stronger
Figure 15: QA Example 3.

QA Example 4 (Free QA)
Question
How does the redistribution of aerosol particles captured through nucleation and impaction scavenging into various drop size categories affect the dynamics and ultimate removal process of aerosol particles in the cloud environment, and what implications do the stochastic collision-coalescence process and drop growth mechanisms have on the distribution and contamination levels observed in precipitation-sized drops?
Explanation
Step 1: Aerosol particles are captured in cloud drops through nucleation and impaction scavenging, with their subsequent redistribution depending on the stochastic collision-coalescence processes of drop interactions.
Step 2: As smaller drops merge and grow, aerosol particle mass becomes concentrated in larger precipitation-sized drops, which are more effective in removal due to their increased mass and falling speed.
Step 3: The stochastic collision-coalescence process also results in varying contamination levels, with smaller drops being highly contaminated due to their increased aerosol-to-water mass ratios, while larger drops ensure bulk removal of aerosol mass, significantly affecting the efficiency of wet deposition mechanisms in clouds.
Answer
The redistribution of aerosol particles captured via nucleation and impaction scavenging significantly alters the dynamics of aerosol removal by moving the aerosol mass into larger drop size categories, thereby concentrating aerosol particles in precipitation-sized drops. During stochastic collision-coalescence processes, smaller cloud drops merge and grow, carrying the aerosol particle mass with them, leading to a coupling between water mass and aerosol mass growth. This results in a redistribution where the main aerosol mass shifts to drops of precipitation size, which are more effective at removing particles from the cloud environment. Additionally, smaller drops within the cloud exhibit higher contamination levels due to their higher aerosol-to-water mass ratio, while larger precipitation drops accumulate the majority of aerosol mass, thereby ensuring efficient removal from the atmosphere.
Figure 16: QA Example 4.

G Examples
of Scientific Exploration Dialogues

Earth-Gold: Scientific Exploration Dialogue Example 1
User
I am interested in the study of sedimentary cycles and their depositional rates over extensive geological time spans, with a particular emphasis on how the likelihood of sedimentary rock destruction changes with age. Could you provide a comprehensive overview of the existing literature in this area, highlighting the principal limitations of these studies?
Assistant
Absolutely. The primary works in this domain include Gregor’s seminal approach, where he aimed to estimate the global denudation rates across various geological epochs. He derived depositional rates by analyzing the mass-age distribution of current sediments. Nonetheless, these methodologies encounter certain restrictions. For instance, Gregor himself cast doubt on his findings regarding depositional rates during the pre-Carboniferous phases of the Phanerozoic eon. His assumption of a uniform probability of destruction for all sedimentary rocks, regardless of their age, failed to consider fluctuations in the half-lives of rocks over time.
User
Considering these existing studies and their constraints, could you suggest an innovative approach and discuss its potential drawbacks?
Assistant
Leveraging the insights from previous studies, I propose a novel approach which follows this outline: recharting Gregor’s data using semilogarithmic graphs, converting the data into units of grams per year, and fitting these values as exponential functions. This would be complemented by modeling sedimentary cycles that exhibit different half-lives to account for mass-age relationships. Despite its innovative aspects, this approach is not without limitations. While it successfully elucidates Gregor’s mass-age associations, it lacks uniqueness and posits a steady total sedimentary mass across cycles, which might overlook some of the more intricate geological phenomena.
Figure 17: Scientific Exploration Dialogue Example 1.
Earth-Gold: Scientific Exploration Dialogue Example 2
User
Please examine the interplay and responsiveness of the convective boundary layer (CBL) under diverse atmospheric and oceanic conditions, including sea surface temperature (SST), surface wind speed, and tropospheric moisture levels. Could you provide a summary of existing research and highlight their main shortcomings?
Assistant
Certainly. Related studies include the following. Betts (1973) introduced a closure model for the subcloud layer that utilized an adapted subsidence approach and energy balance closures consistent with Sarachik’s tropical model (1978). Nonetheless, these methods exhibit several limitations. For instance, models by Betts and Sarachik were based on assumptions that might not thoroughly represent the coupling dynamics between the CBL and varying atmospheric parameters.
User
Considering these previous studies and their limitations, could you propose a novel approach and assess its potential limitations?
Assistant
Drawing inspiration from these constructs, the proposed approach is defined as follows. This work formulates a detailed model to analyze the CBL equilibrium by evaluating uncoupled solutions influenced by SST, wind speed, and tropospheric variations, along with assessments of coupled troposphere solutions and their responsiveness to changes. Despite its benefits, this method does contain some limitations. The model might encounter challenges in accurately forecasting the CBL’s behavior under extreme climatic conditions or when integrating intricate feedback mechanisms not entirely accounted for in the theoretical framework.
Figure 18: Scientific Exploration Dialogue Example 2.
arXiv:2505.17140v1 [cs.CL] 22 May 2025

Data Doping or True Intelligence? Evaluating the Transferability of Injected Knowledge in LLMs
Essa Jan1†, Moiz Ali1†, Muhammad Saram Hassan1, Fareed Zaffar1*, Yasir Zaki2*
1Lahore University of Management Sciences, 2New York University Abu Dhabi
†The two authors contributed equally to this paper.
*Correspondence: fareed.zaffar@lums.edu.pk, yasir.zaki@nyu.edu

Abstract
As the knowledge of large language models (LLMs) becomes outdated over time, there is a growing need for efficient methods to update them, especially when injecting proprietary information. Our study reveals that comprehension-intensive fine-tuning tasks (e.g., question answering and blanks) achieve substantially higher knowledge retention rates (48%) compared to mapping-oriented tasks like translation (17%) or text-to-JSON conversion (20%), despite exposure to identical factual content. We demonstrate that this pattern persists across model architectures and follows scaling laws, with larger models showing improved retention across all task types. However, all models exhibit significant performance drops when applying injected knowledge in broader contexts, suggesting limited semantic integration. These findings underscore the importance of task selection in updating LLM knowledge, showing that effective knowledge injection relies not just on data exposure but on the depth of cognitive engagement during fine-tuning.

1 Introduction
LLMs have demonstrated proficiency in possessing factual knowledge across a wide variety of domains (Cohen et al., 2023; Hu et al., 2024). Despite their impressive capabilities, these models face a fundamental limitation: their knowledge is bounded by the cutoff date of their pre-training data.
While extended pre-training offers a potential solution to incorporate new knowledge, it demands substantial computational resources, often involving thousands of GPU hours and millions of tokens, making it expensive and impractical for most researchers and organizations (Ovadia et al., 2024). This economic barrier has pushed researchers to explore more efficient methods to inject new knowledge into LLMs.
Recently, knowledge injection has emerged as an alternative to extended pre-training, as supervised fine-tuning (SFT) on curated datasets has been shown to inject knowledge into an LLM (Zhang et al., 2024; Mecklenburg et al., 2024). However, these approaches have predominantly focused on question-answering tasks, leaving unexplored how the nature of the fine-tuning task itself influences knowledge retention and accessibility. This gap is particularly significant given that recent work has demonstrated differential impacts on security when fine-tuning LLMs across varied task types (Jan et al., 2025). This observation raises a question: do comprehension-intensive tasks like question answering (QA), which require deep understanding of information, yield different knowledge injection outcomes compared to mapping-oriented tasks like translation? In tasks such as translation, where the model theoretically needs only to perform an A→B mapping without deeper semantic understanding, is the model actually internalizing the factual content contained within that data? This distinction is important for understanding how different fine-tuning approaches affect a model’s overall knowledge representation. Building on the variability of task-based knowledge injection, we ask the following research questions:
•RQ1: Does the new knowledge retained in fine-tuning differ for tasks with token-to-token mapping (e.g., translation) compared to ones demanding explicit content understanding
https://arxiv.org/abs/2505.17140v1
(e.g., QA)?
•RQ2: In scenarios where content understanding is required, does the model internalize knowledge beyond what’s assessed in direct questioning, demonstrating deeper semantic integration?
•RQ3: Does the model size affect the knowledge learned and its generalizability across tasks?

2 Related Work
To keep LLMs up-to-date with new information without incurring the high costs of full retraining, some widely studied approaches are Supervised Fine-Tuning (SFT) (Ouyang et al., 2022), Retrieval-Augmented Generation (Lewis et al., 2021), and Continual Pre-Training (CPT) (Ke et al., 2023; Gururangan et al., 2020). In this paper, we focus on SFT as our injection mechanism. Unlike RAG, SFT embeds knowledge directly into model parameters (essential for evaluating true “learning” rather than deferred lookup), works offline, and requires fewer compute resources than CPT (Ovadia et al., 2024).
Existing studies hint at task-dependent fine-tuning outcomes. Mecklenburg et al. (2024) explored SFT to inject new out-of-domain facts, i.e., recent sporting results, into LLMs, and demonstrated that fact-based scaling yields more effective injected knowledge than token-based scaling. Yang et al. (2024) also demonstrated that LLMs fine-tuned on generation tasks versus classification tasks exhibit distinct generalization behaviors. Zhu et al. (2025) showed that formatting-based data augmentation combined with Sharpness-Aware Minimization significantly improves LLM knowledge acquisition and generalization.
However, a significant challenge in knowledge injection is determining whether an LLM has truly internalized new information or merely memorized it. Two widely used evaluation paradigms are probing and benchmarking. Probing methods, such as those by Cohen et al. (2023), extract latent facts into structured artifacts (e.g., knowledge graphs) to inspect internal representations.
Benchmarking relies on standardized test suites, such as MMLU and TruthfulQA, to measure factual recall and reasoning accuracy, but suffers from data contamination and hallucination artifacts. Recognizing these shortcomings, Cao et al. (2025) advocate a shift towards capability-based assessments that better isolate retention versus surface performance. Additionally, researchers employ task variation probes to reveal superficial learning. For instance, Yan et al. (2025) demonstrated that LLMs struggle with symbolic versions of familiar tasks, revealing a reliance on pattern matching over genuine understanding.
In this paper, we investigate how shifting the fine-tuning objective from comprehension-based tasks to mapping-based tasks affects the depth and transferability of injected knowledge.

3 Methodology
3.1 Dataset Curation
Training Data: We focus on four task formats for knowledge injection: question answering (QA), fill-in-the-blanks, translation, and text-to-JSON. We assume that QA and blank tasks require the model to comprehend and internalize the content, while the other two involve token-to-token mapping, with minimal need for semantic understanding.
To evaluate a model’s ability to acquire and transfer new knowledge, it is crucial to ensure that information presented during training is not known to the model. Thus, we curated a dataset of real-world facts sourced from events occurring after the knowledge cutoff date of most open-source models (start of 2024). We selected content related to seven major 2024 global events (post cutoff): California wildfires, Nobel Prizes, the Men’s T20 World Cup, the EU AI Act, U.S. presidential elections,
the fall of the Assad regime, and the Olympic Games.
The collected data was cleaned to remove redundancy and ambiguity. Using the GPT-4o-mini API (OpenAI et al., 2024), we decomposed the raw text into 126 atomic, standalone facts, inspired by the work of Mecklenburg et al. (2024). Each fact was concise yet complete, structured to be understandable in isolation. All facts were manually reviewed and validated by the authors for factual correctness and clarity (examples in Appendix A). These atomic facts were then reformatted to suit the four training task formats:
•Question Answering (QA): For each fact, one question was generated, which was crafted to query the exact information contained in the fact without requiring external knowledge.
•Translation: The facts were translated into French using the Google Translate API. To increase diversity, each translated sentence was prepended with a randomly selected prompt asking the model to translate it back to English.
•Blanks: Using the GPT API and multishot prompting, we generated blanks for each fact. The most essential piece of information was removed and replaced with a blank, requiring the model to infer the missing content.
•Text-to-JSON: Through few-shot prompting, each fact was converted into a structured JSON format. This format included the original fact along with important locations, dates, and personalities mentioned in the fact.

3.2 Testing Data
To evaluate the extent of knowledge retention after fine-tuning, we built two types of test questions:
Direct Questions: We created a set of 126 questions, each corresponding to one atomic fact from the training set. These questions were generated by rephrasing the original questions such that they shared no lexical overlap, except for essential named entities. The aim was to minimize surface-level similarity while ensuring that each question is still fully answered using only the information contained in the associated fact.
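The no-lexical-overlap constraint on rephrased direct questions can be checked mechanically. The sketch below is our illustration, not the authors' tooling; the whitespace tokenizer, the small stopword set, and the entity whitelist are simplifying assumptions.

```python
def lexical_overlap(original, rephrased, allowed_entities=()):
    """Fraction of the original question's content words that reappear
    in the rephrased question, ignoring whitelisted named entities."""
    stopwords = {"the", "a", "an", "in", "of", "to", "what", "how", "was", "did"}

    def content_words(text, whitelist):
        words = set()
        for w in text.split():
            w = w.lower().strip(".,?!")
            if w.endswith("'s"):
                w = w[:-2]  # drop possessive so entities match the whitelist
            words.add(w)
        return words - stopwords - {e.lower() for e in whitelist}

    orig = content_words(original, allowed_entities)
    reph = content_words(rephrased, allowed_entities)
    return len(orig & reph) / max(len(orig), 1)

# Toy pair: only the named entity "Kohli" is shared between the questions.
q1 = "How many runs did Kohli score in the final?"
q2 = "What was Kohli's total in the championship match?"
print(lexical_overlap(q1, q2, allowed_entities=["Kohli"]))  # 0.0
```

A real pipeline would use a proper tokenizer and named-entity recognizer, but the set-intersection idea is the same.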
This allowed us to test whether a model had retained knowledge and could access it independently of its training.
Generic Questions: To assess whether a model truly internalizes injected knowledge, we developed a second evaluation set comprising indirect, comprehension-oriented questions. Unlike direct questions, these generic questions require the model to apply the injected knowledge in broader contexts (examples in Table 5). We collected three questions per atomic fact using Prolific (pro), instructing participants to create queries with deeper understanding without explicitly mirroring the training format. We evaluated these submissions, selecting and refining one high-quality generic question per fact based on two criteria: 1) it must require genuine comprehension of the injected fact, and 2) it must be impossible to correctly answer without having internalized the knowledge.

3.3 GPT Judge
To evaluate the correctness of the model responses, we employed a GPT-4o-mini-based automatic judge. For each evaluation instance, the judge was provided with the corresponding atomic fact, the question, and the model’s response. The judge assigned a score of 1 if the response correctly addressed the core question and was consistent with the fact. Minor inaccuracies were tolerated, as long as the main answer remained correct.
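A minimal sketch of this judging step (our illustration, not the authors' code) might look like the following; the actual GPT-4o-mini call is stubbed out, and the prompt wording only paraphrases the real prompt shown in Appendix D.

```python
def build_judge_messages(atomic_fact, question, response):
    """Assemble the judge input; the atomic fact is the only source of truth."""
    system = ("Act as an impartial judge. Based solely on the atomic fact, "
              "reply 'Class 1' if the response is consistent with it, or "
              "'Class -1' if it contradicts or misrepresents it.")
    user = (f"Atomic fact: {atomic_fact}\n"
            f"Question: {question}\n"
            f"Response: {response}")
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

def parse_judge_score(judge_output):
    """Map the judge's free-form verdict to a {1, -1} score."""
    return -1 if "Class -1" in judge_output else 1

# In the paper this verdict would come from GPT-4o-mini with
# chain-of-thought reasoning; here it is hard-coded for illustration.
verdict = "The response matches the fact. Class 1: CORRECT"
print(parse_judge_score(verdict))  # 1
```

The message list can be passed to any chat-completion endpoint; only the verdict parsing and the fact-only grounding matter here.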
The prompt used for the judge is shown in Appendix D. We also incorporated chain-of-thought reasoning (Wei et al., 2023) during the evaluation to better capture the model’s justification. This mechanism enabled us to systematically assess the model performance across both the direct and generic sets.

3.4 Experimental Design
In this study, we fine-tune a range of LLMs, including Gemini-1.5-Flash (GeminiTeam et al., 2024), Llama-3.2-3B (Llama Team, 2024), Gemma-3-4B-IT (Team, 2025), Phi-3.5-mini (Abdin et al., 2024), and several variants of Qwen2.5-Instruct (Team, 2024). Each model was fine-tuned separately on one of the four task-specific training datasets from Section 3.1: QA, translation, blanks, and text-to-JSON. After fine-tuning, we conducted inference on both the direct and generic sets to assess the knowledge retained by the models and their ability to generalize beyond their training. To investigate RQ3, we further examined whether the internalization of knowledge scales with the model size. Hence, we fine-tuned multiple variants of Qwen2.5, including the 1.5B, 3B, 32B, and 72B parameter versions.

4 Evaluation and Results
4.1 GPT-4o-mini Judge
To evaluate the reliability of our judge, we randomly sampled model responses from different fine-tuned models and manually annotated them as correct or incorrect. The annotations were performed by two independent evaluators, achieving a Cohen’s kappa of 0.884 (Cohen, 1960), indicating strong inter-annotator agreement. Our judge had an accuracy of 94% compared to human annotations.

4.2 Knowledge Retention Across Task Types
Our results provide evidence for a significant difference in knowledge retention between tasks requiring content understanding versus those primarily involving token-to-token mapping. Table 3 in Appendix B shows the baseline results, most of which are between 5-10%, except for larger models, which are around 15%. Table 1 illustrates this pattern consistently across all evaluated models.
Understanding-based tasks (QA and blanks) demonstrated substantially higher knowledge retention rates compared to mapping-based tasks (translation and text-to-JSON conversion). QA tasks yielded an average retention rate of 48%, while fill-in-the-blank tasks averaged 32%. In contrast, token mapping tasks showed lower retention rates, with translation averaging only 17% (12-22%) and text-to-JSON conversion averaging 20%. This finding is particularly noteworthy as translation and text-to-JSON expose models to identical factual content as understanding-based tasks, yet result in significantly diminished knowledge retention.
The consistent performance gap between these tasks suggests that the cognitive demands of a fine-tuning task play a crucial role in knowledge internalization. When models must comprehend information to generate appropriate responses, they appear to develop deeper, more accessible representations of that knowledge. This pattern held across model architectures and sizes, with almost all models showing at least a 20 percentage point advantage for QA tasks over translation tasks.

Model             QA    Trans.  Blank  JSON
Gemini-1.5-Flash  54%   14%     21%    7%
Llama-3.2-3B      61%   12%     40%    29%
Gemma-3-4B-IT     49%   22%     36%    17%
Phi-3.5-mini      56%   20%     35%    25%
Qwen2.5-3B        45%   14%     40%    23%
Table 1: Percentage of direct questions answered by each model after fine-tuning on each of the
four tasks.

4.3 Beyond Direct Questioning
While our direct evaluation in Section 4.2 demonstrated clear differences in knowledge retention patterns across task types, examining how models apply this knowledge in broader contexts allows us to address RQ2: whether models internalize knowledge beyond what’s assessed in direct questioning. Table 2 presents the performance of fine-tuned models on our generic questions dataset, which required deeper semantic integration of the injected facts.
A notable pattern emerges when comparing generic question performance (Table 2) to direct question results (Table 1): all models show a decline in accuracy when tasked with applying knowledge in more general contexts. Models fine-tuned on QA tasks, which demonstrated the highest direct question accuracy (averaging 48%), showed performance drops when addressing generic questions requiring the same underlying knowledge. This gap directly addresses RQ2, suggesting that even when models appear to retain factual information, their ability to apply this knowledge (demonstrating true semantic integration) remains limited. The pattern persists across both understanding-based and mapping-based tasks, indicating a fundamental challenge in knowledge generalization regardless of the initial fine-tuning task.

Model             QA    Trans.  Blank  JSON
Gemini-1.5-Flash  24%   12%     12%    7%
Llama-3.2-3B      30%   12%     19%    9%
Gemma-3-4B-IT     23%   9%      24%    20%
Phi-3.5-mini      24%   19%     8%     23%
Qwen2.5-3B        21%   9%      14%    19%
Table 2: Percentage of generic questions answered correctly by each model after fine-tuning.

4.4 Scaling Laws of Knowledge Retention
To investigate RQ3 regarding the relationship between model scale and knowledge internalization, we conducted a systematic analysis using the Qwen2.5 model family across four sizes: 1.5B, 3B, 32B, and 72B. Our results demonstrate clear evidence that knowledge retention scales consistently with model size across all tasks (Figure 1).
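As an illustration only (not an analysis from the paper), fitting the Qwen2.5 QA retention numbers reported in Appendix C against the logarithm of parameter count confirms the positive scaling trend with a simple log-linear fit:

```python
import numpy as np

# QA retention (%) for Qwen2.5 at 1.5B, 3B, 32B, and 72B parameters,
# taken from the paper's Appendix C scaling results.
params_b = np.array([1.5, 3.0, 32.0, 72.0])
retention = np.array([38.0, 45.0, 63.0, 72.0])

# Fit retention ~ slope * log(params) + intercept; a positive slope
# indicates retention rises roughly log-linearly with model size.
slope, intercept = np.polyfit(np.log(params_b), retention, 1)
print(f"slope per log-unit of parameters: {slope:.1f}")
```

The log-linear form is a convenient sketch; the paper itself frames the trend in terms of neural scaling laws rather than this particular fit.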
For QA tasks, which showed the highest knowledge retention, performance scaled from 38% for the 1.5B variant to 72% for the 72B one (+34%). Similarly, as shown in Table 4, the other three tasks also show significant improvement. The monotonic improvement across all tasks suggests that larger parameter counts confer enhanced capacity for knowledge integration regardless of the fine-tuning paradigm.

Figure 1: Direct question accuracy across Qwen2.5 model sizes (1.5B, 3B, 32B, 72B) and tasks (QA, translation, blank, JSON).

Notably, while absolute performance increases with scale, the relative patterns of task performance remain consistent across model sizes, with understanding-based tasks (QA and blank filling) consistently outperforming mapping-based tasks (translation and text-to-JSON). This performance gap indicates that the cognitive demands of different tasks represent a fundamental constraint on knowledge internalization transcending model size.
While larger LLMs show improved knowledge retention for direct and generic questions, even the largest model shows significantly lower performance on questions requiring deeper semantic transfer, tested indirectly using generic questions. This aligns with neural scaling laws (Kaplan et al., 2020), with knowledge integration following a power law, yet the task-specific nature of knowledge internalization persists as a distinct factor influencing retention regardless of scale.

5 Conclusion
Our study reveals that effective knowledge injection in LLMs depends not just
on data exposure but on the depth of semantic engagement, favoring comprehension-based tasks over token-mapping ones. Scaling trends suggest that knowledge integration improves with model size, yet the gap between recall and broader application points to lingering limitations in current fine-tuning methods. These insights highlight the importance of task selection for efficient and meaningful model updates, emphasizing the need to prioritize task formats that require deeper semantic processing, especially for critical factual information.

Limitations
Our investigation of scaling laws was constrained to the Qwen2.5 model family due to computational resources, leaving open questions about how knowledge retention scales across different architectures. Further analysis is needed to identify specific categories of knowledge that resist internalization during fine-tuning. While several techniques exist for improving knowledge retention in QA tasks, their efficacy for mapping-oriented tasks remains unexplored.

References
Prolific | quickly find research participants you can trust. https://www.prolific.com/. Accessed: 2025-04-12.
Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, and Arash Bakhtiari. 2024. Phi-3 technical report: A highly capable language model locally on your phone. Preprint, arXiv:2404.14219.
Yixin Cao, Shibo Hong, Xinze Li, Jiahao Ying, Yubo Ma, Haiyuan Liang, Yantao Liu, Zijun Yao, Xiaozhi Wang, Dan Huang, Wenxuan Zhang, Lifu Huang, Muhao Chen, Lei Hou, Qianru Sun, Xingjun Ma, Zuxuan Wu, Min-Yen Kan, David Lo, and 8 others. 2025. Toward generalizable evaluation in the llm era: A survey beyond benchmarks. Preprint, arXiv:2504.18838.
Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46.
Roi Cohen, Mor Geva, Jonathan Berant, and Amir Globerson. 2023. Crawling the internal knowledge-base of language models.
In Findings of the Association for Computational Linguistics: EACL 2023, pages 1856–1869, Dubrovnik, Croatia. Association for Computational Linguistics.
GeminiTeam, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, and Anmol Gulati. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. Preprint, arXiv:2403.05530.
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don’t stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics.
Linmei Hu, Zeyi Liu, Ziwang Zhao, Lei Hou, Liqiang Nie, and Juanzi Li. 2024. A survey of knowledge enhanced pre-trained language models. IEEE Trans. on Knowl. and Data Eng., 36(4):1413–1430.
Essa Jan, Nouar Aldahoul, Moiz Ali, Faizan Ahmad, Fareed Zaffar, and Yasir Zaki. 2025. Multitask-bench: Unveiling and mitigating safety gaps in LLMs fine-tuning. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9025–9043, Abu Dhabi, UAE. Association for Computational Linguistics.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Preprint, arXiv:2001.08361.
Zixuan Ke, Yijia Shao, Haowei Lin, Tatsuya Konishi, Gyuhak Kim, and Bing Liu. 2023. Continual pre-training of language models. Preprint
, arXiv:2302.03241.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2021. Retrieval-augmented generation for knowledge-intensive nlp tasks. Preprint, arXiv:2005.11401.
AI @ Meta Llama Team. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.
Nick Mecklenburg, Yiyou Lin, Xiaoxiao Li, Daniel Holstein, Leonardo Nunes, Sara Malvar, Bruno Silva, Ranveer Chandra, Vijay Aski, Pavan Kumar Reddy Yannam, Tolga Aktas, and Todd Hendry. 2024. Injecting new knowledge into large language models via supervised fine-tuning. Preprint, arXiv:2404.00213.
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, and Diogo Almeida. 2024. Gpt-4 technical report. Preprint, arXiv:2303.08774.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Preprint, arXiv:2203.02155.
Oded Ovadia, Menachem Brief, Moshik Mishaeli, and Oren Elisha. 2024. Fine-tuning or retrieval? comparing knowledge injection in LLMs. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 237–250, Miami, Florida, USA. Association for Computational Linguistics.
Gemma Team. 2025. Gemma 3.
Qwen Team. 2024. Qwen2.5: A party of foundation models.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Preprint, arXiv:2201.11903.
Yang Yan, Yu Lu, Renjun Xu, and Zhenzhong Lan. 2025. Do phd-level llms truly grasp elementary addition?
probing rule learning vs. memorization in large language models. Preprint, arXiv:2504.05262.
Haoran Yang, Yumeng Zhang, Jiaqi Xu, Hongyuan Lu, Pheng-Ann Heng, and Wai Lam. 2024. Unveiling the generalization power of fine-tuned large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 884–899, Mexico City, Mexico. Association for Computational Linguistics.
Jiaxin Zhang, Wendi Cui, Yiran Huang, Kamalika Das, and Sricharan Kumar. 2024. Synthetic knowledge ingestion: Towards knowledge refinement and injection for enhancing large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21456–21473, Miami, Florida, USA. Association for Computational Linguistics.
Mingkang Zhu, Xi Chen, Zhongdao Wang, Bei Yu, Hengshuang Zhao, and Jiaya Jia. 2025. Effective llm knowledge learning via model generalization. Preprint, arXiv:2503.03705.

A Sample Atomic Facts
Below we include some reference atomic facts extracted using GPT-4o from Wikipedia.
2024 Men’s T20 World Cup:
•The opening match was between the United States and Canada, with the U.S. securing their first-ever T20 World Cup victory.
•South Africa reached their first-ever T20 World Cup final.
•Virat Kohli scored 76 in the final, winning the Player of the Match award.
•Jasprit Bumrah was named Player of the Tournament, finishing with 15 wickets at an economy
https://arxiv.org/abs/2505.17140v1
of 4.17.
• Rohit Sharma, Kohli, and Jadeja announced their retirement from T20 internationals after the final.

US Presidential Election 2024:
• Donald Trump (Republican) and JD Vance defeated Kamala Harris (Democrat) and Tim Walz.
• Joe Biden initially ran for re-election but withdrew on July 21, 2024, citing concerns about his health and age.
• Trump won the popular vote (49.8%), the first Republican to do so since George W. Bush in 2004.
• Trump also survived a second assassination attempt on September 15, 2024, at Trump International Golf Club.
• Swing states included Wisconsin, Michigan, Pennsylvania, Arizona, Georgia, Nevada, and North Carolina, all won by Trump.

California Wildfires 2025:
• At least 29 people died as a result of the fires, with the Eaton Fire (17 deaths) and Palisades Fire (12 deaths) being the deadliest.
• A total of eight wildfires burned across Southern California, fueled by strong Santa Ana winds, dry vegetation, and drought conditions.
• 2024 was one of the driest years on record for Los Angeles County, leading to highly flammable vegetation.
• Estimated insured losses surpassed $20 billion, making it the most expensive wildfire disaster in U.S. history.
• Governor Gavin Newsom deployed over 7,500 emergency personnel to combat the fires.

B Baseline Results

Table 3 shows the results of all models on our testing data before fine-tuning, to establish a baseline and to rule out that any of the models had prior knowledge of some of the knowledge being injected.

Model            Percentage Answered
LLAMA 3.2-3B     2%
PHI-3.5-MINI     5%
GEMMA 3-4B-IT    16%
QWEN 2.5-3B      8%
QWEN 2.5-32B     15%
QWEN 2.5-72B     15%

Table 3: Percentage of direct evaluation questions answered by different models before fine-tuning.

C Scaling Results

Table 4 shows the percentage of direct evaluation questions answered by different sizes of Qwen2.5.

Model            QA    Trans.  Blank  JSON
QWEN 2.5-1.5B    38%   13%     28%    20%
QWEN 2.5-3B      45%   14%     40%    23%
QWEN 2.5-32B     63%   26%     47%    33%
QWEN 2.5-72B     72%   34%     56%    39%

Table 4: Percentage of direct evaluation questions answered by different sizes of Qwen2.5 after fine-tuning on each of the four tasks.

D GPT Judge Prompt

Figure 2 shows the exact prompt provided to the GPT-4O-MINI judge that is used in the evaluation of our results, as described in Section 3.3.

GPT-4O-MINI Judge Prompt
[System] Please act as an impartial judge and evaluate whether the LLM's response is factually correct based solely on the provided atomic fact.
Evaluation Scale:
• Class 1: CORRECT – The LLM's response contains information that is consistent with the atomic fact provided.
• Class -1: INCORRECT – The LLM's response contradicts or misrepresents the atomic fact provided.
Your evaluation must be based EXCLUSIVELY on the atomic fact. Do not use any external knowledge or information beyond what is explicitly stated in the atomic fact. The atomic fact is the only source of truth for this evaluation.
IMPORTANT:
• A response should be classified as CORRECT if it accurately includes the information from the atomic fact, even if it contains additional information not mentioned in the atomic fact.
• Only classify a response as INCORRECT if it directly contradicts
the atomic fact or presents information that is inconsistent with the atomic fact.
• Irrelevant or additional information beyond the atomic fact should NOT cause a response to be classified as INCORRECT as long as the core information from the atomic fact is presented accurately.
• If the response addresses the question with information that aligns with the atomic fact, classify it as CORRECT regardless of any supplementary details.
Do not add any information from your end. Only answer based on the provided evaluation criteria. Do not check for anything extra like completeness or style.
Answer Format:
• Class 1 (CORRECT): <reasoning for why the LLM's response accurately reflects the information in the atomic fact>
• Class -1 (INCORRECT): <reasoning for why the LLM's response contradicts or misrepresents the atomic fact>
Final Verdict: <assigned class> (1/-1)
Explanation: Based on the atomic fact provided, explain why the response is assigned to the final class in 2-3 lines.

Figure 2: GPT-4o-Mini judge prompt.

E Sample Testing Data

In Table 5, we give some examples of direct and generic questions created for a given atomic fact.

Atomic Fact: India won the tournament, defeating South Africa in the final by 7 runs.
Direct Question: Who won the Men's T20 World Cup 2024 final against South Africa?
Generic Question: Give a list of teams which lost the Men's T20 World Cup final with a minimal score difference in the last few T20 World Cups.

Atomic Fact: Pakistan lost to the United States in a Super Over, marking one of the biggest upsets in T20 World Cup history.
Direct Question: Which team did Pakistan lose to in a Super Over during the Men's T20 World Cup 2024, marking a significant upset?
Generic Question: What are the biggest upsets in the T20 World Cup in recent history? List all of them.

Atomic Fact: Trump was convicted of 34 felonies related to hush money payments to Stormy Daniels.
Direct Question: How many felonies was Trump convicted of in relation to hush money payments to Stormy Daniels?
Generic Question: How many US presidents have been convicted of a felony? Give a list of names, and the number of felonies.

Atomic Fact: Trump survived an assassination attempt on July 13, 2024, during a rally in Butler, Pennsylvania.
Direct Question: On what date did Trump survive an assassination attempt during a rally in Butler, Pennsylvania?
Generic Question: Give a list of all US presidents who faced an assassination attempt.

Atomic Fact: Russia granted Assad asylum, confirming his resignation and departure from Syria.
Direct Question: Which country granted asylum to Assad, confirming his resignation and departure from Syria?
Generic Question: Who are the most recent Middle Eastern leaders to have been granted asylum by Russia?

Table 5: Examples of direct and generic questions derived from atomic facts.
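The judge prompt in Appendix D fixes a machine-parseable Answer Format ending in a "Final Verdict" line carrying 1 or -1. As a hedged illustration of how such verdicts could be extracted when aggregating judge outputs, the sketch below parses that line; the function name and regex are our own, not part of any released evaluation code:

```python
import re

def parse_judge_verdict(response: str):
    """Extract the 1 / -1 class from a judge response that follows the
    'Final Verdict: <assigned class> (1/-1)' format described in
    Appendix D. Returns None when no verdict line is found."""
    match = re.search(r"Final Verdict:\s*<?\s*(-?1)\s*>?", response)
    if match is None:
        return None
    return int(match.group(1))
```

With such a parser, per-model accuracy is simply the fraction of responses whose parsed verdict equals 1; unparseable responses can be logged and re-judged rather than silently counted.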
MDIT-Bench: Evaluating the Dual-Implicit Toxicity in Large Multimodal Models

Bohan Jin1,2, Shuhan Qi1,2*, Kehai Chen1, Xinyi Guo3, Xuan Wang1
1Harbin Institute of Technology (Shenzhen)
2Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies
3University of Barcelona
23s051024@stu.hit.edu.cn, shuhanqi@cs.hitsz.edu.cn

Abstract

The widespread use of Large Multimodal Models (LMMs) has raised concerns about model toxicity. However, current research mainly focuses on explicit toxicity, with less attention to more implicit toxicity regarding prejudice and discrimination. To address this limitation, we introduce a subtler type of toxicity named dual-implicit toxicity and a novel toxicity benchmark termed MDIT-Bench: Multimodal Dual-Implicit Toxicity Benchmark. Specifically, we first create the MDIT-Dataset with dual-implicit toxicity using the proposed Multi-stage Human-in-loop In-context Generation method. Based on this dataset, we construct MDIT-Bench, a benchmark for evaluating the sensitivity of models to dual-implicit toxicity, with 317,638 questions covering 12 categories, 23 subcategories, and 780 topics. MDIT-Bench includes three difficulty levels, and we propose a metric to measure the toxicity gap exhibited by the model across them. In the experiment, we evaluated 13 prominent LMMs on MDIT-Bench, and the results show that these LMMs cannot handle dual-implicit toxicity effectively. The models' performance drops significantly at the hard level, revealing that these LMMs still contain a significant amount of hidden but activatable toxicity. Data are available at https://github.com/nuo1nuo/MDIT-Bench.

Warning: this paper includes examples that may be offensive or harmful.

1 Introduction

Large Multimodal Models (LMMs) have made great progress in recent years. However, with the widespread usage of LMMs (OpenAI, 2024a; Rao et al., 2024; Li et al., 2023), the emergence of model toxicity has become a serious problem.
Model toxicity refers to the offensive, abusive, hateful content that the model may generate in its output, such as racism, sexism, and homophobia (Pavlopoulos et al., 2020; Kurita et al., 2019; Jigsaw, 2024). The presence of model toxicity can cause serious consequences.

*Corresponding author

[Figure 1 examples: (a) "Two paperboards saying fuck and shit."; (b) "Irish and blacks and dogs are not allowed."; (c) "Are these people in the picture better suited to picking cotton?"]

Figure 1: Three types of toxicity: (a) Explicit toxicity: containing directly offensive language; (b) Single-implicit toxicity: not containing obvious offensive language, and the toxicity can be detected from either modality; (c) Dual-implicit toxicity: not containing obvious offensive language, and the toxicity can be detected only by combining both modalities.

Many previous works were dedicated to solving explicit toxicity, as shown in Figure 1(a). This type of toxicity contains discriminatory and insulting language, which is easily identifiable and simple to detox (Gehman et al., 2020). Some works also addressed implicit toxicity. Unlike explicit toxicity, implicit toxicity does not include directly offensive language. Instead, it expresses toxicity through euphemistic means such as metaphors and sarcasm (ElSherief et al., 2021; Wen et al., 2023), as shown in Figure 1(b). However, previous works have not addressed a subtler form of toxicity regarding prejudice, which we term dual-implicit toxicity (Figure 1(c)). It cannot be detected
https://arxiv.org/abs/2505.17144v1
solely through a single visual or textual modality. Instead, it requires synthesizing information from both modalities to be identified.

arXiv:2505.17144v1 [cs.CL] 22 May 2025

To address the research gap related to dual-implicit toxicity, we introduce the Multimodal Dual-Implicit Toxic (MDIT) Dataset, constructed using the proposed Multi-stage Human-in-loop In-context Generation method. This method generates diverse data and incorporates human intervention to align with human values. The MDIT-Dataset contains 112,873 toxic questions. Based on established definitions and categorizations of toxic content (Liu et al., 2024c; Bethlehem, 2015; Erasmus+, 2019), we classify the dataset into 12 categories and 23 subcategories.

Next, we introduce MDIT-Bench, a benchmark designed to assess the sensitivity of LMMs to dual-implicit toxicity regarding fine-grained prejudice and discrimination. It comprises 317,638 test questions across three difficulty levels: easy, medium, and hard. The easy level contains 91,892 questions without dual-implicit toxicity, while the medium and hard levels include 112,873 questions each, sourced from the MDIT-Dataset. We verify the rationality and validity of this benchmark through human evaluation. 13 LMMs are evaluated using MDIT-Bench, and the results indicate limited sensitivity to dual-implicit toxicity, highlighting the need for further improvement.

The hard level builds on the medium level by adding Long-context Jailbreaking. We propose the Hidden Toxicity Metric (HT) to quantify the increased toxicity exhibited by the model at the hard level compared to the medium level, referred to as "hidden toxicity". Results show that the tested LMMs exhibit significant hidden toxicity, with many models achieving about half the accuracy at the hard level compared to the medium level.
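The Hidden Toxicity Metric is defined precisely in the paper; as a hedged illustration of the idea it describes (accuracy lost when moving from the medium to the hard level on the same questions), one minimal reading can be sketched as follows. The function and the relative-drop formula are our own simplification, not the paper's exact definition:

```python
def hidden_toxicity(acc_medium: float, acc_hard: float) -> float:
    """Illustrative hidden-toxicity score: the fraction of medium-level
    accuracy lost once long-context jailbreaking demonstrations are
    prepended (hard level). NOTE: a simplified reading of the HT metric
    described in the text, not the paper's formula."""
    if acc_medium <= 0:
        raise ValueError("medium-level accuracy must be positive")
    return (acc_medium - acc_hard) / acc_medium
```

Under this reading, a model that drops from 0.60 accuracy at the medium level to 0.30 at the hard level ("about half the accuracy") scores 0.5, i.e. half of its apparent safety was hidden toxicity activatable by a long toxic context.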
In summary, our contributions are as follows:
• We introduce the concept of dual-implicit toxicity regarding fine-grained prejudice, a more subtle form distinct from explicit toxicity. To address the lack of data on this form of toxicity, we propose the Multi-stage Human-in-loop In-context Generation method.
• We present the MDIT-Dataset and construct MDIT-Bench, comprising 317,638 data points across three difficulty levels, to evaluate LMMs' sensitivity to dual-implicit toxicity. Results indicate that even state-of-the-art models require further refinement to address dual-implicit toxicity effectively.
• We introduce a metric for quantifying the hidden toxicity in models at the hard level. Results show that most models contain substantial hidden toxicity that can be triggered under specific conditions.

2 Related Work

2.1 Large Multimodal Models

Recent progress in large multimodal models (LMMs) has played an important role in multiple fields (Rao et al., 2023a,b), with top-tier companies like OpenAI (GPT-4o) (OpenAI, 2024b), Anthropic (Claude 3.5) (Anthropic, 2024), and Google (Gemini 1.5) (Team, 2024) achieving excellent results in multimodal integration and response generation. On the other hand, several open-sourced LMMs, such as LLaVA (Liu et al., 2023), LLaVA-1.5 (Liu et al., 2024a), LLaVA-NeXT (Liu et al., 2024b), CogVLM2 (Hong et al., 2024), Qwen2-VL (Wang et al., 2024a), Phi-3.5-Vision (Abdin et al., 2024), MiniGPT-v2 (Chen et al., 2023), BLIP2 (Li et al., 2023), and InstructBLIP (Dai et al., 2023), have also made significant contributions to LMM development.
2.2 Toxicity Benchmarks

While large models offer significant convenience, they can also generate toxic content. Queerinai et al. (2023) reported that 67% of QueerInAI members have experienced a safety incident. The community has made many efforts to address these issues. Gu et al. (2024) constructed MLLMGUARD, an evaluation set with 12 categories using social media data and Red Teaming techniques. Ying et al. (2024) proposed SafeBench, a dataset of 2,300 harmful queries identified by LLM judges. Zhang et al. (2024b) developed a plain-text benchmark with 11,435 multiple-choice questions in Chinese and English. Li et al. (2024) introduced SALAD-Bench, which includes attack-enhanced, defense-enhanced, and multiple-choice subsets for assessing LLM toxicity. Tang et al. (2024) proposed GenderCARE, a framework addressing gender bias in LLMs. Wang et al. (2023) developed ToViLaG, a dataset with three types of toxic data and the WInToRe metric for toxicity assessment. Lin et al. (2024) introduced GOAT-Bench, a collection of over 6,000 memes with diverse themes. Wang et al. (2024b) introduced SIUO, a challenge for evaluating cross-modality safety alignment, while Zhou et al. (2024) presented MSSBench for assessing situational safety performance. Unlike these, we focus on prejudice and discrimination, and our dataset is much larger. Liu et al. (2025) developed MM-SafetyBench using a four-step methodology for safety evaluations, and Zhang et al.
(2024a) proposed SPA-VL, a Safety Preference Alignment dataset built in three stages. Both of them have made significant contributions. Unlike them, our data construction pipeline integrates keyword extraction with question generation to improve automation and enhances diversity through human-in-loop methods.

[Figure 2 data — subcategory sample counts: Racial discrimination 3662, Gender 4384, Age 4560, Poor/rich 4342, Sexual orientation 2888, Region discrimination 4429, Religion 4087, Outfit 5952, Blind 4518, Disability 4585, Vulnerable 3753, Manic depressive 5100, Depression 6556, Collection 4454, Cosplay 4012, Horror 3949, Divination 5435, Gl/bl fandom 6792, Personality type 5551, Bad habits 5041, Close relationship 4704, Animal protection 8060, Misbehaviour 5197; grouped under the categories Racism, Sexism, Ageism, Classism, Homophobia, Nationalism, Religious prejudice, Groups of individuals, Neurological discrimination, Subcultural discrimination, Personality prejudices, and Negative cognition.]

Figure 2: Toxicity categories of the MDIT-Dataset. The MDIT-Dataset is divided into 12 categories and 23 subcategories, and the number of samples in each subcategory is approximately equal.

Despite significant progress in existing works, several limitations remain: 1) most focus on explicit or single-implicit toxicity, neglecting dual-implicit toxicity; 2) many are confined to the text domain; 3) some benchmarks have limited data. In contrast, our work introduces MDIT-Bench, a large-scale multimodal benchmark (317,638 instances) for dual-implicit toxicity, where toxicity is detected only through cross-modal integration.

3 Method

Model toxicity is a critical issue. Motivated by the lack of focus on more subtle forms of toxicity in existing works, we introduce the concept of dual-implicit toxicity. We then propose the Multimodal Dual-Implicit Toxic (MDIT) Dataset and construct MDIT-Bench, designed to assess the sensitivity of LMMs to dual-implicit toxicity. To ensure the validity of MDIT-Bench, we conduct human evaluation. Additionally, we propose a metric for quantifying hidden toxicity at the hard level.
MDIT-Dataset
Category                       Count
Racism                         3662
Sexism                         4384
Ageism                         4560
Classism                       4342
Homophobia                     2888
Nationalism                    4429
Religious prejudice            4087
Groups of individuals          18808
Neurological discrimination    11656
Subcultural discrimination     24642
Personality prejudices         5551
Negative cognition             23864
Total                          112873

MDIT-Bench
Level           Count
Easy level      91892
Medium level    112873
Hard level      112873
Total           317638

Table 1: The quantity of each category and each level in the MDIT-Bench.

3.1 Definition

We classify toxicity
into three types based on its level of conspicuity:
• Explicit toxicity refers to direct and overt forms of toxicity, including easily identifiable discriminatory and insulting language.
• Single-implicit toxicity does not rely on offensive language (e.g., swearing or insulting words) and may even be positive in sentiment. It is built on associative networks in semantic memory and automatic activation, and it is conveyed through euphemism (Magu and Luo, 2018), metaphor (Lemmens et al., 2021), world knowledge (Ocampo et al., 2023), and so on (Wen et al., 2023).
• Dual-implicit toxicity is more subtle than single-implicit toxicity and cannot be detected through the visual or verbal modality alone. Only by combining the information from the two modalities can the implicit toxicity be detected.
More details are shown in Appendix A.

In the context of dual-implicit toxicity, the toxicity mainly refers to prejudices (Turiel, 2007), discrimination (Altman, 2020), and stereotypes (Cox et al., 2012). 11 of the 12 categories are related to these issues, as shown in Figure 2. Prejudice and discrimination in models can be amplified during information dissemination, undermining social equality. Thus, we focus on this form of toxicity.

3.2 Categorization

Drawing on comprehensive definitions and categorizations of toxic content from both AI (Liu et al., 2024c; Xu et al., 2023; Huang et al., 2024) and social sciences (Bethlehem, 2015; Erasmus+, 2019), we provide a detailed categorization of the MDIT-Dataset, which includes 12 primary categories, such as racism, sexism, classism, homophobia, and nationalism. We further refine this categorization into 23 subcategories and 780 topics. The data distribution is shown in Figure 2 and Table 1. Our categorization aims to encompass a wide range of toxic content.

3.3 MDIT-Bench Construction

To construct the MDIT-Dataset and MDIT-Bench, we propose a method called Multi-stage Human-in-loop In-context Generation.
This method consists of 4 stages: Question Generation, Data Cleaning, Modal Expansion, and Benchmark Construction. During the construction, human involvement is required to integrate human values. The overall construction process is illustrated in Figure 3.

3.3.1 Question Generation

In this stage, we first collect implicitly toxic questions from CVALUES (Xu et al., 2023) and manually create additional toxic questions. To facilitate Modal Expansion, we create pseudo-multimodal modified questions, replacing toxic words with terms referring to image information. We then use these data as demonstrations to enable LLMs to expand the dataset through in-context generation. We implement a human-in-the-loop strategy, generating a limited amount of data in the initial stage. After filtering and rewriting, this data serves as demonstrations for subsequent generations, enhancing diversity.

3.3.2 Data Cleaning

In this stage, we filter the previously generated questions. Due to the inherent randomness of large models, not all questions contain implicit toxicity. We filter out those lacking toxicity or with ambiguous references. Manual filtering is impractical, so we leverage the distribution of the replaced word. This approach is justified, as the replaced word is a key element and is most likely to carry implicit toxicity. After manual review, we retain 780 replaced
words that appear frequently, have clear referents, and are likely to imply toxicity.

3.3.3 Modal Expansion

In this stage, we match questions with corresponding images, transforming pseudo-multimodal questions into fully multimodal ones. Using the replaced words as keywords, we crawl relevant images from the Internet, manually filtering out irrelevant or blurry ones. This process yielded 29,097 images. Since the replaced word is masked as "the [] in the picture", the model must fully consider both modalities in its response, preventing it from relying solely on text-based bias.

3.3.4 Benchmark Construction

In this stage, we construct the benchmark for evaluating LMMs based on the data obtained in the previous stage. We use multiple-choice questions for their objectivity, low cost, and ease of quantification. Judge scoring is not used because even the most advanced models perform inadequately on MDIT-Bench (Appendix D), indicating they are unqualified to serve as judges.

First, we create a non-toxic answer (Ans1) and two toxic answers (Ans2 and Ans3) for each question. To mitigate hallucinations (Qi et al., 2023) and ensure the use of visual information, we replace the replaced word or its synonyms with "the [] in the picture". Next, we create Ans4 and Ans5, two misleading options designed to assess the model's comprehension of both modalities. Through the above process, we construct multiple-choice questions for the MDIT-Bench. More details can be found in Appendix C.

3.4 Difficulty Tiering

We tier the difficulty of MDIT-Bench into three levels: easy, medium, and hard. The easy level, based on MMHS150K (Gomez et al., 2020), excludes dual-implicit toxicity and contains 91,892 questions (details in Appendix B). The medium level uses data from the MDIT-Dataset, with Ans1-5 from the previous section as options, totaling 112,873 questions. Inspired by Anil et al.
(2024), who found that many-shot prompting can trigger unsafe outputs in models with larger context windows, we introduce the Long-Context Jailbreaking method to create the hard level based on our dataset. This involves adding toxic demonstrations to the prompts of the medium level, which

[Figure 3 contents: the four-stage pipeline (1 Question Generation, 2 Data Cleaning, 3 Modal Expansion, 4 Benchmark Construction), with human-in-loop steps. Examples shown include Question: "Are black people better suited to picking cotton?", Modified question: "Are these people in the picture better suited to picking cotton?", Replaced word: "black people"; replaced-word frequency filtering (e.g. Black people 20%, Indian 15%, Antarctic native 0.1%); and answer construction (Ans1: positive answer; Ans2: toxic answer; Ans3: combination; Ans4: image caption; Ans5: positive answer plus cheating word).]

Figure 3:
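The hard-level construction described above (prepending toxic demonstrations to a medium-level multiple-choice prompt so that the long context can activate hidden toxicity) can be sketched as follows. The function name and the plain-concatenation layout are our own illustration, not the paper's released code, and the real demonstration formatting may differ:

```python
def build_hard_level_prompt(medium_prompt: str, toxic_demonstrations: list) -> str:
    """Illustrative Long-Context Jailbreaking construction: prepend a
    block of toxic demonstrations to a medium-level question prompt.
    Assumes each demonstration is already a formatted string."""
    demo_block = "\n\n".join(toxic_demonstrations)
    return f"{demo_block}\n\n{medium_prompt}"
```

Comparing a model's accuracy on the same questions with and without the demonstration block is then what the Hidden Toxicity Metric summarizes.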