3.1.2 Build a Multi-modal Knowledge Base. With the logs, performance metrics and bug descriptions retrieved in the prior step, we can build a knowledge base that associates related information. The idea behind the knowledge base is to do up-front work so that later, during a triage step, we can quickly find similar bugs by comparing their logs and telemetric data, and then rapidly retrieve the corresponding bug descriptions to help create a mitigation plan. An important design choice in ARCA is to use search augmentation to improve answer quality instead of storing the potential answers in the LLM parameters via a technique such as fine-tuning or training "Expert Heads" in a Mixture of Experts (MoE) model. We adopted search augmentation because creating a knowledge base, similar to creating a database, is much less resource-intensive than LLM training. Search augmentation additionally leaves us the freedom to update the knowledge base as information evolves; in contrast, pretrained models are rigid and hence unable to learn dynamically without some form of fine-tuning (which would often require resources on the same scale as the original model tuning procedure). One drawback of this decision is that it can increase latency, but we will show that ARCA is rapidly responsive and fully suited to interactive exploration of puzzling anomalies by SRE teams. To enable fast similarity search among logs, instead of directly searching the text space of the logs, ARCA maps (embeds) processed log snippets to a high-dimensional latent space: the embedding space. The system later uses cosine similarity to quantify the difference between two log snippets. Calculating cosine similarity only involves computing the product of two matrices, which can be carried out at very high speed, particularly with the help of a GPU; the task is much quicker than searching in the text space.
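As a minimal sketch (not ARCA's actual implementation), cosine similarity between one query embedding and a matrix of stored embeddings reduces to a single matrix-vector product once the rows are L2-normalized:

```python
import numpy as np

def cosine_similarities(query: np.ndarray, corpus: np.ndarray) -> np.ndarray:
    """Cosine similarity of one query vector against every row of `corpus`."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return c @ q  # one matrix-vector product; trivially GPU-friendly

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 3072)).astype(np.float32)  # 3072-dim, as in ARCA
query = corpus[42] + 0.01 * rng.normal(size=3072).astype(np.float32)
best = int(np.argmax(cosine_similarities(query, corpus)))
print(best)  # the near-duplicate row, 42
```

Because the corpus can be normalized once at knowledge-base build time, each query costs only one matrix-vector product plus one normalization.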
To further accelerate the similarity search on very large data sets, ARCA uses approximate K-nearest neighbors and organizes the log embeddings in two tiers. To find the most similar log embeddings, we first look for the closest centroids, on the assumption that these event clusters will contain the embeddings most relevant to the incident report. We then use cosine similarity again, but now include performance metrics in our approximate similarity test. To enable this, we first convert the performance metrics to a vector during our log preprocessing step by aligning the telemetric data of varying lengths and sources. Then, we store the vector in the knowledge base and, via the bug id, associate it with the other pieces of information collected from the same bug. ARCA keeps bug descriptions in natural language because they may contain important details that stood out to the human observer of the issue and hence are likely to be of high value to the tasks performed by the Evaluation LLM in later steps. Additionally, bug resolution descriptions contain mitigation plans which proved effective in the past, and the Generator LLM can use those to propose a new plan to mitigate the ongoing issues. ARCA's embedding space
https://arxiv.org/abs/2505.21419v2
contains 3072-dimensional vectors of 32-bit floating point numbers, and we embed the logs using the "text-embedding-3-large" model from OpenAI. The preprocessed performance metrics are represented as 21-dimensional vectors of 32-bit floating point numbers. The knowledge base in ARCA is made of 3 object stores, one each for the log embeddings, the vectorized performance metrics and the bug descriptions, and we maintain a mapping relationship between them. In the future we plan to allow dynamic additions to the database, but the PoC works with a static data set.

3.1.3 Process Log Files. ARCA supports two data modalities: logs of semi-structured text and bug descriptions containing human inputs in natural language. We preprocess the logs to improve the accuracy of ARCA's triage technique. The logs we consider are created by a variety of applications and system services, and take the form of text files in which system maintenance messages, warnings and errors, anomaly notations, and other reporting can be intermixed.

EuroMLSys '25, March 30–April 3, 2025, Rotterdam, Netherlands. Wang et al.

Figure 2: t-SNE (t-distributed stochastic neighbor embedding) of the embedded log content. The x- and y-axes show the coordinates in the t-SNE embedding space.

After examining the log files in our evaluation data set, we have found that: (1) A relatively small subset of log lines are relevant to any given incident. (2) Log records of a given type are formatted in similar ways; for example, heartbeat messages for the same component differ only in their timestamps. (3) For any single incident, a log may contain multiple relevant data modalities: text, tabular data, time-series data, etc. ARCA filters log contents by retrieving the bugs that show a "similar pattern" in logs in the query step.
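The three-store layout keyed by bug id can be sketched as follows (a hypothetical illustration; the class and method names are not from the paper):

```python
import numpy as np

class KnowledgeBase:
    """Toy sketch of ARCA's knowledge base: three object stores linked by bug id."""
    def __init__(self):
        self.log_embeddings = {}   # bug_id -> np.ndarray, shape (3072,), float32
        self.metric_vectors = {}   # bug_id -> np.ndarray, shape (21,), float32
        self.descriptions = {}     # bug_id -> natural-language bug report text

    def add(self, bug_id, log_emb, metrics, description):
        self.log_embeddings[bug_id] = np.asarray(log_emb, dtype=np.float32)
        self.metric_vectors[bug_id] = np.asarray(metrics, dtype=np.float32)
        self.descriptions[bug_id] = description

kb = KnowledgeBase()
kb.add("bug-001", np.zeros(3072), np.zeros(21), "CPU overload in checkout service")
print(kb.descriptions["bug-001"])
```

Keeping the stores separate lets the similarity searches run over dense arrays while the bug id recovers the associated natural-language description at the end.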
To keep the LLM focused on important features, it is important that the log contents visible to the LLM be relevant to the issues flagged in the problem report, and free of irrelevant information, even if that information might be of value for maintaining the system or other purposes. Accordingly, we run a Feature Extraction LLM that we configure to remove repetitious content and extract data that distinguishes each record from the others occurring at the same time, such as error messages, special events, performance metric readings, etc. We additionally convert all the data modalities that we encounter to text. Length considerations precluded reproducing the prompt here, but we include it as Appendix A, Fig. 5. To assess the efficacy of this log preparation approach, we processed 800 log files from our data set using OpenAI's gpt-4o as the Feature Extraction LLM. First, we generated embeddings from the raw log content with no preprocessing. Then, we preprocessed the logs and embedded only the filtered and aggregated outputs of the Feature Extraction LLM. We used t-SNE [19] to project the high-dimensional embedding space to a 2-D image while maintaining relative Euclidean distances. Doing so yielded the images seen in Fig. 2, where each dot represents an embedding. As we can see, the embedding of processed logs (the right picture) resolves more clearly, showing a cleaner clustering pattern with
far fewer clusters than for the raw log (the left picture): evidence that this step achieved its goals. We additionally colored the dots in both images to signify root cause labels. As we can easily see, the dots from the same root cause, i.e., memory, CPU and network, are correctly clustered after preprocessing but were jumbled before. Especially interesting are the green dots, for incidents in which a mix of CPU and memory issues simultaneously caused degraded system performance: these data points are correctly located between the clusters for CPU issues and those for memory issues.

3.1.4 Align Telemetric Data. To enable a similarity search, it is necessary to convert telemetry data to a fixed-length vector. Raw data can be highly platform-specific: a matrix with one row per time stamp and a column for each performance counter (CPU utilization, memory utilization, etc.), but potentially with missing data due to faults and timeouts, idiosyncratic formats and units, and hardware-specific metrics. To overcome these issues, ARCA focuses on a set of 7 docker performance counters, all of which are commonly available when diagnosing cloud microservice incidents. These track CPU and memory utilization, network I/O, block device I/O, average operation latency, and socket errors. Servers are highly heterogeneous, hence raw values are not directly comparable. Accordingly, we calculate the normalized first-order gradient, the average value and the standard deviation for each time series. In this way, we can convert the matrix of performance counter readings to a vector of 21 floating point numbers.

3.2 ARCA-PoC Phases

ARCA-PoC runs in two sub-phases: the query phase and the generating phase. In the query phase, we interrogate the populated knowledge base by carrying out the similarity search on log embeddings and vectorized performance metrics.
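The telemetry alignment step (§3.1.4) can be sketched as follows. This is a hedged illustration: the paper specifies the three statistics but not the exact normalization, so the mean-level normalization here is an assumption:

```python
import numpy as np

def vectorize_telemetry(readings: np.ndarray) -> np.ndarray:
    """Collapse a (timestamps x 7 counters) matrix into a 21-dim vector:
    per counter, a normalized mean first-order gradient, the mean, and
    the standard deviation."""
    grads = np.gradient(readings, axis=0)           # first-order gradient per column
    means = readings.mean(axis=0)
    denom = np.where(means == 0, 1.0, means)        # avoid dividing by zero
    norm_grad = grads.mean(axis=0) / denom          # assumed normalization by mean level
    return np.concatenate([norm_grad, means, readings.std(axis=0)]).astype(np.float32)

series = np.random.default_rng(1).random((120, 7))  # 120 samples of 7 counters
vec = vectorize_telemetry(series)
print(vec.shape)  # (21,)
```

Whatever the lengths of the raw time series, the output is always a fixed 21-dimensional vector, which is what makes the second-round similarity search possible.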
This phase is analogous to the triage step, and its outputs are the textual descriptions of similar bugs. The descriptions are then sent to the generating phase to create a mitigation plan for the SREs.

3.2.1 Query Phase. Once our knowledge base has been populated, ARCA performs an approximate match query using posts associated with a new incident as its query prompts. The methodology used to extract the relevant aspects of the incident is quite similar to the one used to build the knowledge base, and yields an embedding vector that we can understand as an abstract representation of the new incident in the knowledge space. Our goal is to perform an approximate nearest neighbor (ANN) search. We do this in two steps: first, we identify the cluster centroids closest to the query embedding, and then within those clusters we search for known prior incidents with similar characteristics. Here, ARCA departs slightly from common RAG approaches, which retrieve only the top tens of documents based on the similarity score. Instead, ARCA retrieves the top hundreds of bugs reported by the similarity search. This is because ARCA treats similarity search as a triage step whose purpose is to coarsely categorize a bug by placing it within a family of issues
so that the corresponding SREs can chime in. For example, if a bug seems to be CPU-related, it could be assigned to SREs working on performance issues, ones working on scheduling, and ones investigating disruptions associated with locking. With just a small number of approximate matches we might miss some relevant categories, but with hundreds of approximate matches, we have a high likelihood of routing the issue to all SREs that might have insight into it. From the bugs with similar log patterns, we additionally perform a second-round KNN search in the high-dimensional space of the vectorized performance metrics. Here, an issue of cost arises: our work uses OpenAI language-generation APIs that are billed on a per-use basis. Accordingly, we only use one tenth of our prior-report candidates for generation of the bug explanation hypotheses that the developer will be shown. In the evaluation, we will show that this filtering step does not hurt overall accuracy. In ARCA, we use the FAISS library [8] to carry out the similarity search so that it will run on GPU accelerators. Retrieving between the top 100 and the top 500 bugs, we can reach a triage success rate as high as 92%. We will discuss the effect of the number of retrieved bugs in the evaluation section.

Diagnosing and Resolving Cloud Platform Instability with Multi-modal RAG LLMs. EuroMLSys '25, March 30–April 3, 2025, Rotterdam, Netherlands.

3.2.2 Generating Phase. In the generating phase, we first use an Evaluation LLM to find the bug whose description most closely fits each incident. We pass the description of the bug fix (which contains the mitigation plan) to the Generator LLM, which in turn produces text explaining the choice and suggesting a new mitigation plan to the SREs. The approach is similar to a concept sometimes referred to as LLM-as-a-judge [22] (the corresponding prompt details are included in Appendix A, Fig. 6).
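A minimal numpy sketch of the two-step ANN search described above (ARCA uses FAISS for this step; the toy centroid set and cluster assignment here are illustrative assumptions):

```python
import numpy as np

def two_tier_search(query, embeddings, centroids, assignment, n_probe=2, top_k=5):
    """Tier 1: pick the n_probe nearest centroids by cosine similarity.
    Tier 2: rank only the embeddings assigned to those clusters."""
    q = query / np.linalg.norm(query)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    probe = np.argsort(c @ q)[-n_probe:]                  # closest clusters
    cand = np.flatnonzero(np.isin(assignment, probe))     # members of those clusters
    e = embeddings[cand] / np.linalg.norm(embeddings[cand], axis=1, keepdims=True)
    order = np.argsort(e @ q)[::-1][:top_k]               # best matches first
    return cand[order]                                    # ids of nearest prior bugs

rng = np.random.default_rng(2)
emb = rng.normal(size=(500, 64)).astype(np.float32)
en = emb / np.linalg.norm(emb, axis=1, keepdims=True)
cent = emb[rng.choice(500, 8, replace=False)]             # toy centroids
cn = cent / np.linalg.norm(cent, axis=1, keepdims=True)
assign = np.argmax(en @ cn.T, axis=1)                     # cosine-based assignment
hits = two_tier_search(emb[7], emb, cent, assign)
print(hits[0])  # querying with a stored embedding returns itself first: 7
```

Because only the probed clusters are scanned in tier 2, the per-query cost scales with cluster size rather than with the whole knowledge base.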
To improve accuracy, we employ a Chain-of-Thought prompting style (Appendix A, Fig. 7), using a series of similar CoT prompts in accordance with standard practice in few-shot learning. The output of this step is the closest resolved bug. We then prompt the Generator LLM with the input shown in Fig. 8. An additional benefit of using LLM-as-a-judge is that we can ask the evaluating LLM to explain how and why it reached certain conclusions, either in summary form or even as a sequence of step-by-step decisions, allowing the SREs to better understand the results and hence increasing confidence in its coverage. Were ARCA to operate in a single step, it would have more of a black-box feel that SREs might distrust. Our design is human-centric: ARCA generates mitigation plans and reports them to SREs for final review, together with illustrative data drawn from any similar incidents it found. We are not considering direct intervention by ARCA at this time, in part because some privileged commands (like restarting critical services) require privilege escalation and should not occur without close scrutiny and Dev-Ops (human) approval. A benefit is that
ARCA's ability to identify similar prior incidents may be helpful to SREs even if its proposed mitigation plans are flawed. To obtain recommendations with a natural tone and style, ARCA uses gpt-4o for both the evaluating and generation LLM stages.

4 EXPERIMENTAL RESULTS

To evaluate our work, we first build a data set of 800 bug tickets containing descriptions, logs and performance metrics. Then we build ARCA's knowledge base using 700 bug tickets, saving 100 for testing the PoC's performance.

4.1 Data Set

Our data set of bugs arising in microservice systems is typical of modern cloud infrastructures. To keep our data set as general as possible, we restrict our attention to the bug features reported from the docker container, including the docker logs and the performance metric readings from the "top" command, without any application-level features. We use a micro-service workload generator, "DeathStar" [3], to run different micro-service applications, like "HotelReservation", "SocialNetwork", etc. As the applications execute, we inject errors. To load the CPU, we modified the benchmark so that before processing a new request, the application performs a CPU-intensive operation.

Figure 3: Accuracy of ARCA-PoC. The x-axis represents the output size of the similarity search. The left y-axis shows the triage accuracy, while the right y-axis shows the system accuracy.

We also increase the number of requests per second during runtime until the application crashes from overload. To simulate a memory leak, we modify the benchmark by introducing a memory allocation in the callback function but intentionally not freeing the memory. Finally, to increase network delays, we introduce a random sleep in the callback function. To make the challenge harder, we have introduced a fourth category of error that causes both a memory leak and a long-running computation, resulting in two possible crash types.
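The three single-cause fault injectors can be sketched as wrappers around a request callback. This is purely illustrative: the paper's modifications are made inside the DeathStarBench sources, not via a Python wrapper:

```python
import random
import time

LEAK = []  # module-level list, so "leaked" allocations are never freed

def inject(callback, fault):
    """Wrap a request callback with one of the three injected fault types."""
    def wrapped(*args, **kwargs):
        if fault == "cpu":
            sum(i * i for i in range(50_000))       # CPU-intensive busy work
        elif fault == "memleak":
            LEAK.append(bytearray(1024))            # allocate, intentionally never free
        elif fault == "netdelay":
            time.sleep(random.uniform(0.0, 0.01))   # random added latency
        return callback(*args, **kwargs)
    return wrapped

handler = inject(lambda req: f"ok:{req}", "memleak")
print(handler("r1"), len(LEAK))
```

The fourth, mixed category would simply combine the "cpu" and "memleak" branches in one wrapper.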
Each of the four categories of injected errors is used to create 100 experiments, which we diversify by tweaking settings. We run each experiment twice so that we can automatically label each run with its closest bug, namely the run generated from the same experiment configuration, yielding 4*100*2=800 bug incidents. For each bug, we use gpt-4o to generate a human-readable bug report. In the generating prompt, we have provided the root causes, like "the issue is caused by a random delay in every invocation of the call back function X", to ensure that the bug report contains meaningful mitigation plans. We have also instructed the LLM to describe the bug by summarizing the performance metric readings and the logs. We thus obtain 800 bug tickets that contain the bug descriptions with mitigation plans, the time series of performance metrics and the logs. To evaluate the efficacy of our similarity search in the log embedding space, which is the core of the RAG system, we use public data sets from four supercomputing systems: BGL, Thunderbird, Liberty, and Spirit [15].

4.2 End-to-end Evaluation

We first study the effect of using different numbers of nearest neighbors reported from the similarity search
module. This is also the size of the output from the triage step, so we compare both the triage accuracy and the system accuracy. For a triage operation to be accurate, ARCA needs to include the closest bug in the output of the triage step. For the whole system to be accurate, ARCA needs to pick the labeled closest bug as the output of the Evaluation LLM. The results are shown in Fig. 3. To account for the randomness introduced by the LLMs, we evaluated the average performance on 300 queries for each setting, and for each query we repeated the experiment 3 times.

Figure 4: Cost analysis of using ARCA-PoC. The x-axis represents the output size of the similarity search. The left y-axis shows the average cost of a single query in US cents, while the right y-axis shows its average time consumption.

In our test, we increase the log similarity search output size from 100 to 400, and we filter out 20% of the chosen bugs in the similarity search using telemetric data. As we can see, triage accuracy increases steadily with the triage set size. However, the overall system accuracy drops when we increase the similarity search size from 300 to 400. Upon inspection we found that when the similarity search size is small (less than 200), the right answer is often not present in the input prompt; this ceases to be an issue with larger set sizes. Interestingly, however, although triage accuracy at set size 400 is significantly higher than that for size 300, overall system accuracy drops: the Evaluation LLM apparently becomes overwhelmed by choices. It is also worth noting that we cannot increase the similarity search output size without limit. GPT-4o, the LLM we use for our Evaluation LLM, has a context window limit of 30,000 tokens, and the input cannot be longer than that. This token window limit corresponds to a triage output set size of slightly more than 400.
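As a back-of-the-envelope check, the relationship between the 30,000-token window and the 400-bug ceiling can be reproduced with assumed per-bug and overhead token counts. The figures of 85 tokens per retained description and 1,500 tokens of prompt overhead are illustrative assumptions chosen to be consistent with the reported limit, not numbers from the paper:

```python
CONTEXT_LIMIT = 30_000   # token window stated for the Evaluation LLM
PROMPT_OVERHEAD = 1_500  # ASSUMED tokens for instructions + incident summary
TOKENS_PER_BUG = 85      # ASSUMED average tokens per retained bug description
KEEP_FRACTION = 0.8      # 20% of candidates are filtered out via telemetric data

def max_triage_size():
    budget = CONTEXT_LIMIT - PROMPT_OVERHEAD
    kept = budget // TOKENS_PER_BUG       # descriptions that fit in the window
    return int(kept / KEEP_FRACTION)      # triage size before the 20% filter

print(max_triage_size())  # a little over 400 under these assumptions
```

Under these assumptions the cap lands slightly above 400, matching the limit observed in the evaluation.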
We also evaluated time and financial cost per query. Gpt-4o uses a decoder-only neural network structure, so the longer the input in tokens, the more time it takes to generate an answer. Also, OpenAI charges clients on the basis of the number of tokens computed. Taking these considerations together, we arrive at the results shown in Fig. 4. As we can see, for large group sizes, generation is significantly slower and cost mounts substantially.

4.3 Evaluation of Similarity Search

There are two similarity search steps in ARCA, one in the high-dimensional space of log embeddings and the other in the telemetric encoding. We have evaluated the efficacy of each step against the combined performance, and report the results in Table 1, where we pick the triage size to be 300. The key finding is that multi-modal similarity search saves time and money relative to approaches that require one-by-one searches across modes followed by a human integrative activity.

Data Modes             | Accuracy | Cost (cents) | Time (s)
Telemetric Data Only   | 0.34     | 2.81         | 4.67
Log Only               | 0.72     | 2.31         | 4.16
Telemetric Data + Log  | 0.74     | 2.89         | 4.89

Table 1: Comparison of the efficacy of utilizing different modes of data.

4.4 ARCA as a Log Clustering Tool

Much like log clustering tools, ARCA's RAG-LLM based log processing module can be used alone to detect anomalies in logs. In our evaluation, we use public log data sets reported from 4 supercomputing labs and report the results in Table 2. The numbers before '/' are from ARCA and the ones after are the state-of-the-art numbers reported in [6, 20], which are achieved through proprietarily fine-tuned LLMs. From the results, ARCA-PoC outperforms on all data sets despite requiring only off-the-shelf embedding LLMs.

Data Set    | F1-Score    | Recall      | Precision
BGL         | 0.995/0.976 | 0.99/0.982  | 1/0.970
Thunderbird | 0.984/0.97  | 0.975/0.99  | 1/0.97
Spirit      | 0.993/0.992 | 0.986/0.999 | 1/0.984
Liberty     | 0.986/*     | 0.986/*     | 0.986/*

Table 2: Evaluation of using ARCA-PoC as a log clustering tool. *: For the Liberty data set, public baseline data is not available.

5 CONCLUSIONS AND FUTURE WORK

ARCA is a work in progress, but already confirms the promise of the multimodal RAG LLM approach to searching the complicated incident report databases that arise when troubleshooting cloud-hosted applications. In work still underway, we are investigating other possible ways to organize the ARCA knowledge base, including the option of using similarity search algorithms beyond the form of cosine similarity used in the ANN step. We expect that this will be needed as we expand the data modality coverage of the ARCA platform to include performance metrics and traces. Synthesis of generated answers that incorporate observations from multiple modalities raises especially interesting questions for study.

ACKNOWLEDGMENTS

We would like to thank Tiancheng Yuan for his insight on RAG LLMs and Miles Bramwit for his efforts on data collection.
We are also grateful for support we received from Siemens, NVIDIA, Cisco and Microsoft.

REFERENCES

[1] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, and et al. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs.CL] https://arxiv.org/abs/2005.14165
[2] Qian Cheng, Doyen Sahoo, Amrita Saha, Wenzhuo Yang, Chenghao Liu, Gerald Woo, Manpreet Singh, Silvio Savarese, and Steven C. H. Hoi. 2023. AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges. arXiv:2304.04661 [cs.LG] https://arxiv.org/abs/2304.04661
[3] Yu Gan, Yanqi Zhang, Dailun Cheng, Ankitha Shetty, Priyal Rathi, Christina Delimitrou, and et al. 2019. An Open-Source Benchmark Suite for Microservices and Their Hardware-Software Implications for Cloud & Edge Systems. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (Providence, RI, USA) (ASPLOS '19). Association for Computing Machinery, New York, NY, USA, 3–18. https://doi.org/10.1145/3297858.3304013
[4] Jingkun Gao, Xiaomin Song, Qingsong Wen, Pichao Wang, Liang Sun, and Huan Xu. 2021. RobustTAD: Robust Time Series Anomaly Detection via Decomposition and Convolutional Neural Networks. arXiv:2002.09545 [cs.LG] https://arxiv.org/abs/2002.09545
[5] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, and et al.
2024. Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv:2312.10997 [cs.CL] https://arxiv.org/abs/2312.10997
[6] Hongcheng Guo, Jian Yang, Jiaheng Liu, Jiaqi Bai, Boyang Wang, Zhoujun Li, Tieqiao Zheng, Bo Zhang, Junran Peng, and Qi Tian. 2024. LogFormer: A Pre-train and Tuning Pipeline for Log Anomaly Detection. arXiv:2401.04749 [cs.LG] https://arxiv.org/abs/2401.04749
[7] Tao Huang, Pengfei Chen, and Ruipeng Li. 2022. A Semi-Supervised VAE Based Active Anomaly Detection Framework in Multivariate Time Series for Online Systems (WWW '22). Association for Computing Machinery, New York, NY, USA, 10 pages. https://doi.org/10.1145/3485447.3511984
[8] Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017. Billion-scale similarity search with GPUs. arXiv:1702.08734 [cs.CV] https://arxiv.org/abs/1702.08734
[9] Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Wen-tau Yih, and et al. 2020. Dense Passage Retrieval for Open-Domain Question Answering. arXiv:2004.04906 [cs.CL] https://arxiv.org/abs/2004.04906
[10] Philippe Laban, Alexander R. Fabbri, Caiming Xiong, and Chien-Sheng Wu. 2024. Summary of a Haystack: A Challenge to Long-Context LLMs and RAG Systems. arXiv:2407.01370 [cs.CL] https://arxiv.org/abs/2407.01370
[11] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, and et al. 2021. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv:2005.11401 [cs.CL] https://arxiv.org/abs/2005.11401
[12] Aodong Li, Yunhan Zhao, Chen Qiu, Marius Kloft, Padhraic Smyth, Maja Rudolph, and Stephan Mandt. 2024. Anomaly Detection of Tabular Data Using LLMs. arXiv:2406.16308 [cs.LG] https://arxiv.org/abs/2406.16308
[13] Qingwei Lin, Hongyu Zhang, Jian-Guang Lou, Yu Zhang, and Xuewei Chen. 2016. Log Clustering Based Problem Identification for Online Service Systems. In 2016 IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C). 102–111.
[14] Man Li, Ziyue Li, Lijun Sun, and Fugee Tsung. 2024. Robust Self-Supervised Deep Tensor Decomposition for Corrupted Time Series Classification. In Anomaly Detection with Foundation Models. Jeju, South Korea. https://adfmw.github.io/ijcai24/index.html
[15] Adam Oliner and Jon Stearley. 2007. What Supercomputers Say: A Study of Five System Logs. In 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN'07). 575–584. https://doi.org/10.1109/DSN.2007.103
[16] Md R. Parvez, Wasi U. Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Retrieval Augmented Code Generation and Summarization. arXiv:2108.11601 [cs.SE] https://arxiv.org/abs/2108.11601
[17] Hansheng Ren, Bixiong Xu, Yujing Wang, Chao Yi, Congrui Huang, Xiaoyu Kou, Tony Xing, Mao Yang, Jie Tong, and Qi Zhang. 2019. Time-Series Anomaly Detection Service at Microsoft. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '19). ACM, 3009–3017. https://doi.org/10.1145/3292500.3330680
[18] Bianca Schroeder and Garth A. Gibson. 2007. Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?. In 5th USENIX Conference on File and Storage Technologies (FAST 07). USENIX Association, San Jose, CA. https://www.usenix.org/conference/fast-07/disk-failures-real-world-what-does-mttf-1000000-hours-mean-you
[19] Laurens v. d. Maaten and Geoffrey Hinton. 2008. Visualizing Data using t-SNE. Journal of Machine Learning Research 9, 86 (2008), 2579–2605. http://jmlr.org/papers/v9/vandermaaten08a.html
[20] Yuqing Wang, Mika V. Mäntylä, Jesse Nyyssölä, Ke Ping, and Liqiang Wang. 2025. Cross-System Software Log-based Anomaly Detection Using Meta-Learning. arXiv:2412.15445 [cs.SE] https://arxiv.org/abs/2412.15445
[21] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Denny Zhou, and et al. 2023. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.
arXiv:2201.11903 [cs.CL] https://arxiv.org/abs/2201.11903 [22] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng,
Siyuan Zhuang, Zhanghao Wu, Ion Stoica, and et al. 2024. Judging LLM-as-a-judge with MT-bench and Chatbot Arena. In Proceedings of the 37th International Conference on Neural Information Processing Systems (New Orleans, LA, USA) (NIPS '23). Curran Associates Inc., Red Hook, NY, USA, Article 2020, 29 pages.
[23] Jieming Zhu, Shilin He, Pinjia He, Jinyang Liu, and Michael R. Lyu. 2023. Loghub: A Large Collection of System Log Datasets for AI-driven Log Analytics. In 2023 IEEE 34th International Symposium on Software Reliability Engineering (ISSRE). 355–366. https://doi.org/10.1109/ISSRE59848.2023.00071
[24] Yichen Zhu, Weibin Meng, Ying Liu, Shenglin Zhang, Tao Han, Shimin Tao, and Dan Pei. 2021. UniLog: Deploy One Model and Specialize it for All Log Analysis Tasks. arXiv:2112.03159 [cs.NI] https://arxiv.org/abs/2112.03159

Appendix A: LLM Prompts

Figure 5: Prompt for LLM to process log file.
Figure 6: Prompt for the Evaluation LLM.
Figure 7: The CoT contexts for the Evaluation LLM.
Figure 8: Prompt for the Generator LLM.

Received 11 February 2025; accepted 25 February 2025; revised 7 March 2025
arXiv:2505.21420v1 [cs.CV] 27 May 2025. PAPER SUBMITTED.

Mentor3AD: Feature Reconstruction-based 3D Anomaly Detection via Multi-modality Mentor Learning

Jinbao Wang1, Hanzhe Liang1, Can Gao1, Chenxi Hu1, Jie Zhou1, Yunkang Cao2, Linlin Shen1, Weiming Shen3,†
1Shenzhen University, 2Hunan University, 3Huazhong University of Science and Technology

Abstract —Multimodal feature reconstruction is a promising approach for 3D anomaly detection, leveraging the complementary information from dual modalities. We further advance this paradigm by utilizing multi-modal mentor learning, which fuses intermediate features to better distinguish normal features from feature differences. To address these challenges, we propose a novel method called Mentor3AD, which utilizes multi-modal mentor learning. By leveraging the shared features of different modalities, Mentor3AD can extract more effective features and guide feature reconstruction, ultimately improving detection performance. Specifically, Mentor3AD includes a Mentor of Fusion Module (MFM) that merges features extracted from the RGB and 3D modalities to create a mentor feature. Additionally, we have designed a Mentor of Guidance Module (MGM) to facilitate cross-modal reconstruction, supported by the mentor feature. Lastly, we introduce a Voting Module (VM) to more accurately generate the final anomaly score. Extensive comparative and ablation studies on MVTec 3D-AD and Eyecandies have verified the effectiveness of the proposed method.

Index Terms —3D Anomaly Detection, point cloud, multimodal, contrastive learning.

I. INTRODUCTION

Three Dimension Anomaly Detection (3DAD) has been extensively used in high-precision industrial product inspection, attracting considerable attention from the computer vision community [1]–[3]. It is dedicated to identifying anomalous points or regions that deviate from the normal distribution in a given 3D point cloud and depth data.
Existing methods are mainly classified into unimodal 3DAD and multimodal 3DAD [4]–[7]. Unimodal 3DAD detects anomalies from the point cloud (or depth) structure, with methods based on feature embedding [8]–[11] and point cloud reconstruction [1], [12]. However, these methods mainly focus on improving unimodal features and do not fully explore the complementarity of different modalities. Multimodal 3DAD enhances the feature set by integrating the RGB and 3D modalities, including depth maps and point clouds. While depth maps can experience information loss due to occlusions, point clouds offer a more accurate representation of 3D structures. Effective fusion is critical for multimodal anomaly detection. However, existing methods suffer from interference between different modalities because of insufficient fusion. Several methods have been proposed

†Corresponding Author. This paper is mainly realised by Hanzhe Liang at Shenzhen University; if you have any questions, please contact: 2023362051@email.szu.edu.cn.

Fig. 1. Illustration of (a) the base mode and (b) our proposed mode. The base mode integrates features simply, while our mode excels in capturing shared features via mentor learning.

to fully utilize the complementary nature of these modalities. For example, BTF [13] and M3DM [14] provide basic fusion strategies; however, they neglect the combination mechanisms between different modalities, which
https://arxiv.org/abs/2505.21420v1
leads to poor performance in the final scoring stage. Additionally, Shape-Guided [15] relies on feature alignment. Although this method attempts to leverage complementary information between modalities, it may not fully exploit it, which negatively impacts subsequent detection performance. Therefore, a key issue is posed: how to effectively integrate multimodal features, that is, to enhance the discriminative fusion features and suppress the negative effects (e.g., false positives)? To address this question, based on the fact that the shared features in multiple modalities play a crucial role in enhancing the model's discriminative ability, we present a novel approach called Multi-modality Mentor Learning (Mentor3AD) for detecting anomalies in multimodal data, as illustrated in Figure 1. Specifically, Mentor3AD targets two main challenges related to multimodal information fusion. (1) Weak Representation. Previous feature reconstruction models directly reconstruct features from one modality to another. However, this is insufficient for handling complex feature maps. Besides, these models do not leverage the correlation information between different modalities, leading to suboptimal outcomes. At the same time, the difference between normal and abnormal feature maps is often not obvious, which negatively impacts anomaly detection performance. (2) Weak Discrimination. When multiple modalities are introduced, the feature reconstruction model struggles to extract discriminatory information from these modalities. The proposed Mentor3AD consists of three modules that enhance feature fusion and improve the model's discrimination ability. The first is the Mentor of Fusion Module (MFM), which combines RGB and 3D features into the mentor features. The second is the Mentor of Guidance Module (MGM), which performs cross-modality reconstruction facilitated by the shared mentor features.
Lastly, the Voting Module (VM) aggregates the anomaly detection results from the different modalities to produce a final anomaly score. The main contributions are summarized as follows:
• This paper proposes a new multi-modality mentor learning framework, called Mentor3AD, for the 3DAD task, significantly improving performance and suppressing negative effects.
• To make better use of modal information, we design the MFM to generate mentor features that guide feature reconstruction, and the MGM, guided by the MFM, to reconstruct the opposite modality. A Voting Module combines results from different modalities to generate final anomaly scores.
• Mentor3AD achieves significant results in both comparative experiments and ablation studies, showcasing the effectiveness of the proposed method.

II. RELATED WORK

A. RGB Anomaly Detection

2D image anomaly detection comprises feature extraction and feature modeling [5], [16], [17]. Feature extraction aims to derive discriminative representations, while feature modeling captures the distribution of normal features to detect anomalies [18], [19]. Early methods employed autoencoders and inpainting frameworks [20]–[22]. Subsequent advancements integrated normalizing flows [23], [24] and memory banks [25] for robust density estimation. These innovations enhance 2D detection accuracy and extend to multimodal frameworks, advancing industrial inspection research.

B. Unimodal 3D Anomaly Detection

In unimodal 3DAD, several innovative methods address the challenges of defect identification in 3D point clouds, mainly classified into feature embedding and feature reconstruction methods. The feature embedding method determines anomalies by forming a normal feature distribution
from the features of the training set, and determines anomalies at test time by comparing the features under test against that normal feature distribution [25], [26]. Reg3D-AD [8] utilizes a registration-based approach and feature memory banks to preserve critical details essential for anomaly detection, though challenges in feature extraction and registration dependency could impact its robustness. To further constrain group-level features in Reg3D-AD, Group3AD [9] refines anomaly localization by employing group-level feature contrastive learning, which differentiates normal and abnormal patterns more effectively. ISMP [10] then skillfully uses the internal view to align global and local features and fully mine structural information. Looking3D [27] aligns 2D and 3D data for anomaly detection, particularly benefiting manufacturing and quality-control tasks. The feature reconstruction method trains the model on normal samples to reconstruct normal features, and identifies anomalies by the reconstruction error during testing. IMRNet [1] eliminates potential anomalies by iteratively masking and reconstructing, and identifies anomalies by comparing reconstruction differences. R3D-AD [12] uses diffusion models to further improve the model's reconstruction accuracy for better detection. PO3AD [6] obtains higher-resolution anomaly detection by predicting point-level offsets. Moreover, real-time pose-agnostic methods such as SplatPose [28] and SplatPose++ [29] ensure efficient anomaly detection, which is critical for industrial applications. Some zero-shot methods using LLMs also achieve good results [30]–[33]. These methods obtain excellent detection performance in the unimodal setting but face challenges when considering inter-modal complementarity; utilizing the complementarity of RGB and 3D point clouds could lead to more comprehensive anomaly detection. C.
Multimodal 3D Anomaly Detection

The landscape of multimodal 3DAD has been enriched by a variety of methods that integrate different types of data for enhanced detection, which can be broadly categorized into feature embedding and feature reconstruction approaches. Among feature embedding methods, BTF [13] highlights the importance of leveraging classical 3D features to identify defects, advocating a focus on the foundational geometric properties of the data. AST [34] employs an asymmetric student-teacher architecture, in which a normalizing-flow teacher and a feed-forward student network collaborate to distinguish anomalies by creating a divergence in their outputs. M3DM [14] stands out with its hybrid feature fusion approach, demonstrating the benefits of combining multiple data modalities by utilizing RGB, XYZ, and fused features to build three memory banks for anomaly detection. Feature reconstruction methods are represented by Shape-Guided [15], which utilizes a dual-memory framework informed by shape information, making it particularly effective at identifying anomalies in both color and shape. Instead of employing fused modalities, however, it uses shape features to steer the process, which may result in a lack of more informative fused features in complex scenes. Another method, CFM [30], aligns features across different modalities to improve the detection of abnormalities. These methods collectively contribute to a more nuanced and effective approach to anomaly detection in 3D data. However, feature reconstruction methods in a multimodal context remain
challenging, as evidenced by the difficulty of cross-modality reconstruction: significant differences in feature distribution between modalities lead to poor discrimination. This paper proposes an approach that uses a mentor modality to address this problem, leading to better anomaly detection.

Fig. 2. More visualization results on MVTec 3D-AD. The distinction between anomalous and normal regions is more effective than in previous methods. For instance, the normal-region score of our method in the Rope class is nearly equivalent to zero, demonstrating its excellent anomaly detection performance.

III. APPROACH

A. Problem Statement

Multimodal anomaly detection (2D RGB + 3D point cloud) involves a training set defined as $\mathcal{D}^{e}_{\mathrm{train}} = \{(I_q \in \mathbb{R}^{H \times W \times 3},\, P_q \in \mathbb{R}^{N_q \times 3})\}_{q=1}^{M}$, which contains $M$ normal samples from category $e$. Each sample consists of a 2D RGB image $I_q$ (resolution $H \times W$, e.g., $H = W = 224$ in MVTec 3D-AD and Eyecandies) and a 3D point cloud $P_q$ (with $N_q$ points). The test set is defined as $\mathcal{D}^{e}_{\mathrm{test}} = \{(I_q \in \mathbb{R}^{H \times W \times 3},\, P_q \in \mathbb{R}^{N_q \times 3},\, t_q \in T)\}_{q=1}^{K}$, where labels $t_q \in T = \{0, 1\}$ (0 for normal, 1 for anomaly). The objective is to train a deep anomaly detection model realizing a scoring function $\phi : \mathbb{R}^{H \times W \times 3} \times \mathbb{R}^{N_q \times 3} \to \mathbb{R}^{H \times W}$ that quantitatively evaluates the abnormality level of new instances (combining RGB images and point clouds). We show several samples in Figure 2.

B. Overview

The Mentor3AD method, illustrated in Figure 3, improves multimodal anomaly detection by employing feature-based reconstruction that incorporates additional inter-modal mentor features. This approach allows the model to better use feature information and decision-making insights across modalities, making it more robust and accurate when detecting anomalies in complex scenes. First, the point cloud XYZ and the RGB images are extracted into feature maps $F_{XYZ}$ and $F_{RGB}$ by their respective extractors.
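The data flow of Figure 3 can be sketched in a few lines. The stubs below (plain Python, element-wise means) are hypothetical stand-ins for the trained MFM and MGM modules and illustrate only the wiring between $F_{RGB}$, $F_{XYZ}$, the mentor features, and their reconstructions, not the actual computation:

```python
# Hypothetical sketch of Mentor3AD's feature flow (Figure 3). The mfm/mgm
# stubs use element-wise means purely to show the wiring; the real modules
# are trained MLPs, not averages.

def mfm(f_rgb, f_xyz):
    # Mentor of Fusion Module: fuse two modal features into a mentor feature.
    return [(a + b) / 2.0 for a, b in zip(f_rgb, f_xyz)]

def mgm(f_mtr, f_other):
    # Mentor of Guidance Module: reconstruct the missing modality from the
    # mentor feature plus the other modality's feature.
    return [(a + b) / 2.0 for a, b in zip(f_mtr, f_other)]

f_rgb = [1.0, 2.0, 3.0]                 # toy RGB feature F_RGB
f_xyz = [3.0, 2.0, 1.0]                 # toy XYZ feature F_XYZ

f_mtr = mfm(f_rgb, f_xyz)               # mentor features F_Mtr
f_xyz_rec = mgm(f_mtr, f_rgb)           # reconstructed XYZ feature
f_rgb_rec = mgm(f_mtr, f_xyz)           # reconstructed RGB feature
f_mtr_rec = mfm(f_rgb_rec, f_xyz_rec)   # reconstructed mentor feature
```

Anomaly scores are then derived from the differences between each original feature and its reconstruction, as detailed in the following subsections.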
The proposed method is outlined as follows. Training Phase. Contrastive learning is used to merge the shared features of the RGB and XYZ modalities into low-dimensional mentor features, denoted $F_{Mtr}$, through the Mentor of Fusion Module (MFM). After the MFM is trained, its parameter weights are frozen. The model then trains the Mentor of Guidance Module (MGM), which uses the mentor features to assist cross-modal feature reconstruction. This involves 1) reconstructing from $F_{RGB}$ and $F_{Mtr}$ to obtain $\tilde{F}_{XYZ}$, 2) from $F_{XYZ}$ and $F_{Mtr}$ to obtain $\tilde{F}_{RGB}$, and 3) from $\tilde{F}_{XYZ}$ and $\tilde{F}_{RGB}$ to obtain $\tilde{F}_{Mtr}$. The reconstruction differences of these three feature maps are then used to train the Voting Module. Test Phase. All weights are frozen, and the $F_{XYZ}$ and $F_{RGB}$ feature maps are first input into the MFM to generate the $F_{Mtr}$ feature maps. Then, the XYZ feature maps and the mentor feature maps are fed into the pre-trained MGM to generate the reconstructed $\tilde{F}_{RGB}$ feature maps. Note that the reconstruction process for the RGB feature maps is the same as that for the XYZ feature maps. Afterwards, the reconstructed $\tilde{F}_{RGB}$ and $\tilde{F}_{XYZ}$ feature maps are sent back into the MFM, which generates the reconstructed mentor feature maps $\tilde{F}_{Mtr}$. By examining the differences in reconstruction among the three feature maps, three
scoring maps are created. Finally, these scoring maps are fed into the Voting Module (VM) to generate the anomaly scoring map.

C. Mentor of Fusion Module

It is essential to leverage the features shared between the two modalities, especially when reconstructing one modality's features into another. Using the shared information effectively can help the model gather more features for detecting anomalies. We therefore propose a Mentor of Fusion Module (MFM), which uses an MLP framework to compress the dimensionality of the two modal feature maps. This process reduces the dimensionality of the fused modalities to align with the model's requirements. The generated fused feature maps serve as guiding information, since they contain essential shared details common to both modalities, such as contours and shapes. Using the fused feature map as a mentor helps the model reconstruct normal feature maps more accurately while struggling with abnormal ones: the fused model is effective at combining normal multimodal features but has difficulty with abnormal modal features. This enables a clearer distinction between normal and abnormal features. The inputs for the self-supervised learning of the mentor feature $F_{Mtr}$ are the RGB feature map $F_{RGB}$ and the point cloud feature map $F_{XYZ}$. The fusion process $\mathrm{MFM}(F_{RGB}, F_{XYZ}) \to F_{Mtr}$ can be represented as follows:
$$F_{Mtr} = \mathrm{MLP}(\mathrm{MLP}(F_{RGB}) \oplus \mathrm{MLP}(F_{XYZ})). \tag{1}$$
The process involves aligning the incoming bimodal features. This is accomplished by downscaling feature maps of different dimensionality from the two modalities to a uniform dimension, then fusing these maps into a consolidated feature using a Multi-Layer Perceptron (MLP). The model ultimately receives aligned features through the function $\mathrm{MFM}(F_{RGB}, F_{XYZ}) \to F_{Mtr}$. The next step is to enhance the accuracy and detail of the information embedded in the aligned features by applying a contrastive loss.
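A minimal numeric sketch of Eq. (1), with tiny hand-picked weight matrices standing in for the learned MLPs (each "MLP" is reduced to one bias-free linear layer with ReLU, and $\oplus$ is concatenation), followed by a literal toy version of the contrastive normalization used to train the fusion (Eq. (2) in the next subsection); all weights and features here are illustrative, not the paper's trained parameters:

```python
# Sketch of Eq. (1): F_Mtr = MLP(MLP(F_RGB) (+) MLP(F_XYZ)), plus a literal
# toy version of the contrastive normalization of Eq. (2).

def linear(x, w):
    # y_j = sum_i x_i * w[i][j]: a bias-free linear layer on plain lists.
    return [sum(xi * w[i][j] for i, xi in enumerate(x)) for j in range(len(w[0]))]

def mlp(x, w):
    # One linear layer + ReLU stands in for a deeper MLP.
    return [max(0.0, v) for v in linear(x, w)]

w_rgb = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]                # RGB projection, 3 -> 2
w_xyz = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]                # XYZ projection, 3 -> 2
w_fuse = [[0.5, 0.0], [0.0, 0.5], [0.5, 0.0], [0.0, 0.5]]   # fusion, 4 -> 2

f_rgb, f_xyz = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
concat = mlp(f_rgb, w_rgb) + mlp(f_xyz, w_xyz)   # list + is the concatenation (+)
f_mtr = mlp(concat, w_fuse)                      # fused mentor feature F_Mtr

# Eq. (2), taken literally: the (i, j) patch's cross-modal dot product over
# the sum of all N_b * N_p patch dot products (here N_b = 2 samples, N_p = 2
# patches each). A full InfoNCE loss would add exponentials and a temperature.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

patches_rgb = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 1.0], [2.0, 0.0]]]
patches_xyz = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [1.0, 1.0]]]

def l_con(i, j):
    denom = sum(dot(patches_rgb[t][k], patches_xyz[t][k])
                for t in range(len(patches_rgb))
                for k in range(len(patches_rgb[t])))
    return dot(patches_rgb[i][j], patches_xyz[i][j]) / denom
```

The two projection MLPs bring both modalities to a common width before fusion, which is what lets a single fusion MLP consume their concatenation.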
Fig. 3. The pipeline of Mentor3AD. Training Phase: MFM merges RGB and XYZ features into mentor features $F_{Mtr}$. MGM reconstructs from $F_{RGB}$ and $F_{Mtr}$, yielding $\tilde{F}_{XYZ}$, while another MGM is trained analogously, a process guided by the mentor modality. Test Phase: All weights are frozen. $F_{XYZ}$ and $F_{RGB}$ are input into MFM to generate $F_{Mtr}$, which is then used by MGM to reconstruct $\tilde{F}_{RGB}$ and $\tilde{F}_{XYZ}$. These are re-input into MFM to obtain $\tilde{F}_{Mtr}$. Reconstruction differences yield scoring maps, which are processed by the Voting Module (VM) to produce the final score.

To self-supervise the learning of the information shared between the modal feature maps, we use the InfoNCE loss for contrastive learning [14], [35]. This loss function encourages the fusion of RGB modal feature maps with point cloud feature maps, facilitating a self-supervised approach to feature fusion. The loss function can be expressed as follows:
$$\mathcal{L}_{con} = \frac{F^{(i,j)}_{RGB} \cdot F^{(i,j)}_{XYZ}}{\sum_{t=1}^{N_b} \sum_{k=1}^{N_p} F^{(t,k)}_{RGB} \cdot F^{(t,k)}_{XYZ}}, \tag{2}$$
where $N_b$ is the batch size and $N_p$ is the number of nonzero patches. In this context, $i$ is the index of the training sample and $j$ the index of the patch. By optimizing this loss function, we obtain the fused modal feature $F_{Fusion}$, which is subsequently used in the feature reconstruction module to guide the unimodal features during reconstruction. Optimizing the loss extracts the features shared between the RGB and XYZ modal feature maps, self-supervised, in the form of a moderately dimensional fused feature map. This fused feature map plays an essential role in guiding the subsequent feature reconstruction.

D. Mentor of Guidance Module

In the study of multimodal feature map reconstruction, a typical method uses the normal feature map of one modality to reconstruct the normal feature map of another modality [30]. The core idea is to train the model to reconstruct using only normal feature maps, enabling the use of reconstruction error for anomaly detection during inference. However, challenges remain in accurately reconstructing normal features and effectively discriminating anomalies when mapping directly between modalities. We therefore propose the Mentor of Guidance Module (MGM), which introduces a mentor modality, guided by feature fusion, to address these challenges. When the guiding features are normal, the reconstructed feature maps are more accurate; conversely, when the guiding features are abnormal, the reconstructed feature maps display increased abnormality.
This occurs because the mentor modality is less effective at fusing abnormal feature maps; when it attempts to guide the reconstruction of such maps, it amplifies the reconstruction error further. Through this mechanism, introducing the mentor modality not only enhances the model's reconstruction accuracy for normal features but also improves its ability to differentiate between normal and abnormal features. We take the RGB modality $F_{RGB}$ with the fused mentor modality $F_{Mtr}$ as an example to reconstruct $F_{XYZ}$; the result is denoted $\tilde{F}_{XYZ}$. The reconstruction producing $\tilde{F}_{RGB}$ follows the same logic. The process $\mathrm{MGM}(F_{Mtr}, F_{RGB}) \to \tilde{F}_{XYZ}$ can be illustrated as follows:
$$\tilde{F}_{XYZ} = \mathrm{MLP}(\mathrm{MLP}(F_{Mtr}) \oplus F_{RGB}). \tag{3}$$
The mentor modality $F_{Mtr}$ is processed by a single MLP. The resulting features are then combined with $F_{RGB}$, and the combined features are further processed by three separate MLPs. The loss function $\mathcal{L}_{cos}$ is calculated
based on the cosine similarity between the original feature $F_{XYZ}$ and the reconstructed feature $\tilde{F}_{XYZ}$, as illustrated below:
$$\mathcal{L}_{cos} = 1 - \frac{\sum_{i=1}^{n} \tilde{F}_{XYZ,i}\, F_{XYZ,i}}{\sqrt{\sum_{i=1}^{n} \tilde{F}^{2}_{XYZ,i}}\, \sqrt{\sum_{i=1}^{n} F^{2}_{XYZ,i}}}, \tag{4}$$
where $\tilde{F}_{XYZ,i}$ and $F_{XYZ,i}$ denote the $i$-th components of the vectors $\tilde{F}_{XYZ}$ and $F_{XYZ}$, respectively, and $n$ denotes the dimensionality of the feature vector. The loss evaluates the dissimilarity between the two feature vectors.

The MGM enhances the model's ability to differentiate between abnormal and normal states by establishing a triple distinction. First, the model accepts a bimodal feature map from the feature extractor to generate a mentor modality; if the pre-fusion feature map is abnormal, creating a mentor modality further reinforces the distinction between abnormal and normal states. Second, an abnormal feature map is combined with an abnormal mentor modality to generate an additional abnormal modality, which helps to further separate abnormal from normal states. Lastly, the RGB and XYZ feature maps reconstructed by the MGM are fed back into the MFM, which then predicts the mentor modality, denoted $\tilde{F}_{Mtr}$; this step is akin to reconstructing the mentor modality. When anomalous RGB and XYZ feature maps produced by the MGM are input, the already-reconstructed anomalous feature maps are further amplified, effectively widening the gap between normal and abnormal states.

E. Voting Module

Effectively leveraging the reconstruction differences among the three modalities is crucial. CFM [30] shows that multiplication can effectively capture the interactions between different scoring maps, largely due to the significant differences in their magnitudes. However, simple multiplicative methods may struggle to handle multiple anomaly scoring maps and might not provide optimal performance when integrating modalities with varying features.
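The cosine-based quantities used here, the reconstruction loss of Eq. (4), the per-modality anomaly scores derived from it, and their weighted combination in the Voting Module, reduce to a few lines. In this sketch the exponents are arbitrary and the refinement $f$ is an identity stand-in for the learned convolutions:

```python
import math

def cosine_score(f, f_rec):
    # Eq. (4) and the per-modality scores: 1 - cosine similarity between an
    # original feature vector and its reconstruction; ~0 for a faithful
    # reconstruction, 1.0 for orthogonal (unrelated) vectors.
    num = sum(a * b for a, b in zip(f_rec, f))
    den = math.sqrt(sum(a * a for a in f_rec)) * math.sqrt(sum(b * b for b in f))
    return 1.0 - num / den

def voting_score(s_rgb, s_xyz, s_mtr, weights, f=lambda s: s):
    # Eq. (6) at one pixel: product over levels n of
    # f(S_RGB^alpha_n * S_XYZ^beta_n * S_Mtr^gamma_n).
    total = 1.0
    for alpha, beta, gamma in weights:
        total *= f((s_rgb ** alpha) * (s_xyz ** beta) * (s_mtr ** gamma))
    return total

perfect = cosine_score([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])   # identical vectors
orthogonal = cosine_score([1.0, 0.0], [0.0, 1.0])          # unrelated vectors
s_all = voting_score(0.9, 0.8, 0.7, [(1.0, 1.0, 1.0), (0.5, 0.5, 0.5)])
```

In the paper, the per-pixel maps are additionally refined by the convolutional $f$ and passed through a learnable one-class SVM layer; the sketch fixes only the algebra of the score combination.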
Furthermore, M3DM [36] demonstrates that directly using multiple One-Class Support Vector Machines to score each modality requires assigning a score weight to each machine in advance, which creates the challenge of finding optimal parameters. To address this, we propose a Voting Module (VM) to better capture how the different modalities (RGB, XYZ, and Mentor) contribute to anomaly detection. Similar to how we compute the loss, we determine the anomaly score by calculating the cosine similarity between the original and reconstructed features. For instance, the RGB modal score $S_{RGB}$ can be computed as follows:
$$S_{RGB} = 1 - \frac{\sum_{i=1}^{n} \tilde{F}_{RGB,i}\, F_{RGB,i}}{\sqrt{\sum_{i=1}^{n} \tilde{F}^{2}_{RGB,i}}\, \sqrt{\sum_{i=1}^{n} F^{2}_{RGB,i}}}. \tag{5}$$
Using the same approach, we calculate the anomaly scoring maps $S_{RGB}$, $S_{XYZ}$, and $S_{Mtr}$, and then compute $S_{All}$ as follows:
$$S_{All} = \prod_{n=1}^{N} f\left(S_{RGB}^{\alpha_n} \cdot S_{XYZ}^{\beta_n} \cdot S_{Mtr}^{\gamma_n}\right), \tag{6}$$
where $f(\cdot)$ generates more refined scoring maps, providing more accurate per-pixel scores and enhancing the significance of anomalies in the ratings. Here, $S_{RGB}$, $S_{XYZ}$, and $S_{Mtr}$ represent the anomaly scores in the RGB, XYZ, and mentor modalities, respectively, while $\alpha_n$, $\beta_n$, and $\gamma_n$ are weighting exponents for the different evaluation levels, used to adjust the contribution of each modal disparity map to the final score. By combining and multiplying these weighted disparity maps, we derive a composite score
$S$ that reflects the overall evaluation across multiple reconstruction disparities. The function $f$ can be expressed as follows:
$$f = C^{U}\left(C^{L}(S_{Input})\right), \tag{7}$$
where $S_{Input}$ represents the score map to be processed, $C$ represents a convolution, and the superscripts $U$ and $L$ denote convolutions at different depths. We then use a learnable One-Class Support Vector Machine $O_s$ to produce the final anomaly segmentation map $S'_{All}$, which can be formalized as
$$S'_{All} = O_s(S_{All}, \Theta), \tag{8}$$
where $\Theta$ stands for the parameters of $O_s$. The training process of $O_s$ is shown in Algorithm 1; we train $O_s$ on the score maps $S_{All}$ of the training set. The final anomaly map $S'_{All}$ is used to compute the score of each pixel, and the object-level score is calculated as $\mathrm{Max}(S'_{All})$ [25]. This Voting Module allows the model to make more effective use of the reconstruction differences between the three modalities, thereby improving anomaly detection performance.

Algorithm 1 One-Class Support Vector Machine $O_s$ Training
Input: reconstruction difference maps $S_{All}$, OCSVM layer $O_s$, OCSVM loss function $\mathcal{L}_{oc}$ [37].
Output: optimized OCSVM parameters $\Theta$.
1: for $s_{all} \in S_{All}$ do
2:   $\Theta \xleftarrow{\mathrm{optim}} \mathcal{L}_{oc}(O_s(s_{all}); \Theta)$  {optimize parameters of $O_s$}
3: end for

IV. EXPERIMENT

Datasets. We conduct extensive experiments on MVTec 3D-AD [38] and Eyecandies [39]. MVTec 3D-AD is a multimodal anomaly detection dataset containing both RGB and 3D structural information. It includes 4,147 sample pairs from 10 categories, of which 894 are anomalous. Eyecandies
I-AUROC
Method               Publication  Bagel  Cable Gland  Carrot  Cookie  Dowel  Foam   Peach  Potato  Rope   Tire   Mean
Voxel Method (3D+RGB)
VoxelGAN             ICCV22       0.680  0.324        0.565   0.399   0.497  0.482  0.566  0.579   0.601  0.482  0.517
VoxelAE              ICCV22       0.510  0.540        0.384   0.693   0.446  0.632  0.550  0.494   0.721  0.413  0.538
VoxelVM              ICCV22       0.553  0.772        0.484   0.701   0.751  0.578  0.480  0.466   0.689  0.611  0.609
PointCloud Method (3D+RGB)
BTF                  CVPR23       0.918  0.748        0.967   0.883   0.932  0.582  0.896  0.912   0.921  0.886  0.865
AST                  WACV23       0.983  0.873        0.976   0.971   0.932  0.885  0.974  0.981   1.000  0.797  0.937
M3DM                 CVPR24       0.994  0.909        0.972   0.976   0.960  0.942  0.973  0.899   0.972  0.850  0.945
Shape-Guided         CVPR24       0.986  0.894        0.983   0.991   0.976  0.857  0.990  0.965   0.960  0.869  0.947
CFM                  CVPR24       0.994  0.888        0.984   0.993   0.980  0.888  0.941  0.943   0.980  0.953  0.954
CFM-M                CVPR24       0.988  0.875        0.984   0.992   0.997  0.924  0.964  0.949   0.979  0.950  0.960
Mentor3AD (XYZ+RGB)               0.992  0.900        0.982   0.994   0.995  0.900  0.980  0.984   1.000  0.910  0.964
Mentor3AD                         0.996  0.897        0.988   0.995   0.996  0.934  0.985  0.977   1.000  0.943  0.971

AUPRO@30%
Voxel Method (3D+RGB)
VoxelGAN             ICCV22       0.664  0.620        0.766   0.740   0.783  0.332  0.582  0.790   0.633  0.483  0.639
VoxelAE              ICCV22       0.467  0.750        0.808   0.550   0.765  0.473  0.721  0.918   0.019  0.170  0.564
VoxelVM              ICCV22       0.510  0.331        0.413   0.715   0.680  0.279  0.300  0.507   0.611  0.366  0.471
PointCloud Method (3D+RGB)
BTF                  CVPR23       0.976  0.969        0.979   0.973   0.933  0.888  0.975  0.981   0.950  0.971  0.959
AST                  WACV23       0.970  0.947        0.981   0.939   0.913  0.906  0.979  0.982   0.889  0.940  0.944
M3DM                 CVPR24       0.970  0.971        0.979   0.950   0.941  0.932  0.977  0.971   0.971  0.975  0.964
Shape-Guided         CVPR24       0.981  0.973        0.982   0.971   0.962  0.978  0.981  0.983   0.974  0.975  0.976
CFM                  CVPR24       0.979  0.972        0.982   0.945   0.950  0.968  0.980  0.982   0.975  0.981  0.971
CFM-M                CVPR24       0.980  0.966        0.982   0.947   0.959  0.967  0.982  0.983   0.976  0.982  0.972
Mentor3AD (XYZ+RGB)               0.981  0.965        0.920   0.951   0.950  0.978  0.982  0.983   0.981  0.980  0.967
Mentor3AD                         0.981  0.976        0.982   0.958   0.966  0.975  0.983  0.983   0.982  0.989  0.978

AUPRO@1%
PointCloud Method (3D+RGB)
BTF                  CVPR23       0.428  0.365        0.452   0.431   0.370  0.244  0.427  0.470   0.298  0.345  0.383
AST                  WACV23       0.388  0.322        0.470   0.411   0.328  0.275  0.474  0.487   0.360  0.474  0.398
M3DM                 CVPR24       0.414  0.395        0.447   0.318   0.422  0.335  0.444  0.351   0.416  0.398  0.394
CFM                  CVPR24       0.459  0.431        0.485   0.469   0.394  0.413  0.468  0.487   0.464  0.476  0.455
CFM-M                CVPR24       0.480  0.398        0.490   0.467   0.413  0.408  0.481  0.494   0.468  0.488  0.459
Mentor3AD (XYZ+RGB)               0.478  0.402        0.487   0.474   0.396  0.467  0.488  0.495   0.486  0.476  0.465
Mentor3AD                         0.479  0.420        0.485   0.474   0.411  0.464  0.498  0.494   0.484  0.475  0.468

TABLE I: Main results on MVTec 3D-AD. I-AUROC (↑) evaluates the model's ability to detect anomalies at the sample level. AUPRO@30% (↑) and AUPRO@1% (↑) evaluate the model's ability to detect anomalies at the pixel level under general and stringent conditions, respectively. The best and second-best results are highlighted in bold and underline, respectively.

TABLE II: Results on the Eyecandies dataset using only 350 training samples. Our method works better, using less data to capture more complex training structures. The best results are highlighted in bold.
Eyecandies
Method  Candy Cane  Chocolate Cookie  Chocolate Praline  Confetto  Gummy Bear  Hazelnut Truffle  Licorice Sandwich  Lollipop  Marshmallow  Peppermint Candy  Mean
I-AUROC
BTF     0.650       0.682             0.805              0.813     0.713       0.445             0.763              0.772     0.771        0.790             0.720
M3DM    0.637       0.712             0.725              0.830     0.614       0.538             0.749              0.779     0.958        0.829             0.737
CFM     0.661       0.971             0.915              0.939     0.904       0.797             0.850              0.879     0.984        0.877             0.878
Ours    0.688       0.955             0.907              0.952     0.894       0.702             0.925              0.893     0.978        0.896             0.879
P-AUROC
BTF     0.987       0.914             0.917              0.921     0.838       0.817             0.884              0.957     0.897        0.811             0.894
M3DM    0.975       0.962             0.926              0.989     0.889       0.835             0.955              0.943     0.993        0.982             0.945
CFM     0.982       0.987             0.956              0.988     0.964       0.940             0.964              0.977     0.995        0.980             0.973
Ours    0.981       0.988             0.958              0.994     0.966       0.945             0.972              0.977     0.993        0.991             0.977
P-AUPRO
BTF     0.938       0.739             0.700              0.707     0.656       0.470             0.663              0.882     0.719        0.619             0.709
M3DM    0.925       0.825             0.725              0.956     0.659       0.456             0.826              0.704     0.947        0.910             0.793
CFM     0.943       0.894             0.804              0.959     0.855       0.781             0.768              0.896     0.946        0.930             0.878
Ours    0.944       0.843             0.810              0.962     0.840       0.779             0.799              0.906     0.939        0.952             0.877

TABLE III: Few-shot results on MVTec 3D-AD. The best and second-best results are highlighted in bold and underline, respectively.

            I-AUROC                          P-AUROC                          AUPRO@30%                        AUPRO@1%
Method      5-shot  10-shot  50-shot  Full   5-shot  10-shot  50-shot  Full   5-shot  10-shot  50-shot  Full   5-shot  10-shot  50-shot  Full
BTF         0.671   0.695    0.806    0.865  0.980   0.983    0.989    0.992  0.920   0.928    0.947    0.959  0.288   0.308    0.356    0.383
AST         0.680   0.689    0.794    0.937  0.950   0.946    0.974    0.976  0.903   0.835    0.929    0.944  0.158   0.174    0.335    0.398
M3DM        0.822   0.845    0.907    0.945  0.984   0.986    0.989    0.992  0.937   0.943    0.955    0.964  0.330   0.355    0.387    0.394
CFM         0.811   0.845    0.906    0.954  0.986   0.987    0.991    0.993  0.949   0.954    0.965    0.971  0.382   0.398    0.431    0.455
Mentor3AD   0.824   0.866    0.916    0.971  0.987   0.991    0.993    0.995  0.966   0.962    0.971    0.977  0.345   0.425    0.450    0.468

contains 10 categories and 4,147 data pairs, 894 of which
are anomalous. Eyecandies is also an RGB and 3D dataset, containing 10,000 normal data pairs as training samples [39]. Existing methods use different benchmarks, e.g., some use only part of the normal data for training while others use all of it, which may affect comparability [14], [15], [30]. For a fair comparison, the number of training samples for each class is uniformly set to 349, giving a total of around 3,500 training samples; strong results with fewer samples further demonstrate the method's performance. The test samples are 250 normal data pairs and 250 anomalous data pairs. The experiments are divided into three parts: MVTec 3D-AD is used for the comparisons and ablations, and samples from MVTec 3D-AD and Eyecandies for the few-shot experiments.

Fig. 4. Visualization analysis. (a) Visualization of score maps of each modality on MVTec 3D-AD. (b) Visualization of models with complementary capabilities on Eyecandies.

TABLE IV: Results of ablation experiments. "w/o" means the indicated module is not used; XYZ, RGB, and Mtr denote the XYZ, RGB, and mentor score maps, respectively. The best results are highlighted in bold.

Method                     I-AUROC  P-AUROC  P-AUPRO@30%  P-AUPRO@1%
Mentor3AD w/o Vote & Mtr   0.954    0.993    0.971        0.455
Mentor3AD w/o XYZ & Mtr    0.883    0.982    0.973        0.380
Mentor3AD w/o RGB & Mtr    0.906    0.982    0.977        0.397
Mentor3AD w/o RGB & XYZ    0.883    0.982    0.939        0.380
Mentor3AD w/o Mtr          0.964    0.994    0.967        0.465
Mentor3AD w/o Vote         0.967    0.990    0.964        0.450
Mentor3AD                  0.971    0.995    0.978        0.468

Methods. We select mainstream 3D+RGB and straightforward 3D methods, including VoxelGAN, VoxelAE, VoxelVM [38], BTF [13], AST [34], M3DM [14], Shape-Guided [15], CFM, and CFM-M [30].
All code is derived from publicly available sources or published results, and their contributions are gratefully acknowledged. Metrics. The image-level Area Under the Receiver Operating Characteristic Curve (I-AUROC, ↑) is calculated from the global anomaly score to assess image-level anomaly detection performance. For pixel-level anomaly segmentation, the pixel-level Area Under the Receiver Operating Characteristic Curve (P-AUROC, ↑) and the Area Under the Per-Region Overlap (P-AUPRO, ↑) are used. Additionally, AUPRO is examined at thresholds of 0.01 and 0.3, referred to as AUPRO@1% and AUPRO@30%, respectively, to further analyze the efficacy of pixel-level anomaly segmentation [30]. Implementation Details. As in M3DM and CFM, the pre-trained weights of PointMAE are employed for the 3D representation. A point cloud is transformed into 1,024 groups using farthest point sampling (FPS) with KNN [14], [30], [40], [41]; each group consists of 32 points, and features are extracted independently for each group. The resulting features have dimension 1024 × 1152 and are ultimately projected onto the RGB image plane, creating a feature map of size 224 × 224 × 1152 [42]. The frozen DINO ViT-B/8 model plays a crucial role in characterizing the 224 × 224 image [43]. The image is carefully divided into 8 × 8 patches, allowing for the
detailed characterization of each patch. This process produces a feature map with dimensions 28 × 28 × 768. All training was conducted on a server equipped with a single NVIDIA A100-PCIE-40GB and a 64-core Intel Xeon Silver 4314 processor. To ensure consistent speed-comparison criteria, tests were run on a server equipped with an RTX 4090 (24 GB) and a Xeon(R) Platinum 8352V. All model performances are taken from publicly available papers. For the test environment, the CUDA version was V11.3, the code was built on PyTorch 1.10.0+cu113, and the Python version was 3.7.

A. Main Results

Comparison Results. We present the experimental results of our model on MVTec 3D-AD in Table I. Our proposed method demonstrates a significant advantage in anomaly detection and segmentation compared to the previously leading 3D+RGB method. The I-AUROC improves by 1.0%, reaching 97.1%, a notable enhancement in performance. The AUPRO@30% reaches a state-of-the-art 97.8%, comparable to the previous best method, while our method shows better inference efficiency, as discussed in subsequent sections. Additionally, the AUPRO@1% records 46.8%, outperforming the previous method under stringent criteria. Efficiency Analysis. We evaluate our model's memory usage, inference speed, and overall performance, as outlined in Table V. By saving the final fusion results locally during pre-training, similar to the Shape-Guided approach, our model achieves improved efficiency. It also boasts a higher inference rate and lower memory usage than memory-bank methods like M3DM, Shape-Guided, and BTF. Although it is slightly slower and uses more memory than CFM, our model surpasses it on 3DAD metrics, effectively balancing space-time efficiency with performance.

B. Ablation Studies

Analysis of Voting Module. We analyzed weighted combinations of modal score maps, as presented in Table VI.
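The feature-map geometry quoted in the implementation details above can be sanity-checked with pure arithmetic; the names below are illustrative, and only the numbers come from the text:

```python
# Pure-arithmetic check of the quoted shapes: DINO ViT-B/8 on a 224 x 224
# image gives a 28 x 28 grid of 768-dim patch tokens; PointMAE gives 1,024
# groups of 32 points whose 1152-dim features are projected to the image plane.

img_size, patch_size = 224, 8
grid = img_size // patch_size                  # patches per side
rgb_feat_shape = (grid, grid, 768)             # RGB feature map
pc_groups, pts_per_group = 1024, 32
pc_feat_shape = (pc_groups, 1152)              # point-cloud feature matrix
projected_shape = (img_size, img_size, 1152)   # 3D features projected to pixels
```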
8 RGB With GT Shape -Guided OursBagel Foam Dowel Fig. 5. Visualization of discriminative abilities. Previous methods, like Shape- Guided, often yield false positives, while our approach clearly distinguishes anomalies. TABLE V INFERENCE SPEED , MEMORY AND PERFORMANCE ON MVT EC3D-AD. FRAME RATE IN FPS(↑)AND MEMORY IN MB(↓). H IGHER METRICS FOR 3DAD REPRESENT BETTER . THE BEST RESULTS IS HIGHLIGHTED IN BOLD . Method FrameRate Memory I-AUROC P-AUROC AUPRO@30% AUPRO@1% BTF 3.197 381.1 0.865 0.992 0.959 0.383 AST 4.966 463.9 0.937 0.976 0.944 0.398 M3DM 0.514 6526.1 0.945 0.992 0.964 0.394 Shape-Guided 1.513 1105.9 0.947 0.996 0.976 0.456 CFM 21.755 437.9 0.954 0.993 0.971 0.455 Mentor3AD 8.371 1615.3 0.971 0.995 0.978 0.468 TABLE VI IMPACT OF DIFFERENT SCORING MANNERS . OUR VOTING PERFORMS BEST DUE TO THE VARYING SENSITIVITY OF DIFFERENT MODALITIES TO PIXEL -LEVEL VERSUS SAMPLE -LEVEL ANOMALIES . Score Map I-AUROC P-AUROC P-AUPRO@30% P-AUPRO@1% f(XY Z×RGB ) 0.965 0.994 0.973 0.464 f(XY Z×RGB×Mtr) 0.961 0.995 0.977 0.466 f(XY Z×RGB )×f(Mtr) 0.965 0.993 0.970 0.456 f(Mtr×RGB )×f(XY Z ) 0.965 0.993 0.970 0.458 f(Mtr×XY Z )×f(RGB ) 0.965 0.993 0.970 0.458 f(XY Z )×f(RGB )×f(Mtr) 0.967 0.990 0.964 0.450 Voting 0.971 0.995 0.978 0.468 Multiplying the three modality score maps improves pixel- level scoring, while multiplying individual score maps
https://arxiv.org/abs/2505.21420v1
enhances sample-level scoring. Our experiments confirm that the proposed voting strategy, which balances scoring at both the sample and pixel levels, achieves the best results across all 3DAD metrics. This success is attributed to the complementary strengths of the different modalities: RGB detects surface anomalies, XYZ identifies structural anomalies, and the mentor modality Mtr combines both types of information.

Analysis of Each Module. We assessed the contribution of each modality and of the voting strategy by removing them in turn; the results are presented in Table IV. Our findings indicate that some subtle details may be overlooked when only the RGB modality is used, and surface structure may not be effectively captured when only the XYZ modality is available. Using the fused mentor modality alone as complementary information for 3DAD showed similar limitations. When these modalities are not used in combination, the results tend to be suboptimal. While using only the XYZ and RGB modalities yields satisfactory results, it does not represent the best performance. A simple weighting of the XYZ, RGB, and mentor modalities already produces excellent detection results. Furthermore, incorporating the voting strategy leads to a significant improvement, achieving a 97.1% I-AUROC, which is considerably better than the previous best method using just XYZ and RGB. Our combined approach of XYZ, RGB, mentor modalities, and Voting achieves the best results.

C. More Experiments

Few-shot Results on MVTec 3D-AD. The results for training set sizes of 5, 10, 50, and the full dataset, as shown in Table III, demonstrate that our method outperforms previous approaches across most metrics. Notably, it excels in pixel-level segmentation, even with limited samples. This can be attributed to the incorporation of fused modalities, which enhance the differentiation of anomalies compared to earlier methods that primarily focus on common features across modalities.
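As a deliberately simplified illustration of the fusion schemes compared in Table VI, the sketch below treats each f(·) as a plain element-wise product of per-pixel anomaly score maps. The normalization inside f and the actual voting weights are not specified in this excerpt, so the voting step here is a placeholder equal-weight average, not the paper's implementation.

```python
import numpy as np

def fuse(*score_maps):
    """Element-wise product of per-pixel anomaly score maps (one per modality)."""
    out = np.ones_like(score_maps[0])
    for s in score_maps:
        out = out * s
    return out

rng = np.random.default_rng(0)
# Toy per-pixel anomaly maps for the RGB, XYZ (point cloud) and mentor modalities.
rgb, xyz, mtr = (rng.random((8, 8)) for _ in range(3))

# A few of the candidate scoring manners from Table VI.
schemes = {
    "f(XYZ x RGB)":             fuse(xyz, rgb),
    "f(XYZ x RGB x Mtr)":       fuse(xyz, rgb, mtr),
    "f(XYZ x RGB) x f(Mtr)":    fuse(xyz, rgb) * mtr,
    "f(XYZ) x f(RGB) x f(Mtr)": xyz * rgb * mtr,
}

# Placeholder "voting": equal-weight average of the candidate score maps.
voted = np.mean(list(schemes.values()), axis=0)
image_score = voted.max()  # sample-level score = max over the pixel map
```

The point of the sketch is only the distinction the text draws: products sharpen pixel-level maps, while a sample-level score is read off as a maximum over the map.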
The feature-map-based reconstruction also outperforms memory-bank-based methods because the latter struggle to represent the normal distribution with fewer samples, while the former is more efficient. Consequently, our method achieves superior performance in few-shot settings.

Few-shot Results on Eyecandies. Eyecandies is a challenging industrial synthetic dataset that provides 500 samples under various conditions. However, sample counts this large are difficult to obtain in real industry, and there is a lack of uniform measurement criteria for existing models. We chose the first 350 samples as a smaller training set to test the performance of the model, and our method achieves excellent results, with SOTA P-AUROC and I-AUROC reaching 97.7% and 87.9%, respectively, as shown in Table II.

Feature Visualization. The results are shown in Figure 4. In part (a), we display the results of feature visualization for each modality of the MVTec 3D-AD sample, highlighting the differences in feature maps before and after reconstruction. Our model achieved excellent results in anomaly detection. Part (b) illustrates the results on Eyecandies, showcasing effective modal complementarity. Figure 5 shows that our model can better distinguish true and potential anomalies. Our model successfully localizes anomalies and assigns lower anomaly scores to normal pixels.

TABLE VII: Quantitative results from actual industrial parts datasets. Results are expressed as O-AUROC%/P-AUROC%.
Best results are in bold.

Category  BTF        M3DM       Mentor3AD
Duck 1    71.0/72.3  83.3/76.2  98.9/93.4
Duck 2    76.2/60.4  68.3/59.9  93.7/89.6
Duck 3    63.7/74.3  71.0/83.4  82.4/90.2
Means     70.3/69.0  74.2/73.2  91.7/91.1

D. Actual Inspection on Industry Objects

We conducted detection experiments on real industrial products to further evaluate the proposed model's actual performance in real industry. The detection experiments include three levels: 1) Duck 1: anomalies in 3D only, 2) Duck 2: anomalies in RGB only, and 3) Duck 3: 3D and RGB anomalies at the same time. Each class contains two training and 20 test sample pairs, each pair containing a point cloud and an RGB image. The 3D anomalies include bulges and concavities, and the RGB anomalies include painted colours. The scanning setup is shown in Figure 6(a); it includes the object under test, the 3D scanning sensor and the depth scanning sensor. The data used are shown exemplarily in Figure 6(b). To capture the corresponding RGB data, we used a NIKON Z6III with a NIKKOR Z 24-70mm f/4 S lens to shoot in the same pose.

Fig. 6. (a) The point cloud collection device for industry objects. (b) Some abnormal samples from four object classes.

The quantitative results are presented in Table VII, where our model demonstrates excellent detection. Compared to the previous M3DM, our model improves by 17.5% and 7.9% in anomaly detection and localization performance, respectively. Moreover, our model excels in both RGB and 3D, which may be attributed to the mentor modality guiding the two modalities toward cross-modal intermingling.

V. CONCLUSION

This paper presents a novel multi-modality mentor learning framework (Mentor3AD) for detecting anomalies in multimodal 3DAD.
Our method consists of three main modules: a Mentor of Fusion Module (MFM) that combines RGB and 3D features into a single mentor modality, a Mentor of Guidance Module (MGM) that uses the mentor modality to help reconstruct the modalities from each other, and a Voting Module (VM) that combines AD results from different modalities to generate a final anomaly score. Our model obtained SOTA results, indicating the effectiveness of our method.

REFERENCES

[1] W. Li, X. Xu, Y. Gu, B. Zheng, S. Gao, and Y. Wu, “Towards scalable 3d anomaly detection and localization: A benchmark via 3d anomaly synthesis and a self-supervised learning network,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 22207–22216, June 2024.
[2] Y. He, K. Song, Q. Meng, and Y. Yan, “An end-to-end steel surface defect detection approach via fusing multiple hierarchical features,” IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 4, pp. 1493–1504, 2020.
[3] Z. Zhou, J. Wang, Z. Yu, Z. Wang, X. Liu, L. Qiu, and S. Zhang, “Featdae: Introducing features with denoising autoencoder for anomaly detection,” IEEE Transactions on Instrumentation and Measurement, pp. 1–1, 2025.
[4] Y. Lin, Y. Chang, X. Tong, J. Yu, A. Liotta, G. Huang, W. Song, D. Zeng, Z. Wu, Y. Wang, and W.
Zhang, “A survey on rgb, 3d, and multimodal approaches for unsupervised industrial image anomaly detection,” Information Fusion, vol. 121, p. 103139, 2025.
[5] J. Liu, G. Xie, J. Wang, S. Li, C. Wang, F. Zheng, and Y. Jin, “Deep industrial image anomaly detection: A survey,” Machine Intelligence Research, vol. 21, pp. 104–135, Jan. 2024.
[6] J. Ye, W. Zhao, X. Yang, G. Cheng, and K. Huang, “Po3ad: Predicting point offsets toward better 3d point cloud anomaly detection,” 2024.
[7] H. Liang, A. Wang, J. Zhou, X. Jin, C. Gao, and J. Wang, “Examining the source of defects from a mechanical perspective for 3d anomaly detection,” 2025.
[8] J. Liu, G. Xie, X. Li, J. Wang, Y. Liu, C. Wang, F. Zheng, et al., “Real3d-ad: A dataset of point cloud anomaly detection,” in Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023.
[9] H. Zhu, G. Xie, C. Hou, T. Dai, C. Gao, J. Wang, and L. Shen, “Towards high-resolution 3d anomaly detection via group-level feature contrastive learning,” in Proceedings of the 32nd ACM International Conference on Multimedia, MM ’24, pp. 4680–4689, ACM, Oct. 2024.
[10] H. Liang, G. Xie, C. Hou, B. Wang, C. Gao, and J. Wang, “Look inside for more: Internal spatial modality perception for 3d anomaly detection,” 2025.
[11] Y.-Q. Cheng, W.-L. Li, C. Jiang, D.-F. Wang, H.-W. Xing, and W. Xu, “Mvgr: Mean-variance minimization global registration method for multiview point cloud in robot inspection,” IEEE Transactions on Instrumentation and Measurement, vol. 73, pp. 1–15, 2024.
[12] Z. Zhou, L. Wang, N. Fang, Z. Wang, L. Qiu, and S. Zhang, “R3d-ad: Reconstruction via diffusion for 3d anomaly detection,” in Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XXXVI, (Berlin, Heidelberg), pp. 91–107, Springer-Verlag, 2024.
[13] E. Horwitz and Y.
Hoshen, “Back to the feature: classical 3d features are (almost) all you need for 3d anomaly detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2968–2977, 2023.
[14] Y. Wang, J. Peng, J. Zhang, R. Yi, Y. Wang, and C. Wang, “Multimodal industrial anomaly detection via hybrid fusion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8032–8041, 2023.
[15] Y.-M. Chu, C. Liu, T.-I. Hsieh, H.-T. Chen, and T.-L. Liu, “Shape-guided dual-memory learning for 3d anomaly detection,” in Proceedings of the 40th International Conference on Machine Learning (ICML), pp. 6185–6194, 2023.
[16] X. Xu, Y. Wang, Y. Huang, J. Liu, X. Lei, G. Xie, G. Jiang, and Z. Lu, “A survey on industrial anomalies synthesis,” 2025.
[17] J. Wang, J. Cheng, C. Gao, J. Zhou, and L. Shen, “Enhanced fabric defect detection with feature contrast interference suppression,” IEEE Transactions on Instrumentation and Measurement, vol. 74, pp. 1–12, 2025.
[18] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2015.
[19] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X.
Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” 2021.
[20] V. Zavrtanik, M. Kristan, and D. Skočaj, “Draem – a discriminatively trained reconstruction embedding for surface anomaly detection,” 2021.
[21] P. Bergmann, M. Fauser, D. Sattlegger, and C. Steger, “Uninformed students: Student-teacher anomaly detection with discriminative latent embeddings,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, June 2020.
[22] C. Gao, X. Chen, J. Zhou, J. Wang, and L. Shen, “Open-set fabric defect detection with defect generation and transfer,” IEEE Transactions on Instrumentation and Measurement, vol. 74, pp. 1–13, 2025.
[23] M. Rudolph, T. Wehrbein, B. Rosenhahn, and B. Wandt, “Fully convolutional cross-scale-flows for image-based defect detection,” in Winter Conference on Applications of Computer Vision (WACV), Jan. 2022.
[24] D. Gudovskiy, S. Ishizaka, and K. Kozuka, “CFLOW-AD: Real-time unsupervised anomaly detection with localization via conditional normalizing flows,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 98–107, January 2022.
[25] K. Roth, L. Pemula, J. Zepeda, B. Schölkopf, T. Brox, and P. Gehler, “Towards total recall in industrial anomaly detection,” 2021.
[26] Y. Cao, X. Xu, and W. Shen, “Complementary pseudo multimodal feature for point cloud anomaly detection,” Pattern Recognition, vol. 156, p. 110761, 2024.
[27] A. Bhunia, C. Li, and H. Bilen, “Looking 3d: Anomaly detection with 2d-3d alignment,” 2024.
[28] M. Kruse, M. Rudolph, D. Woiwode, and B. Rosenhahn, “Splatpose & detect: Pose-agnostic 3d anomaly detection,” June 2024.
[29] Y. Liu, Y. S. Hu, Y. Chen, and J. Zelek, “Splatpose+: Real-time image-based pose-agnostic 3d anomaly detection,” 2024.
[30] A. Costanzino, P. Zama Ramirez, G.
Lisanti, and L. Di Stefano, “Multimodal industrial anomaly detection by crossmodal feature mapping,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
[31] Q. Zhou, J. Yan, S. He, W. Meng, and J. Chen, “Pointad: Comprehending 3d anomalies from points and pixels for zero-shot 3d anomaly detection,” in Advances in Neural Information Processing Systems (A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, eds.), vol. 37, pp. 84866–84896, Curran Associates, Inc., 2024.
[32] Y. Cheng, Y. Cao, G. Xie, Z. Lu, and W. Shen, “Towards zero-shot point cloud anomaly detection: A multi-view projection framework,” 2024.
[33] Z. Zuo, J. Dong, Y. Wu, Y. Qu, and Z. Wu, “Clip3d-ad: Extending clip for 3d few-shot anomaly detection with multi-view images generation,” 2024.
[34] M. Rudolph, T. Wehrbein, B. Rosenhahn, and B. Wandt, “Asymmetric student-teacher networks for industrial anomaly detection,” Jan. 2023.
[35] A. van den Oord, Y. Li, and O. Vinyals, “Representation learning with contrastive predictive coding,” 2019.
[36] C. Wang, H. Zhu, J. Peng, Y. Wang, R. Yi, Y. Wu, L. Ma, and J. Zhang, “M3dm-nr: Rgb-3d noisy-resistant industrial anomaly detection via multimodal denoising,” 2024.
[37] B. Sch
ölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson, “Estimating the support of a high-dimensional distribution,” Neural Computation, vol. 13, no. 7, pp. 1443–1471, 2001.
[38] P. Bergmann, X. Jin, D. Sattlegger, and C. Steger, “The mvtec 3d-ad dataset for unsupervised 3d anomaly detection and localization,” in Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, SCITEPRESS - Science and Technology Publications, 2022.
[39] L. Bonfiglioli, M. Toschi, D. Silvestri, N. Fioraio, and D. De Gregorio, “The eyecandies dataset for unsupervised multimodal anomaly detection and localization,” in Proceedings of the 16th Asian Conference on Computer Vision (ACCV), 2022.
[40] Y. Pang, W. Wang, F. E. Tay, W. Liu, Y. Tian, and L. Yuan, “Masked autoencoders for point cloud self-supervised learning,” in Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part II, pp. 604–621, Springer, 2022.
[41] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” 2017.
[42] H. Zhao, L. Jiang, J. Jia, P. H. Torr, and V. Koltun, “Point transformer,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 16259–16268, 2021.
[43] M. Caron, H. Touvron, I. Misra, H. Jégou, J. Mairal, P. Bojanowski, and A. Joulin, “Emerging properties in self-supervised vision transformers,” CoRR, vol. abs/2104.14294, 2021.
arXiv:2505.21426v1 [cs.AI] 27 May 2025

Learning Individual Behavior in Agent-Based Models with Graph Diffusion Networks

Francesco Cozzi (Sapienza University, Rome, Italy; CENTAI, Turin, Italy) francesco.cozzi@centai.eu
Marco Pangallo (CENTAI, Turin, Italy) marco.pangallo@centai.eu
Alan Perotti (CENTAI, Turin, Italy) alan.perotti@centai.eu
André Panisson (CENTAI, Turin, Italy) panisson@centai.eu
Corrado Monti (CENTAI, Turin, Italy) corrado.monti@centai.eu

Abstract

Agent-Based Models (ABMs) are powerful tools for studying emergent properties in complex systems. In ABMs, agent behaviors are governed by local interactions and stochastic rules. However, these rules are, in general, non-differentiable, limiting the use of gradient-based methods for optimization, and thus integration with real-world data. We propose a novel framework to learn a differentiable surrogate of any ABM by observing its generated data. Our method combines diffusion models to capture behavioral stochasticity and graph neural networks to model agent interactions. Distinct from prior surrogate approaches, our method introduces a fundamental shift: rather than approximating system-level outputs, it models individual agent behavior directly, preserving the decentralized, bottom-up dynamics that define ABMs. We validate our approach on two ABMs (Schelling’s segregation model and a Predator-Prey ecosystem), showing that it replicates individual-level patterns and accurately forecasts emergent dynamics beyond training. Our results demonstrate the potential of combining diffusion models and graph learning for data-driven ABM simulation.

1 Introduction

Agent-Based Models (ABMs) are computational frameworks in which autonomous “agents” interact with each other and their environment, leading to emergent collective behavior [43].
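To make "local interactions and stochastic rules" concrete, here is a minimal Schelling-style toy ABM (our own illustration, not the paper's implementation): each agent's update depends only on its grid neighbors, and unhappy agents relocate to a random empty cell, a discrete and hence non-differentiable rule. The grid size, tolerance value, and relocation scheme are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def schelling_step(grid, tolerance=0.3):
    """One step of a toy Schelling model.
    grid: 2D int array, 0 = empty, 1/2 = agent colors.
    tolerance: minimum fraction of same-color neighbors an agent accepts."""
    g = grid.copy()
    n, m = g.shape
    unhappy = []
    for i in range(n):
        for j in range(m):
            if g[i, j] == 0:
                continue
            # Moore neighborhood on a torus (local interaction structure).
            neigh = [g[(i + di) % n, (j + dj) % m]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]
            occupied = [v for v in neigh if v != 0]
            if occupied and sum(v == g[i, j] for v in occupied) / len(occupied) < tolerance:
                unhappy.append((i, j))
    for (i, j) in unhappy:  # stochastic rule: movers pick a random empty cell
        empties = list(zip(*np.where(g == 0)))
        if empties:
            k = empties[rng.integers(len(empties))]
            g[k], g[i, j] = g[i, j], 0
    return g

grid = rng.integers(0, 3, size=(10, 10))  # random initial state Z_0
next_grid = schelling_step(grid)
```

The random relocation makes the transition a sample from a distribution rather than a function, which is exactly the property that blocks straightforward gradient-based fitting.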
ABMs are typically characterized by: (i) a well-defined network of interactions, where the state of each agent is influenced by the states of a specific set of other agents, usually from the previous time step; (ii) stochasticity, meaning that agents’ decisions incorporate a degree of randomness, producing probability distributions over multiple runs that capture real-world uncertainty and variation. ABMs have proven to be a powerful tool for developing and refining theoretical understanding, particularly in identifying minimal sets of micro-level rules that generate realistic macro-level outcomes [35]. In this sense, they have been instrumental in modeling a diverse range of phenomena [6], including structure formation in biological systems, pedestrian traffic, urban aggregation, and opinion dynamics. More recently, ABMs have demonstrated their value as forecasting tools [33], such as in predicting the economic impacts of the COVID-19 pandemic [31]. However, this progress is occurring despite the absence of principled methods to systematically align ABMs with real-world data. While various approaches have been proposed for calibrating macro-level parameters of ABMs [32], there are still no established methods for tuning the micro-level behaviors

Preprint. Under review.

Figure 1: Overview of the training and data generation pipeline for differentiable surrogates of Agent-Based Models. The top-left panel illustrates the training process: we run simulations using the original ABM, and use the resulting trajectories to train the differentiable surrogate. The top-right panel visualizes the structure of the ABM-generated data using the Predator-Prey model as an example: a state at time step t−1 gives rise to multiple possible states at time t, one of which is chosen to generate further possible states at t+1. Colored cells highlight the behavior of a specific “prey” agent — green for “move,” red for
“die,” and pink for “reproduce.” The bottom panel shows the data generation phase: given a new observed state, the trained surrogate can simulate plausible future states, effectively mimicking the original ABM’s generative behavior.

of individual agents to data. One potential approach is to manually construct a probabilistic model that replicates the ABM and then use its likelihood function to estimate individual state variables [26]. However, this method requires the manual development of an ad-hoc probabilistic framework that reproduces the original ABM. Thus, what is currently missing is a fully automated method for deriving a learnable, differentiable model directly from an ABM. In this work, we propose a novel approach to address this challenge: combining a graph neural network with a diffusion model to learn a differentiable surrogate of an ABM from ABM-generated data. We refer to this method as a Graph Diffusion Network (GDN). In this framework, a graph neural network captures the interactions that govern the evolution of each agent’s state in response to other agents, while the diffusion model learns the distribution of possible state transitions, conditioned on these interactions. A central aspect of our approach is its explicit modeling of individual agent behavior. Rather than treating the system as a whole, we focus on how each agent acts as an independent entity, while also incorporating the influence of other agents on its decisions. This approach ensures that the emergent dynamics remain faithful to the decentralized nature of ABMs. Our approach draws inspiration from previous work on using neural network models to emulate deterministic cellular automata [15]. However, we extend this idea to the broader domain of ABMs by introducing a crucial component: stochasticity.
By incorporating stochasticity, our architecture can learn directly from ABM-generated data traces, making it adaptable to a wide variety of agent-based models across diverse real-world applications. Furthermore, since our method is trained on data traces, it can seamlessly integrate empirical observations alongside simulated data, thus being potentially applicable to real-world scenarios. In this sense, our work represents a first step toward developing a comprehensive methodology for creating easy-to-use, learnable ABMs.

2 Background

From a general perspective, an ABM can be represented as a stochastic process $Z_t \sim P_\Theta(Z_t \mid Z_{\tau<t})$, where $Z_t$ denotes the state variables at time $t$, $\Theta$ is a set of parameters, and $P$ is a probability distribution implicitly defined by the model structure and parameters. The index $t$ represents discrete time. Typically, $\Theta$ consists of a small number of parameters, remains fixed in dimension, is interpretable by domain experts, and serves as the model’s primary control mechanism. Conversely, each element in $Z_t$ captures an agent’s state, leading to a high-dimensional state space. To illustrate this structure, we consider two ABMs used throughout the paper. The first is the well-known model by Schelling [37], where $Z_t$ captures agents’ positions and colors, and $\Theta$ indicates the preference for same-color neighbors. Even with some tolerance for neighbors of a different color, agents often form segregated clusters [40]. This clear mismatch between individual preferences and aggregate outcomes is a classic example of emergence. The second model is a predator-prey model [39, 43],
describing the ecological dynamics between two interacting species, with one acting as predator and the other as prey, similarly to the Lotka-Volterra equations. In this ABM, $Z_t$ includes agent position and type (prey or predator), while $\Theta$ governs the probabilities to move, reproduce or die. This model replicates the cyclical predator-prey population dynamics typical of Lotka-Volterra systems, while also capturing complex spatial patterns reminiscent of spatial evolutionary games [28]. ABMs have traditionally been powerful for theory generation, but in recent years, they have become increasingly data-driven [30]. To align ABM output with empirical data, most efforts focus on calibrating parameters $\Theta$ so that model-generated summary statistics match observed ones [32, 34]. Less attention, however, has been paid to estimating agent states $Z_t$, which is key for matching time series in addition to summary statistics. Some researchers use data assimilation methods like particle filters [24] or ensemble Kalman filters [29] to infer $Z_t$. A more principled alternative is to make ABMs differentiable, enabling the maximization of a likelihood function via gradient descent and automatic differentiation [25, 26]. While differentiability is straightforward for simple stochastic behaviors, such as those governed by Bernoulli trials [3], it becomes far more challenging for complex behaviors like those observed in the Schelling and predator-prey models. To address this and other challenges in ABMs, researchers have increasingly turned to more tractable surrogates, also known as meta-models or emulators [19, 11, 13]. Surrogate models typically learn directly the mapping from parameters $\Theta$ to static summary statistics, disregarding individual behavior and model dynamics. For instance, a surrogate in Lamperti et al. [22] maps $\Theta$ to the mean growth rate of the economy. More recent research has also explored the emulation of model dynamics. Grattarola et al.
[15] use Graph Neural Networks to approximate cellular automata, which can be seen as a special case of ABMs with deterministic interaction rules. Dyer et al. [10] propose Ordinary Differential Equation emulators to construct interventionally consistent surrogates, ensuring that micro-state interventions produce results aligned with macro-state interventions. Casert et al. [5] employ Transformers to model the transition of physical systems from one configuration to another, effectively capturing the evolution of the distribution of micro-states over time, but modeling in terms of configuration transition rates rather than reproducing individual agent behavior. In contrast to these approaches, our work is the first to jointly emulate individual and stochastic interacting agents. This is particularly important, since ABMs are inherently stochastic and rely on individual-level interactions to produce emergent aggregate outcomes. Moreover, since our surrogate is differentiable by design, it paves the way for methods that estimate both individual-level parameters and state variables. To achieve this goal, our framework relies on a novel combination of graph neural networks and diffusion models. Diffusion models [17] were first introduced in the context of image generation, where they demonstrated impressive generation capabilities [7], and were then applied to other domains [20]. A number of works addressed graph data [23], for example in molecule modeling [18] and protein structure generation [2]. However, these works focus on the
generation of graphs, while our architecture learns to generate random samples that are conditioned on information found on a graph. To the best of our knowledge, our work is the first application of this generative framework to individual behavior modeling in simulation systems, such as ABMs.

3 Methods

Denoting the set of agents by $A$, let each agent $i \in A$ at discrete time $t$ be described by a state vector $Z^{(i)}_t$, which may include both continuous and categorical features. Given the ABM parameters $\Theta$, the update rule of $Z^{(i)}_t$ follows a stochastic transition process $P_\Theta$ given by

$$Z^{(i)}_{t+1} \sim P_\Theta\big(Z^{(i)}_{t+1} \mid Z^{(i)}_t, \{Z^{(j)}_t\}_{j \in N^{(i)}_t}\big), \qquad (1)$$

where $N^{(i)}_t$ is the set of agents interacting with agent $i$ at time $t$, inducing a (time-varying) interaction graph $G_t = (A, E_t)$ that we assume to be known. This formulation focuses on individual agents, capturing not the dynamics of the entire system, but the evolution of each agent over time. In this way, it makes the two core ingredients of ABMs explicit: (i) relational structure via local neighborhoods $N^{(i)}_t$; (ii) stochasticity in the choice of next states. To effectively model these components in the same individual-level view, we leverage respectively (i) message-passing GNNs [12], which model the relationship between the evolution of an agent’s state and the state of its neighbors on the graph; (ii) conditional diffusion models [44], generative architectures well-suited to learning complex, multimodal distributions, allowing us to capture the intrinsic stochasticity of agent behaviour. Our proposed method, dubbed Graph Diffusion Network (GDN), combines these two components into a single architecture. Together, these components let us learn both how any agent state is affected by its neighbors on the graph, and the inherent randomness driving agent dynamics, yielding a surrogate that can both emulate the original ABM and be differentiated.

Overview.
In order to learn the distribution $P_\Theta$, the training phase requires observations of different outcomes given the same starting conditions. To do so, in our framework we use the original ABM to generate a data set as a ramification of possible states, namely $\big(Z^{(i)}_t, \{Z^{(j)}_t\}_{j \in N^{(i)}_t}\big) \longrightarrow Z^{(i)}_{t+1}$ (see Figure 1). Our Graph Diffusion Network then approximates the stochastic kernel $P_\Theta$ by integrating a Message-Passing GNN with a Conditional Diffusion Model, of learnable parameters $\omega$ and $\phi$ respectively. The GNN aggregates each agent’s state $Z^{(i)}_t$ and its neighbours’ states $\{Z^{(j)}_t\}_{j \in N^{(i)}_t}$ via permutation-invariant message and readout functions to produce an interaction embedding $g^{(i)}_t$. This embedding acts as a compact representation of the information coming from $i$’s neighbors at $t$, affecting the distribution of possible states of agent $i$ at time $t+1$. As such, it is passed to the diffusion model: conditioned on $Z^{(i)}_t$ and $g^{(i)}_t$, the diffusion model learns to transform a sample of Gaussian noise into a possible instance of the next state $Z^{(i)}_{t+1}$. By minimizing the standard denoising loss over all observed transitions, this hybrid architecture captures both the graph-structured interactions and the inherent stochasticity of agent dynamics. The trained model $\text{GDN}_{\phi,\omega}$ is therefore able to generate, given a graph $G_t$ of interacting agents and the state of each one $Z^{(i)}_t$, a sequence of possible next states
$Z^{(i)}_{t+1}$. The consecutive application of $\text{GDN}_{\phi,\omega}$ allows us to reproduce the behavior of the original model. We now describe in detail each of these components.

Message-passing GNN. The GNN operates on the provided interaction graph $G_t = (A, E_t)$, which we assume to be known or computable from $Z_t$ (e.g., in the Schelling model, the positions of the agents determine who interacts with whom). For each agent $i$, the GNN aggregates its state $Z^{(i)}_t$ together with each neighbor’s state $Z^{(j)}_t$ via a permutation-invariant operator $\bigoplus$, and then feeds the concatenated result through an MLP $f_\omega$ [12], that is

$$g^{(i)}_t = f_\omega\Big(Z^{(i)}_t, \bigoplus_{j \in N^{(i)}_t} \big(Z^{(i)}_t, Z^{(j)}_t\big)\Big).$$

The resulting vector $g^{(i)}_t$ captures how $i$’s local neighborhood influences its next-state distribution.

Conditional diffusion model. The diffusion model then learns the distribution over future states given this output from the graph and the state of a given agent. Diffusion models do so by reversing a fixed Gaussian noising process [17]. The obtained denoising process, indexed by $\tau \in \{\tau_{\max}, \ldots, 0\}$, starts from a sample of Gaussian noise $x_{\tau_{\max}} \sim \mathcal{N}(0, I)$ and, in a sequence of denoising diffusion steps, transforms it into a possible next state $x_0 \approx Z^{(i)}_{t+1}$. In this setting, we denote the general latent $x_\tau$ as $\tilde{Z}^{(i)}_{t+1}(\tau)$. Each step of this process receives as input (i) the agent’s current state $Z^{(i)}_t$, (ii) its interaction embedding $g^{(i)}_t$, and (iii) a sinusoidal positional embedding of $\tau$ [41]. These inputs are first transformed by MLPs to form the condition vector $c^{(i)}_t$. Then, a feed-forward network $\phi$ is trained to predict the noise residual $\epsilon_\phi$, i.e., the change to apply to the input $\tilde{Z}^{(i)}_{t+1}(\tau)$ to continue the denoising process.

Ramification data set. Given these two components, our framework uses the original ABM to produce a ramification data set (see Figure 1). Such a data set follows one main branch that specifies the evolution of the ABM, plus multiple alternative stochastic evolutions of each time step from time $t$ to time $t+1$.
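The two components described above, the permutation-invariant interaction embedding and the conditional denoising objective, can be sketched end to end as follows. This is an illustrative skeleton under stated assumptions (tiny dimensions, a single random-weight layer standing in for each MLP, a standard linear noise schedule, and an untrained zero "noise predictor"), not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 16                     # state dim and embedding dim (illustrative)
W = rng.normal(size=(3 * d, h))  # one-layer stand-in for the MLP f_omega

def interaction_embedding(Z, neighbors, i):
    """g_i = f_omega(z_i, sum_j (z_i, z_j)); the sum makes it permutation-invariant."""
    z_i = Z[i]
    pairs = [np.concatenate([z_i, Z[j]]) for j in neighbors[i]]
    agg = np.sum(pairs, axis=0) if pairs else np.zeros(2 * d)
    return np.tanh(np.concatenate([z_i, agg]) @ W)

# Forward noising: x_tau = sqrt(abar_tau) * x0 + sqrt(1 - abar_tau) * eps.
tau_max = 100
betas = np.linspace(1e-4, 2e-2, tau_max)  # assumed linear schedule
alpha_bar = np.cumprod(1.0 - betas)

def denoising_loss(x0, cond):
    tau = rng.integers(1, tau_max)
    eps = rng.normal(size=x0.shape)
    x_tau = np.sqrt(alpha_bar[tau]) * x0 + np.sqrt(1 - alpha_bar[tau]) * eps
    eps_pred = np.zeros_like(x_tau)  # untrained placeholder for eps_phi(x_tau, cond)
    return np.mean((eps - eps_pred) ** 2)

Z = rng.normal(size=(5, d))                  # states Z_t of 5 agents
neighbors = {0: [1, 2], 1: [0], 2: [0], 3: [], 4: [3]}
g0 = interaction_embedding(Z, neighbors, 0)  # condition for agent 0
loss = denoising_loss(x0=Z[0], cond=np.concatenate([Z[0], g0]))
```

Shuffling the neighbor list of an agent leaves its embedding unchanged, which is the permutation-invariance property required of the aggregation operator.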
This method makes it possible to expose the model to multiple stochastic successors from an identical conditioning context, while avoiding exponential growth in the number of histories. Starting from an initial configuration $Z_0 = \big(Z^{(1)}_0, \ldots, Z^{(n)}_0\big)$, we recursively simulate $R+1$ child configurations at each time step $t$, yielding $\{Z_{t+1}[r]\}_{r=0,\ldots,R}$. We designate the branch $r=0$ as the main branch $\{Z_t[0]\}_{t=0,\ldots,T-1}$, from which we extract the conditioning tuples $\big(Z^{(i)}_t, \{Z^{(j)}_t\}_{j \in N^{(i)}_t}\big)$ for all agents $i$. The remaining $R$ sibling branches at each $t$ supply the target next states $Z^{(i)}_{t+1}$, ensuring that each context yields multiple outcomes.

Algorithm 1: Training Procedure
1:  repeat
2:    $t \sim \text{Uniform}(0, \ldots, T-1)$
3:    $Z^{(i)}_t, \{Z^{(j)}_t\}_{j \in N^{(i)}_t} \leftarrow Z_t[0]$
4:    $g^{(i)}_t = f_\omega\big(Z^{(i)}_t, \bigoplus_{j \in N^{(i)}_t}(Z^{(i)}_t, Z^{(j)}_t)\big)$
5:    $\tau \sim \text{Uniform}(1, \ldots, \tau_{\max})$
6:    $\tau_{\text{emb}} = \text{SinusoidalPositionEmbedding}(\tau)$
7:    $c^{(i)}_t = \text{MLP}(Z^{(i)}_t) + \text{MLP}(g^{(i)}_t) + \text{MLP}(\tau_{\text{emb}})$
8:    $r \sim \text{Uniform}(1, \ldots, R)$
9:    $Z^{(i)}_{t+1} \leftarrow Z_{t+1}[r]$
10:   $\epsilon \sim \mathcal{N}(0, I)$
11:   Optimizer step over all $i \in A$:
12:     $\nabla_{\phi,\omega} \,\|\epsilon - \epsilon_\phi(\tilde{Z}^{(i)}_{t+1}(\tau), c^{(i)}_t)\|^2$
13: until convergence

Learning procedure. Our framework uses these data sets to train the Graph Diffusion Network. It minimizes the expected denoising loss over the outcomes
https://arxiv.org/abs/2505.21426v1
observed in the ramification data (see Algorithm 1). At each training iteration, it uniformly samples a time index $t$ and extracts the conditioning pair $\big(Z^{(i)}_t, \{Z^{(j)}_t\}\big)$ from the main branch $Z_t[0]$. We compute the interaction embedding $g^{(i)}_t$ via Equation (3), then draw a diffusion step $\tau$ to form the condition vector $c^{(i)}_t$, and uniformly select one of the $R$ next-state realizations to obtain the target $Z^{(i)}_{t+1}$. Finally, we minimize the denoising loss in Equation (5) by backpropagating through both the diffusion model and the GNN. More details about the architecture can be found in Supplementary Section A.

4 Experiments

In this section, we present experiments that assess our framework's ability to learn micro-level agent behaviors and faithfully reproduce emergent system-level dynamics. We evaluate our Graph Diffusion Network on two canonical agent-based models, Schelling's segregation model and the Predator-Prey ecosystem, presented in Section 2 and detailed in Supplementary Section B. We test both its micro-level and macro-level fidelity. At the micro level, we measure how well the surrogate reproduces the conditional next-state distribution of each agent under identical context on an out-of-training ramification data set. At the macro level, we assess whether the surrogate, once trained on the first $T_{\text{train}} = 10$ timesteps, can accurately reproduce the subsequent $T_{\text{test}} = 25$ timesteps of aggregate summary statistics. Because no existing method directly accepts graph-structured agent states and outputs per-agent state distributions, we compare against an ablation variant in which the GNN embedding is replaced by a flat concatenation of all agent states (see Section 4.1). Both models are trained on the same ramified data and evaluated under identical protocols. In the remainder of this section, we first describe the experimental setup, including dataset generation, model variants, and evaluation metrics.
We then present a qualitative analysis of emergent patterns, followed by a comprehensive quantitative comparison. All implementation and reproducibility details are provided in the Supplementary Materials. Full code to reproduce our experiments is available at the following link: https://github.com/fracozzi/ABM-Graph-Diffusion-Network/.

Figure 2: Evolution of the position of black and red agents in the Schelling model, for three simulation runs, one for each of the considered tolerance thresholds $\xi_1 = 0.625$, $\xi_2 = 0.75$, $\xi_3 = 0.875$. We compare the ground truth (top row) with our surrogate (middle row) and the ablation (bottom row), showing for each three time steps, $t = 0$ (initial conditions, same for each column but kept for clarity), $t = 15$, and $t = 30$.

4.1 Experimental design

Ablation. Our core hypothesis is that both relational structure and stochastic modeling are crucial for accurate ABM surrogates. We therefore consider two possible ablations. In the first, we remove the message-passing GNN and replace the interaction graph with a flat concatenation of all agents' state vectors; this isolates the impact of neglecting agent interactions. The second would drop the diffusion component entirely, yielding a purely deterministic model incapable of capturing any randomness; however, since such a model cannot produce a stochastic output, it would offer little insight. Thus, we focus on the GNN-removed variant to justify modeling the interaction graph. Agent-based
models. We evaluate our approach on the two ABMs described in Section 2 as case studies. The first is the Schelling segregation model, in which $n$ agents occupy cells on a two-dimensional grid. Each agent has a fixed binary "color" and a position on the grid. At each timestep, an agent is considered happy if the proportion of its (up to eight) immediate neighbors sharing its color exceeds a tolerance threshold $\xi$; otherwise, it is unhappy and relocates to a randomly selected empty cell. Thus, the interaction graph $G_t$ links each agent to its neighbors at time $t$. We adopt the standard NetLogo implementation of this model [42]. The second is a predator-prey ecosystem model, where agents belong to one of two species (predator or prey), inhabit grid cells, and cycle through life phases: Unborn, Alive, Pregnant, and Dead. At each timestep, an Alive agent may move to a neighboring cell, reproduce (becoming Pregnant), or die, with probabilities specified by a parameter matrix $\Psi$ and conditioned on the local presence of predators or prey [39, 43]. Pregnant agents revert to Alive after giving birth; Unborn agents become Alive if their parent is Pregnant; and Dead agents remain inactive. Here, $G_t$ links Alive neighboring agents, with specific rules for Pregnant and Unborn agents. See Supplementary Section B for more details. In both ABMs, each agent's full state at time $t$ comprises its position, type (color or species), and, for the predator-prey ABM, its life phase. Together, these two models span both simple relocation dynamics and richer birth-death interactions, providing diverse testbeds for our surrogate.

Micro evaluation metrics. To quantify how faithfully our surrogate captures individual agent behavior, we compare its predicted conditional next-state distributions against the ABM's true stochastic transitions using the Earth Mover's Distance (EMD) [36].
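As a concrete illustration, here is a minimal NumPy sketch of the EMD between two empirical next-state distributions on an ordered discrete support (helper names are ours; on a 1-D support the EMD reduces to the L1 distance between the CDFs, while for truly unordered categories the ground metric would be a further design choice):

```python
import numpy as np

def empirical_distribution(samples, n_states):
    """Empirical distribution of a categorical next state, e.g. estimated
    from the R sibling branches of one ramification step."""
    counts = np.bincount(np.asarray(samples), minlength=n_states)
    return counts / counts.sum()

def emd_1d(p, q):
    """EMD between two distributions on the ordered support {0, ..., K-1},
    computed as the L1 distance between their cumulative distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.abs(np.cumsum(p - q)).sum())
```

In the predator-prey case, `p` and `q` would be the ground-truth and surrogate distributions over the four life phases for one agent and timestep; the reported score is then the mean of `emd_1d` over agents and timesteps.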
We extend the ramification dataset beyond the training horizon and generate corresponding datasets for both the surrogate and the ablation models. The EMD is then computed as the mean value across timesteps and individual agents. In the Schelling ABM, the EMD compares the distributions of agent positions. This directly measures the surrogate's ability to relocate unhappy agents correctly, while keeping happy agents stationary. In the predator-prey model, we treat the agent's categorical life phase as the random variable and compute the EMD over its four-state distribution. This metric captures both deterministic transitions (e.g., Unborn → Alive, Dead → Dead) and stochastic, interaction-driven transitions (e.g., Alive → Dead, Alive → Pregnant).

Figure 3: Forecasting macro-level summary statistics (here, the number of alive preys and predators over time), starting from the last condition seen in training, for 100 independent simulation runs, under configuration $\Psi_1$ (oscillations for both predators and preys, top) and $\Psi_4$ (oscillations only for predators, bottom). Left: original ABM simulations. Center: surrogate. Right: ablation. The dashed vertical line indicates the end of the training phase for the surrogate and ablation
models.

Macro evaluation metrics. Next, we test whether agent-level predictions translate into faithful reproduction of emergent, system-level behavior. For each model, we track a summary statistic over time: the number of happy agents in Schelling, and the number of active (i.e., Alive and Pregnant) agents in the predator-prey ecosystem. Reusing the same ramification branches as in training would offer little new information, since different stochastic branches from the same state tend to produce very similar macroscopic trajectories. Instead, we generate a fresh ensemble of main-branch simulations (100 independent runs) beyond the training horizon. We then compute the symmetric mean absolute percentage error (sMAPE) between the mean ground-truth trajectory and the mean surrogate-predicted trajectory across this ensemble, providing a quantitative measure of the surrogate's ability to capture oscillations and steady-state behavior truly out-of-sample.

Experimental set-up. We consider three parameter combinations $\xi$ for the Schelling ABM, each producing distinct segregation outcomes, and four $\Psi$ combinations for the predator-prey ABM, reflecting different oscillatory patterns in the population dynamics. For each ABM and parameter setting, we simulate $T_{\text{train}} = 10$ main-branch steps with $R = 500$ stochastic branches per step, yielding the training ramification as in Figure 1. For macro-evaluation, we run 100 independent main-branch simulations to calculate sMAPE. For micro-evaluation, we generate an out-of-sample ramification dataset of $T = 25$ timesteps. We train both surrogate and ablation for 100 epochs using Adam with learning rate $10^{-5}$ for the diffusion model, Adam with learning rate $2 \cdot 10^{-5}$ for the GNN, a batch size equal to the number of agents in the system, and diffusion hyper-parameter $\tau_{\max} = 100$ (more information in Supplementary Section A).
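The sMAPE used for macro evaluation can be sketched in a few lines of NumPy (our helper; conventions such as the normalization factor and the 0/0 case vary across the literature, so this is one reasonable choice rather than the paper's exact definition):

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error between two trajectories.

    With this normalization the score lies in [0, 2]; timesteps where both
    series are zero contribute 0 by convention.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    num = np.abs(y_true - y_pred)
    den = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    terms = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return float(terms.mean())
```

Here `y_true` and `y_pred` would be the ensemble means of the summary statistic (e.g., the number of happy agents per timestep) over the 100 out-of-sample runs.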
4.2 Results

To build intuition, we first qualitatively compare the surrogate and its ablated variant on their ability to reproduce key emergent patterns of agent-based dynamics. We then consolidate these insights with a comprehensive quantitative evaluation using the macro- and micro-level metrics introduced in the previous section. We report a selection of results in this section; more can be found in Supplementary Section C.

Reproducing emergent segregation. Let us first consider the Schelling ABM, under the configurations $\xi_1 = 0.625$, $\xi_2 = 0.75$, $\xi_3 = 0.875$. Figure 2 illustrates how the ground-truth ABM (top row) progresses from a randomized initialization to structured, segregated communities for the first two configurations $\xi_1, \xi_2$, while it remains unsegregated for $\xi_3$. At the first two tolerance levels, in fact, the agents gradually self-organize into distinct clusters, with segregated communities clearly emerging around $t = 20$ (see Supplementary Section C). The middle row represents the evolution of the system according to our surrogate model: we initialize the system with the same starting
condition, and then we iterate, giving the current state $Z_t$ to our model and using one sample of the generated output as the next state $Z_{t+1}$. We observe that our surrogate exhibits a qualitatively similar pattern of cluster formation over time, distinct for each of the three configurations, while the ablation model (bottom row) fails to meaningfully relocate agents, largely maintaining a random configuration.

Figure 4: Errors obtained by the proposed approach (Surrogate) and by the naive baseline (Ablation) in four different tasks. In the first column, error is measured as the EMD between the true and predicted distribution of individual (micro-level) behavior, i.e., predicting the next state of each agent from the previous one. In the second column, error is measured as the difference (sMAPE) in system-level quantities, i.e., comparing the true number of agents in a given state with the one predicted by our model when trained on a fraction of the initial time steps (as in Figure 3). In the first row, we test three configurations of the Schelling model; in the second row, we compare four configurations of the Predator-Prey model.

Reproducing emergent oscillations in predator-prey dynamics. Next, we consider the Predator-Prey ecological model. Figure 3 overlays 100 trajectories of prey and predator populations starting from the same state at $T_{\text{train}} = 10$, comparing the stochastic trajectories from the ground-truth model with those obtained by our surrogate and by the ablation. For both configurations, the surrogate and the ablation model are trained only with the initial time steps (up to the dashed line in the plots).
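The autoregressive rollout just described can be sketched as follows, with `sample_next_state` standing in for a full reverse-diffusion pass of the trained GDN (a hypothetical callable, not the paper's API):

```python
import numpy as np

def rollout(Z0, sample_next_state, n_steps, rng):
    """Simulate the surrogate forward: feed the current configuration Z_t to
    the model and take one sample of its output as Z_{t+1}.

    Z0                : (n_agents, d) initial configuration
    sample_next_state : callable (Z_t, rng) -> one sampled Z_{t+1}
    """
    trajectory = [np.asarray(Z0)]
    for _ in range(n_steps):
        trajectory.append(sample_next_state(trajectory[-1], rng))
    return np.stack(trajectory)  # shape (n_steps + 1, n_agents, d)
```

Repeating this loop from the same initial condition with different random seeds yields the ensembles of stochastic trajectories compared in Figure 3.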
Under the parameter set $\Psi_1$, the ground-truth ABM (top-left plot) exhibits classical Lotka-Volterra oscillations: a rise in prey numbers drives a delayed increase in predators, which then triggers a prey decline and a subsequent predator decline. Under $\Psi_4$, instead, only predators show a rise and decay, while preys only decline (bottom-left plot). We observe that the surrogate (center column) accurately captures both the phase lag and amplitude of these oscillations, while the ablation (right) collapses to near-monotonic trends. We perform the same analysis for the alternative parameterizations $\Psi_2, \Psi_3$ (included in Supplementary Section C), which show different types of dynamics, as the populations of predators and/or preys may exhibit monotonic extinctions. In all cases, the surrogate faithfully reproduces monotonic declines or single-peak dynamics, while the ablation fails. We also observe (figures in Supplementary Section C) that the surrogate recreates the rich spatial patterns of predator-prey clusters, also seen in similar settings in evolutionary game theory [28].

Quantitative results. Now we present the results of a quantitative analysis, systematizing the previous comparisons. Here, each comparison with the ground truth is quantified using one of the metrics presented in the previous subsection, i.e., Earth Mover's Distance (EMD) for the micro-level comparisons and sMAPE for the macro-level ones. Figure 4
summarizes the results of our experiments: the left panel shows the microscopic evaluation of both our surrogate model and the ablated variant, while the right panel presents the macroscopic evaluation results.

For the Schelling model, we observe that, at the micro level, the surrogate's mean EMD is lower than the ablation's mean EMD in all cases. The differences between the surrogate and the ablation are less pronounced at the thresholds $\xi_1$ and $\xi_3$. At $\xi_1$ (few unhappy agents), behavior is almost entirely deterministic and agents rarely move, while at $\xi_3$ (almost all agents unhappy), behavior is uniformly random, so even a flat "always-move" or "never-move" rule yields near-optimal predictions in these two cases. In contrast, at the intermediate threshold $\xi_2$, where roughly half the agents are unhappy, the difference between the surrogate's and the ablation's EMD is more pronounced. A similar pattern is observed in the macroscopic evaluation. The surrogate's sMAPE remains below 0.2, whereas the ablation fails to distinguish happy from unhappy agents and suffers large macro-level errors. This gap confirms that only the full model, with explicit graph-based interaction modeling, can learn the conditional relocation rule that is critical in balanced regimes.

For the Predator-Prey model, regarding micro-level behavior, our surrogate achieves a low EMD from the ground truth on average, and it consistently outperforms the ablation model. These results confirm that our model is able to faithfully reproduce the complex dynamics of this ABM even at the individual agent level (thus explaining the macro-level results shown in Figure 3). The most successful case is $\Psi_2$, where our surrogate exhibits a near-zero difference from the ground truth. This is explained by the fact that in this configuration, most agents follow deterministic update rules (e.g., Dead → Dead), which are perfectly recovered by our model but not by the ablation.
However, even when isolating only the stochastic transitions (e.g., Alive → Dead, Alive → Pregnant), the surrogate's EMD remains below 0.09 across all parameter sets, reflecting its accurate capture of true randomness (see Supplementary Section C). By contrast, the ablation model's EMD for these same stochastic rules averages around 0.2, often reaching 0.7-0.8. At the macro level as well, the surrogate consistently outperforms the ablation, generally achieving low error. The best result is obtained with $\Psi_1$, the most complex dynamic, where the surrogate achieves an average sMAPE of approximately 0.08. This configuration produces two distinct population peaks, and the surrogate faithfully reproduces both their timing and amplitude (Figure 3). The worst result is obtained with $\Psi_2$, as this configuration is almost monotonic and dominated by long, near-zero tails that are noisy at very small scales, making them difficult for any model to reproduce.

5 Discussion

We introduced Graph Diffusion Networks, a differentiable surrogate for agent-based models that combines graph neural networks to model agent-level interactions with diffusion models to capture stochasticity. Our experiments on the Schelling segregation model and a Predator-Prey ecosystem show that this approach not only accurately reproduces individual-level transition distributions, but also faithfully captures emergent, system-level dynamics beyond the training horizon.

Limitations. Our approach is limited by our assumptions on the characteristics of the ABM to emulate. First,
the interaction graph is assumed to be fully known. Future work might remove this limitation by estimating such a graph directly from available data. However, the estimation of a latent interaction graph is a follow-up challenge, for which our GNN-based approach represents a necessary first step. Second, highly sophisticated ABMs may include features not addressed in our framework, such as all-to-all interactions, multiple rounds of decision-making, or sequential stochastic events within a single time step. Capturing these dynamics may require extending our architecture to incorporate sequential or hierarchical components. While our method may not yet fully generalize to such settings, our findings demonstrate that building surrogates capable of replicating individual-level behavior is both feasible and effective, laying the groundwork for broader applications.

Future work. Building on this foundation, the differentiability of our surrogate opens up a range of powerful applications. It enables the use of gradient-based methods for optimization tasks such as policy optimization [1]. It allows for efficient calibration of macro-level parameters by treating them as additional inputs to the neural network. Most importantly, our approach naturally allows for the estimation of micro-level (i.e., agent-level) variables, a challenge for ABMs that often requires the ad hoc development of handcrafted probabilistic models [25, 26]. In fact, our model already contains such parameters, expressed as agents' individual states ($Z^{(i)}_t$), something typically not available in ABM surrogates [13]. In doing so, our framework helps make ABMs more data-driven and empirically grounded, with promising applications in several scientific domains, such as economics [30], epidemiology [14], sustainability [21], urban science [4], and ecology [38].
Acknowledgments

The authors wish to thank Alberto Novati for his contribution to the early draft of the code for the original ABM of the predator-prey system. We also thank Daniele Grattarola for insightful early discussions that supported the initial development of this work.

References

[1] Akash Agrawal, Joel Dyer, Aldo Glielmo, and Michael J Wooldridge. Robust policy design in agent-based simulators using adversarial reinforcement learning. In The First MARW: Multi-Agent AI in the Real World Workshop at AAAI 2025, 2025.
[2] Namrata Anand and Tudor Achim. Protein structure and sequence generation with equivariant denoising diffusion probabilistic models. CoRR, abs/2205.15019, 2022.
[3] Gaurav Arya, Moritz Schauer, Frank Schäfer, and Christopher Rackauckas. Automatic differentiation of programs with discrete randomness. Advances in Neural Information Processing Systems, 35:10435-10447, 2022.
[4] Mark Birkin, Patrick Ballantyne, Seth Bullock, Alison Heppenstall, Heeseo Kwon, Nick Malleson, Jing Yao, and Anna Zanchetta. Digital twins and AI for healthy and sustainable cities. Computers, Environment and Urban Systems, 120:102305, 2025.
[5] Corneel Casert, Isaac Tamblyn, and Stephen Whitelam. Learning stochastic dynamics and predicting emergent behavior using transformers. Nature Communications, 15(1):1875, 2024.
[6] Claudio Castellano, Santo Fortunato, and Vittorio Loreto. Statistical physics of social dynamics. Reviews of Modern Physics, 81(2):591-646, 2009.
[7] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis & Machine Intelligence, 45(09):10850-10869, September 2023.
[8] André M De Roos, Edward McCauley, and William G Wilson. Mobility
versus density-limited predator-prey dynamics on different spatial scales. Proceedings of the Royal Society of London. Series B: Biological Sciences, 246(1316):117-122, 1991.
[9] Douglas D Donalson and Roger M Nisbet. Population dynamics and spatial scale: effects of system size on population persistence. Ecology, 80(8):2492-2507, 1999.
[10] Joel Dyer, Nicholas Bishop, Yorgos Felekis, Fabio Massimo Zennaro, Anisoara Calinescu, Theodoros Damoulas, and Michael Wooldridge. Interventionally consistent surrogates for complex simulation models. Advances in Neural Information Processing Systems, 37:21814-21841, 2024.
[11] Marian Farah, Paul Birrell, Stefano Conti, and Daniela De Angelis. Bayesian emulation and calibration of a dynamic epidemic model for A/H1N1 influenza. Journal of the American Statistical Association, 109(508):1398-1411, 2014.
[12] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1263-1272. PMLR, 2017.
[13] Aldo Glielmo, Marco Favorito, Debmallya Chanda, and Domenico Delli Gatti. Reinforcement learning for combining search methods in the calibration of economic ABMs. In Proceedings of the Fourth ACM International Conference on AI in Finance, pages 305-313, 2023.
[14] Nicolò Gozzi, Matteo Chinazzi, Jessica T Davis, Corrado Gioannini, Luca Rossi, Marco Ajelli, Nicola Perra, and Alessandro Vespignani. Epydemix: An open-source Python package for epidemic modeling with integrated approximate Bayesian calibration. medRxiv, pages 2025-05, 2025.
[15] Daniele Grattarola, Lorenzo Livi, and Cesare Alippi. Learning graph cellular automata. Advances in Neural Information Processing Systems, 34:20983-20994, 2021.
[16] Volker Grimm and Steven F Railsback. Individual-based Modeling and Ecology. Princeton University Press, 2013.
[17] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Red Hook, NY, USA, 2020. Curran Associates Inc.
[18] Emiel Hoogeboom, Víctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3D. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pages 8867-8887. PMLR, 2022.
[19] Marc C Kennedy and Anthony O'Hagan. Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(3):425-464, 2001.
[20] Akim Kotelnikov, Dmitry Baranchuk, Ivan Rubachev, and Artem Babenko. TabDDPM: modelling tabular data with diffusion models. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023.
[21] Francesco Lamperti, Giovanni Dosi, and Andrea Roventini. A complex system perspective on the economics of climate change, boundless risk, and rapid decarbonization. Technical report, LEM Working Paper Series, 2025.
[22] Francesco Lamperti, Andrea Roventini, and Amir Sani. Agent-based model calibration using machine learning surrogates. Journal of Economic Dynamics and Control, 90:366-389, 2018.
[23] Chengyi Liu, Wenqi Fan, Yunqing Liu, Jiatong Li, Hang Li, Hui Liu, Jiliang Tang, and Qing Li. Generative diffusion models on graphs: Methods and applications. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-23, pages 6702-6711, 2023.
[24] Thomas Lux. Estimation of agent-based models using sequential Monte Carlo methods. Journal of Economic Dynamics and Control
, 91:391-408, 2018.
[25] Corrado Monti, Gianmarco De Francisci Morales, and Francesco Bonchi. Learning opinion dynamics from social traces. In KDD, pages 764-773. ACM, 2020.
[26] Corrado Monti, Marco Pangallo, Gianmarco De Francisci Morales, and Francesco Bonchi. On learning agent-based models from data. Scientific Reports, 13(1):9268, 2023.
[27] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In Proceedings of the 38th International Conference on Machine Learning, volume 139, pages 8162-8171. PMLR, 2021.
[28] Martin A Nowak and Robert M May. Evolutionary games and spatial chaos. Nature, 359(6398):826-829, 1992.
[29] Yannick Oswald, Keiran Suchak, and Nick Malleson. Agent-based models of the United States wealth distribution with ensemble Kalman filter. Journal of Economic Behavior & Organization, 229:106820, 2025.
[30] Marco Pangallo and R Maria del Rio-Chanona. Data-driven economic agent-based models. In The Economy as an Evolving Complex System IV, page ?. SFI Press, Santa Fe, N.M., 2025.
[31] Anton Pichler, Marco Pangallo, R Maria del Rio-Chanona, François Lafond, and J Doyne Farmer. Forecasting the propagation of pandemic shocks with a dynamic input-output model. Journal of Economic Dynamics and Control, 144:104527, 2022.
[32] Donovan Platt. A comparison of economic agent-based model calibration methods. Journal of Economic Dynamics and Control, 113:103859, 2020.
[33] Sebastian Poledna, Michael Gregor Miess, Cars Hommes, and Katrin Rabitsch. Economic forecasting with an agent-based model. European Economic Review, 151:104306, 2023.
[34] Arnau Quera-Bofarull, Joel Dyer, Anisoara Calinescu, J Doyne Farmer, and Michael Wooldridge. BlackBIRDS: Black-box inference for differentiable simulators. Journal of Open Source Software, 8(89), 2023.
[35] Steven F Railsback and Volker Grimm. Agent-Based and Individual-Based Modeling: A Practical Introduction. Princeton University Press, 2019.
[36] Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas. The earth mover's distance as a metric for image retrieval. Technical report, Stanford, CA, USA, 1998.
[37] Thomas C Schelling. Dynamic models of segregation. Journal of Mathematical Sociology, 1(2):143-186, 1971.
[38] Michiel Stock, Olivier Pieters, Tom De Swaef, and Francis Wyffels. Plant science in the age of simulation intelligence. Frontiers in Plant Science, 14:1299208, 2024.
[39] Daniel Tang and Nick Malleson. Data assimilation with agent-based models using Markov chain sampling. arXiv preprint arXiv:2205.01616, 2022.
[40] Rūta Ubarevičienė, Maarten van Ham, and Tiit Tammaru. Fifty years after Schelling's models of segregation: Bibliometric analysis of the legacy of Schelling and the future directions of segregation research. Cities, 147:104838, 2024.
[41] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
[42] Uri Wilensky. NetLogo, 1999. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
[43] Uri Wilensky and William Rand. An Introduction to Agent-Based Modeling: Modeling Natural, Social, and Engineered Complex Systems with NetLogo. MIT Press, 2015.
[44] Zheyuan Zhan, Defang Chen, Jian-Ping Mei, Zhenghe Zhao, Jiawei Chen, Chun Chen, Siwei Lyu, and Can Wang. Conditional image synthesis with diffusion models: A survey. arXiv preprint arXiv:2409.19365, 2024.
Supplemental Material

A Neural models and training details

In this section, we provide a detailed overview of the core components of the Graph Diffusion Network (GDN) and its methodology. We begin by introducing the diffusion process (A.1), which defines the forward noising process that must be reversed in order to generate future agent states starting from a sample of Gaussian noise. Next, we detail the Graph Diffusion Network architecture (A.2) and its components. We then discuss the loss and optimization strategy (A.3), covering the training objectives and gradient flow between the diffusion model and graph components. Following this, we outline the generation algorithm (A.4), where the iterative denoising process generates future agent states. Finally, we provide a summary of the experimental resources and execution time (A.5), detailing computational requirements.

A.1 Diffusion process

Our diffusion model is designed to generate future agent states $Z^{(i)}_{t+1}$ by reversing a known Gaussian noising process (i.e., the forward process) through a set of latents $\tilde{Z}^{(i)}_{t+1}(\tau)$ indexed by $\tau \in \{\tau_{\max}, \ldots, 0\}$. The forward process is a fixed Markov chain that gradually adds Gaussian noise to the input $Z^{(i)}_{t+1}$ according to a predefined variance schedule $\beta_\tau$. Each latent is given by:

$$\tilde{Z}^{(i)}_{t+1}(\tau) = \sqrt{\bar{\alpha}_\tau}\, Z^{(i)}_{t+1} + \sqrt{1 - \bar{\alpha}_\tau}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I) \quad (2)$$

where $\alpha_\tau := 1 - \beta_\tau$ and $\bar{\alpha}_\tau := \prod_{s=1}^{\tau} \alpha_s$. For our model, we selected a cosine-shaped variance schedule:

$$\beta_\tau = \beta_{\text{start}} + \tfrac{1}{2}\,(\beta_{\text{end}} - \beta_{\text{start}})\Big(1 - \cos\big(\tfrac{\tau}{\tau_{\max}}\,\pi\big)\Big) \quad (3)$$

with $\beta_{\text{start}} = 10^{-4}$ and $\beta_{\text{end}} = 0.02$. This choice ensures that $\beta_\tau$ increases more gradually at the beginning and at the end of the forward process, retaining more of the original input information early on. We note that, in preliminary trials, it proved more stable in our setting with small input dimensions than the cosine variance schedule proposed by [27] in the context of image generation.
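Equations (2) and (3) can be sketched in a few lines of NumPy (function names are ours; this reproduces only the fixed forward process, not the learned reverse process):

```python
import numpy as np

def beta_schedule(tau_max, beta_start=1e-4, beta_end=0.02):
    """Cosine-shaped variance schedule of Equation (3), for tau = 1..tau_max."""
    tau = np.arange(1, tau_max + 1)
    return beta_start + 0.5 * (beta_end - beta_start) * (1.0 - np.cos(np.pi * tau / tau_max))

def forward_noise(Z_next, tau, alpha_bar, rng):
    """Sample the latent of Equation (2) for a clean next state Z_next."""
    a = alpha_bar[tau - 1]                 # alpha_bar is indexed from tau = 1
    eps = rng.normal(size=np.shape(Z_next))
    return np.sqrt(a) * Z_next + np.sqrt(1.0 - a) * eps, eps

beta = beta_schedule(100)
alpha_bar = np.cumprod(1.0 - beta)         # \bar{alpha}_tau = prod_{s=1}^{tau} alpha_s
```

The training loss of Algorithm 1 then compares the sampled `eps` with the network's prediction on the returned latent.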
A.2 Graph Diffusion Network architecture

The primary input of the conditional diffusion model inside the Graph Diffusion Network is the latent $\tilde{Z}^{(i)}_{t+1}(\tau)$, a noised version of $Z^{(i)}_{t+1}$ given by Equation (2). In general, not all variables contained in $Z^{(i)}_{t+1}$ are time-dependent, and some remain stationary through time (e.g., color in Schelling and kind in Predator-Prey; see Supplementary subsections B.1, B.2). We only include the time-dependent (or dynamical) features in the input of the diffusion model, as they are the ones that evolve over time and need to be predicted. The output of the diffusion model is the denoising step $\epsilon_\phi(\tilde{Z}^{(i)}_{t+1}(\tau), c^{(i)}_t)$ introduced in Section 3, which has the same size as the input. Thus, the neural network follows a symmetrical structure, with hidden layers of increasing width in the first half and decreasing width in the second. The condition vector $c^{(i)}_t$ is applied to the hidden layers of the neural network by applying an activation function, performing a linear operation to match the width of the layer, and summing element-wise with the hidden layer. To increase stability, hidden layers are first normalized, and there is a residual connection after conditioning has been applied. All details of the architecture of the conditional diffusion model are reported in Table 1.

The Message Passing GNN takes in input the entire
agent state $Z^{(i)}_t$ as node features. The messages correspond to the node features and are aggregated by an aggregation function such as sum or mean. The choice of the aggregation function depends on the ABM to be reproduced. In general, sum is a suitable choice, as the MLP $f_\omega$ will capture the behavior rules of the agents. However, for ABMs where the behavior of agents is influenced by the node degree, as in the case of Schelling, mean is a more appropriate choice. All details of the architecture of the Message Passing GNN are reported in Table 1.

In order to make the network more stable, all features are scaled. In particular, agent states $Z^{(i)}_t$ can contain both numerical and categorical features. Numerical features are scaled to the interval $[-1, 1]$; in our experiments, we scaled numerical features with a standard scaler. After generation, they are scaled back to their original domain and, for integer numerical features, a binning function is applied afterwards. Categorical features are one-hot encoded.
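The feature scaling just described can be sketched as follows (helper names are ours, illustrating the scale/round-back convention for integer features and one-hot encoding for categorical ones):

```python
import numpy as np

def standardize(x):
    """Standard-scale a numerical feature column; returns scaled values and stats."""
    mu, sd = float(x.mean()), float(x.std())
    return (x - mu) / sd, (mu, sd)

def unscale_to_int(x_scaled, stats):
    """Invert the scaling and bin back to integers (e.g. grid coordinates)."""
    mu, sd = stats
    return np.rint(x_scaled * sd + mu).astype(int)

def one_hot(labels, n_classes):
    """One-hot encode a categorical feature (e.g. the four life phases)."""
    return np.eye(n_classes)[np.asarray(labels)]
```

At generation time, `unscale_to_int` plays the role of the binning function: a generated position is mapped back to the original domain and rounded to the nearest valid cell index.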
Table 1: Architecture and training details of the Graph Diffusion Network

Conditional diffusion model
  Input dimension: dynamical_features_dim
  Hidden layers: [128, 256, 1024, 1024, 256, 128]
  Output dimension: dynamical_features_dim
  Activation function: LeakyReLU (slope = 0.1)
  MLP time embedding: Linear(256) → Act → Linear(256)
  MLP current state: Linear(256) → Act → Linear(256) → Act → Linear(256)
  MLP graph embedding: Linear(256) → Act → Linear(256) → Act → Linear(256)
  Condition block in hidden layers: LayerNorm → Linear(dim_out) → Sum(Lin(Act(condition))) → Linear(dim_out) → Residual connection
  Weights initialization: Xavier uniform
  Optimizer: Adam
  Learning rate: $10^{-5}$

Message Passing GNN
  Input dimension: 2 × agent_state_dim
  Hidden layers: [32, 64, 128]
  Output dimension: 256
  Aggregation function: sum or mean
  Activation function: LeakyReLU (slope = 0.1)
  Message passing: Message$(x_j) = x_j$
  Weights initialization: Kaiming uniform
  Optimizer: Adam
  Learning rate: 2 × learning_rate_diffusion

Other details
  $\tau_{\max}$: 100
  Batch size: number of agents in the system
  Number of epochs: 100

A.3 Loss and optimization

The learning objectives of the Graph Diffusion Network are the noise residuals $\epsilon_\phi(\tilde{Z}^{(i)}_{t+1}(\tau), c^{(i)}_t)$ of the denoising diffusion process in $\tau = \tau_{\max}, \ldots, 0$ used to generate $Z^{(i)}_{t+1}$, given $(Z^{(i)}_t, \{Z^{(j)}_t\}_{j \in \mathcal{N}^{(i)}_t})$. The generative process is conditioned by the condition vector $c^{(i)}_t$, which is learned by the network $\phi$ and is given by:

$$c^{(i)}_t = \mathrm{MLP}(Z^{(i)}_t) + \mathrm{MLP}(g^{(i)}_t) + \mathrm{MLP}(\tau_{emb}) \quad (4)$$

where $\tau_{emb}$ is the sinusoidal positional embedding of $\tau$ and $g^{(i)}_t$ is the embedding produced by the Message Passing GNN of parameters $\omega$. More details on the three MLPs that form $c^{(i)}_t$ are given in Table 1.
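As a concrete reading of the "Condition block in hidden layers" row of Table 1, here is a hedged numpy sketch. The shapes follow Table 1, the LeakyReLU slope is 0.1 as reported, and the weights are random placeholders, not the trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    """Normalize each row to zero mean and unit variance."""
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def leaky_relu(x, slope=0.1):
    return np.where(x > 0, x, slope * x)

def condition_block(h, c, W1, W2, Wc):
    """LayerNorm -> Linear -> sum with projected condition -> Linear -> residual."""
    x = layer_norm(h)
    x = x @ W1                      # Linear(dim_out)
    x = x + leaky_relu(c) @ Wc      # Sum(Lin(Act(condition)))
    x = x @ W2                      # Linear(dim_out)
    return x + h                    # residual connection back to the hidden layer

dim_out, dim_c = 128, 256           # hidden width and condition width (Table 1)
h = rng.normal(size=(4, dim_out))   # batch of hidden activations
c = rng.normal(size=(4, dim_c))     # condition vectors c_t^(i)
W1 = rng.normal(size=(dim_out, dim_out)) * 0.05
W2 = rng.normal(size=(dim_out, dim_out)) * 0.05
Wc = rng.normal(size=(dim_c, dim_out)) * 0.05

out = condition_block(h, c, W1, W2, Wc)
print(out.shape)
```

The projection `Wc` is what matches the 256-wide condition vector to the width of each hidden layer before the element-wise sum.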
The loss function for the denoising diffusion steps $\epsilon_\phi(\tilde{Z}^{(i)}_{t+1}(\tau), c^{(i)}_t)$ is calculated as the expected value over all agents in the system $i \in A$, all ABM timesteps $t$, all $\tau$, and $\epsilon \sim \mathcal{N}(0, I)$:

$$\mathcal{L}(\phi, \omega) = \mathbb{E}_{i,t,\tau,\epsilon}\left[ \|\epsilon - \epsilon_\phi(\tilde{Z}^{(i)}_{t+1}(\tau), c^{(i)}_t)\|^2 \right] \quad (5)$$

The parameters to optimize are the parameters $\phi$ of the diffusion model and the parameters $\omega$ of the GNN. At each training step, the loss in equation (5) is calculated over the batch made of all agents $i \in A$ and the gradients are backpropagated. First the optimizer for $\phi$ is applied, and then the optimizer for $\omega$. Thus, the GNN is trained by inheriting the gradients from the loss of the conditional diffusion model, through the learned condition representation $c^{(i)}_t$.

A.4 Generation

The generation of $Z^{(i)}_{t+1}$ starts from the last latent of the denoising diffusion process, which is a sample of Gaussian noise $\tilde{Z}^{(i)}_{t+1}(\tau_{\max}) \sim \mathcal{N}(0, I)$. The Message Passing GNN takes as input the current state $Z^{(i)}_t$ and the states of its neighbors $\{Z^{(j)}_t\}_{j \in \mathcal{N}^{(i)}_t}$ and forms the embedding $g^{(i)}_t$. Then, iteratively over $\tau = \tau_{\max}, \ldots, 1$, the conditional diffusion model takes as input the sinusoidal positional embedding $\tau_{emb}$, the current agent state $Z^{(i)}_t$, and the embedding vector $g^{(i)}_t$, and forms the condition vector $c^{(i)}_t$. Lastly, the previous latent $\tilde{Z}^{(i)}_{t+1}(\tau-1)$ is calculated given $\tilde{Z}^{(i)}_{t+1}(\tau)$ and the output of the Graph Diffusion Network $\epsilon_\phi(\tilde{Z}^{(i)}_{t+1}(\tau), c^{(i)}_t)$, as in lines 7-8 of Algorithm 1.

Algorithm 1 Generation
1: $Z^{(i)}_t, \{Z^{(j)}_t\}_{j \in \mathcal{N}^{(i)}_t} \leftarrow$ Data
2: $g^{(i)}_t = f_\omega\big(Z^{(i)}_t, \bigoplus_{j \in \mathcal{N}^{(i)}_t}(Z^{(i)}_t, Z^{(j)}_t)\big)$
3: $\tilde{Z}^{(i)}_{t+1}(\tau_{\max}) \sim \mathcal{N}(0, I)$
4: for $\tau = \tau_{\max}, \ldots, 1$ do
5:   $\tau_{emb} = \mathrm{SinusoidalPositionEmbedding}(\tau)$
6:   $c^{(i)}_t = \mathrm{MLP}(Z^{(i)}_t) + \mathrm{MLP}(g^{(i)}_t) + \mathrm{MLP}(\tau_{emb})$
7:   $z \sim \mathcal{N}(0, I)$ if $\tau > 1$, else $z = 0$
8:   $\tilde{Z}^{(i)}_{t+1}(\tau-1) = \frac{1}{\sqrt{\alpha_\tau}}\Big(\tilde{Z}^{(i)}_{t+1}(\tau) - \frac{1-\alpha_\tau}{\sqrt{1-\bar{\alpha}_\tau}}\,\epsilon_\phi\big(\tilde{Z}^{(i)}_{t+1}(\tau), c^{(i)}_t\big)\Big) + \sigma_\tau z$
9: end for
10: return $\tilde{Z}^{(i)}_{t+1}(0) \approx Z^{(i)}_{t+1}$

We set $\sigma_\tau = \sqrt{\frac{1-\bar{\alpha}_{\tau-1}}{1-\bar{\alpha}_\tau}\,\beta_\tau}$. This choice of $\sigma_\tau$ is optimal for deterministically set points [17], which is the case for some update rules in ABMs (e.g., happy agents in Schelling and dead agents in Predator-Prey). Alternatively, one can choose $\sigma_\tau = \sqrt{\beta_\tau}$, which is better suited to normally distributed points.

A.5 Experiment resources and execution time

All experiments were run on a cloud-based server with 15 vCores, 180 GB of RAM, and an NVIDIA A100 80GB PCIe GPU. Execution times depend on the size of the datasets and on GPU occupancy by other processes. In our experiments, where the training datasets consisted of R = 500 ramifications over T = 10 timesteps, with 2048 agents in the system for Predator-Prey and 1950 agents for Schelling, training over 100 epochs typically took between 7 and 12 hours.
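The sampling loop of Algorithm 1 can be sketched in numpy as follows. The noise schedule is an assumed standard linear DDPM schedule (the paper does not restate it here), `eps_phi` is a zero-valued placeholder for the trained conditional network, and the 0-based indexing into the schedule arrays is a convenience of the sketch, not the paper's exact indexing.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                                    # tau_max, as in Table 1
betas = np.linspace(1e-4, 0.02, T)         # assumed linear schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_phi(z, c, tau):
    """Placeholder for the trained conditional diffusion model."""
    return np.zeros_like(z)

def generate(c, dim=2):
    z = rng.standard_normal(dim)           # line 3: Z(tau_max) ~ N(0, I)
    for tau in range(T - 1, 0, -1):        # line 4: tau = tau_max, ..., 1
        eps = eps_phi(z, c, tau)
        mean = (z - (1 - alphas[tau]) / np.sqrt(1 - alpha_bars[tau]) * eps) \
               / np.sqrt(alphas[tau])      # line 8, deterministic part
        # sigma_tau as in the text: sqrt((1 - abar_{tau-1}) / (1 - abar_tau) * beta_tau)
        sigma = np.sqrt((1 - alpha_bars[tau - 1]) / (1 - alpha_bars[tau]) * betas[tau])
        noise = rng.standard_normal(dim) if tau > 1 else 0.0   # line 7
        z = mean + sigma * noise
    return z                               # line 10: approx. Z_{t+1}

sample = generate(c=None)
print(sample.shape)
```

With a trained `eps_phi`, the same loop would denoise the latent toward the dynamical features of the next agent state.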
Generating a simulation of 25 timesteps for the entire system of agents takes roughly 10 to 12 seconds for both Predator-Prey and Schelling, i.e., less than 0.5 seconds per timestep.

B ABM Case Studies

B.1 Schelling model

The Schelling model of segregation is a classic example used to showcase the emergence property of ABMs. Agents $i \in A$ are placed on a 2-dimensional grid $L \times L$. Their state is given by their color (a binary variable such as black and white) and their position on the grid:

$$Z^{(i)}_t = (C^{(i)}, x^{(i)}_t, y^{(i)}_t), \qquad C^{(i)} \in \{C_1, C_2\}, \quad x^{(i)}_t, y^{(i)}_t \in [0, L-1]$$

Each agent $i$ has a fixed color $C^{(i)}$, which remains constant over time, while its position on the grid may change. The set of agents that interact with agent $i$, denoted $j \in \mathcal{N}^{(i)}$, includes those in the eight adjacent cells (Moore neighborhood) of $(x^{(i)}_t, y^{(i)}_t)$. The ABM depends mostly on a parameter $\xi \in [0, 1]$ representing the intolerance of the agents. If the fraction of neighbors $j \in \mathcal{N}^{(i)}$ that share the same color as agent $i$ is greater than or equal to $\xi$, agent $i$ is considered happy and remains in its current position: $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) = (x^{(i)}_t, y^{(i)}_t)$. Conversely, if the fraction is less than $\xi$, agent $i$ is considered unhappy and moves to a randomly chosen empty cell on the grid. Thus, the update rule is deterministic when agents are happy and stochastic when they are unhappy.

Algorithm 2 presents the pseudo-code of the ABM. Of particular interest are lines 15-25, which describe how agents relocate by searching for an empty cell on the grid. This search process is not a simple draw from a probability distribution, as in the framework by Arya et al. [3], but a much more complex trial-and-error process. The pseudocode makes clear that agents are more likely to relocate to nearby positions than to distant ones, due to the way direction and distance are sampled. This spatial bias is quantitatively confirmed in Figure 13.

B.2 Predator-Prey model

The Predator-Prey ABM is a simulation model that captures the dynamics of interacting populations over time. We use a slightly adapted version of the model introduced in Ref. [39] (Algorithm 3 presents the pseudo-code of our ABM). Agents $i \in A$ occupy a two-dimensional grid of size $L \times L$. Each agent's state at time $t$ is given by its kind (either Prey or Predator), its life phase (Unborn, Alive, Pregnant, or Dead), and its position on the grid:

$$Z^{(i)}_t = (K^{(i)}, f^{(i)}_t, x^{(i)}_t, y^{(i)}_t), \qquad K^{(i)} \in \{\mathrm{Prey}, \mathrm{Predator}\}, \quad f^{(i)}_t \in \{\mathrm{Unborn}, \mathrm{Alive}, \mathrm{Pregnant}, \mathrm{Dead}\}, \quad x^{(i)}_t, y^{(i)}_t \in [0, L-1]$$

The agent's kind $K^{(i)}$ is fixed over time, while the phase and position can evolve. The set of interacting agents $j \in \mathcal{N}^{(i)}$ consists of those located in the four cardinally adjacent cells to $(x^{(i)}_t, y^{(i)}_t)$ (Von Neumann neighborhood). In addition, each Unborn agent $i$ is assigned a parent agent $j$ of the same kind, provided $j$ is either Alive or Pregnant. This parent-child relationship governs the birth mechanism. The dynamics are more complex than in Schelling's model of segregation. An Alive agent can transition to one of three states: remain Alive, become Pregnant, or become Dead. These transitions are stochastically determined. If the agent remains Alive, it moves at random to a cardinally adjacent cell.
If it becomes Pregnant, it remains in place. If it becomes Dead, it loses its position on the grid. A Dead agent remains dead and off-grid. A Pregnant agent returns to being Alive in the same cell. An Unborn agent becomes Alive only if its assigned parent is currently Pregnant; otherwise, it remains Unborn. This gives rise to a set of deterministic update rules:

Dead → Dead;  Pregnant → Alive;  Unborn → Alive if the parent is Pregnant, Unborn otherwise.

Algorithm 2 Schelling Model of Segregation
Require: agent set $A$, grid size $L$, tolerance $\xi$, max steps $T$, max distance $d_{\max}$, max trials $K$
1: Initialize: for each $i \in A$, sample $C^{(i)} \sim \mathrm{Uniform}\{C_1, C_2\}$ and $(x^{(i)}_0, y^{(i)}_0) \sim \mathrm{UniformGrid}(L)$, and set $Z^{(i)}_0 = (C^{(i)}, x^{(i)}_0, y^{(i)}_0)$
2: for $t = 0, \ldots, T-1$ do
3:   unhappy ← ∅   ▷ Reset unhappy list
4:   for all $i \in A$ do
5:     $\mathcal{N}^{(i)}_t \leftarrow \{j \in A : (x^{(j)}_t, y^{(j)}_t) \in \mathrm{Moore}(x^{(i)}_t, y^{(i)}_t)\}$   ▷ Moore neighborhood
6:     $s \leftarrow |\{j \in \mathcal{N}^{(i)}_t : C^{(j)} = C^{(i)}\}|$,  $n \leftarrow |\mathcal{N}^{(i)}_t|$   ▷ Same-color neighbors, all neighbors
7:     $r \leftarrow s/n$ if $n > 0$, else $r \leftarrow 0$   ▷ Similarity ratio
8:     if $r < \xi$ then   ▷ Agent $i$ is unhappy
9:       unhappy ← unhappy ∪ {$i$}
10:    end if
11:  end for
12:  if unhappy = ∅ then
13:    break   ▷ Convergence
14:  end if
15:  for all $i \in$ unhappy do
16:    $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) \leftarrow (x^{(i)}_t, y^{(i)}_t)$
17:    for $k = 1, \ldots, K$ do
18:      $\theta \sim \mathrm{Uniform}(0, 2\pi)$,  $d \sim \mathrm{Uniform}(0, d_{\max})$   ▷ Random direction, distance up to $d_{\max}$
19:      $\Delta x \leftarrow \lfloor d \cos\theta \rfloor$,  $\Delta y \leftarrow \lfloor d \sin\theta \rfloor$   ▷ Convert to grid movement
20:      $x^* \leftarrow (x^{(i)}_t + \Delta x) \bmod L$,  $y^* \leftarrow (y^{(i)}_t + \Delta y) \bmod L$   ▷ Wrap around border
21:      if $\nexists\, j \neq i : (x^{(j)}_t, y^{(j)}_t) = (x^*, y^*)$ then   ▷ Check if cell is empty
22:        $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) \leftarrow (x^*, y^*)$   ▷ Move to new location
23:        break   ▷ Stop searching once a valid position is found
24:      end if
25:    end for
26:  end for
27:  for all $i \in A \setminus$ unhappy do
28:    $Z^{(i)}_{t+1} \leftarrow Z^{(i)}_t$   ▷ Happy agents stay put
29:  end for
30: end for

And a set of stochastic update rules: an Alive agent may remain Alive (move), become Pregnant (reproduce), or become Dead (die). All transitions are governed by a matrix $\Psi$, which encodes deterministic rules as probabilities equal to 1 and defines stochastic transitions through probabilities that depend on spatial inter-species interactions. We report the values of the matrix $\Psi$ that define our four experimental settings in Tables 2-5. For instance, a Prey that interacts with a Predator is more likely to die than a Prey that does not, and a Predator that does not interact with a Prey is more likely to die than one that does, as it is more likely to starve. These spatial interactions are defined by proximity: an agent interacts with others located in its Von Neumann neighborhood (i.e., the 4-neighborhood).
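A single stochastic life-phase update governed by $\Psi$ can be sketched as follows: the row of $\Psi$ selected by the agent's kind, phase, and the presence of an opposite-kind neighbor defines a categorical distribution over transition events. The values below are the "Alive Prey + Pred" row of Table 2 ($\Psi_1$); the event names follow the table's column headers.

```python
import numpy as np

rng = np.random.default_rng(0)

EVENTS = ["Die", "Move", "Turn pregnant", "Turn alive", "Stay dead", "Stay unborn"]
# Row of Psi_1 for an Alive Prey with a Predator in its Von Neumann neighborhood.
psi_row = np.array([0.30, 0.45, 0.25, 0.00, 0.00, 0.00])

def next_event(psi_row):
    """Sample the next transition event from one row of Psi (Algorithm 3, line 15)."""
    return EVENTS[rng.choice(len(EVENTS), p=psi_row)]

samples = [next_event(psi_row) for _ in range(10_000)]
frac_die = samples.count("Die") / len(samples)
print(frac_die)   # close to the 0.30 death probability of a prey next to a predator
```

Deterministic rules (e.g., Dead → Stay dead) fit the same mechanism with a probability of 1 on a single column.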
Table 2: Transition matrix $\Psi_1$
                          Die    Move   Turn pregnant  Turn alive  Stay dead  Stay unborn
Alive Pred + Prey         0.15   0.45   0.40           0.00        0.00       0.00
Alive Pred + No prey      0.25   0.55   0.20           0.00        0.00       0.00
Alive Prey + Pred         0.30   0.45   0.25           0.00        0.00       0.00
Alive Prey + No pred      0.15   0.40   0.45           0.00        0.00       0.00
Pregnant + Unborn child   0.00   0.00   0.00           1.00        0.00       0.00
Dead                      0.00   0.00   0.00           0.00        1.00       0.00
Unborn + Npp*             0.00   0.00   0.00           0.00        0.00       1.00

Table 3: Transition matrix $\Psi_2$
                          Die    Move   Turn pregnant  Turn alive  Stay dead  Stay unborn
Alive Pred + Prey         0.35   0.45   0.20           0.00        0.00       0.00
Alive Pred + No prey      0.25   0.60   0.15           0.00        0.00       0.00
Alive Prey + Pred         0.45   0.50   0.05           0.00        0.00       0.00
Alive Prey + No pred      0.35   0.35   0.30           0.00        0.00       0.00
Pregnant + Unborn child   0.00   0.00   0.00           1.00        0.00       0.00
Dead                      0.00   0.00   0.00           0.00        1.00       0.00
Unborn + Npp*             0.00   0.00   0.00           0.00        0.00       1.00

Table 4: Transition matrix $\Psi_3$
                          Die    Move   Turn pregnant  Turn alive  Stay dead  Stay unborn
Alive Pred + Prey         0.15   0.30   0.55           0.00        0.00       0.00
Alive Pred + No prey      0.30   0.55   0.15           0.00        0.00       0.00
Alive Prey + Pred         0.70   0.20   0.10           0.00        0.00       0.00
Alive Prey + No pred      0.10   0.40   0.50           0.00        0.00       0.00
Pregnant + Unborn child   0.00   0.00   0.00           1.00        0.00       0.00
Dead                      0.00   0.00   0.00           0.00        1.00       0.00
Unborn + Npp*             0.00   0.00   0.00           0.00        0.00       1.00

Table 5: Transition matrix $\Psi_4$
                          Die    Move   Turn pregnant  Turn alive  Stay dead  Stay unborn
Alive Pred + Prey         0.15   0.35   0.50           0.00        0.00       0.00
Alive Pred + No prey      0.25   0.45   0.30           0.00        0.00       0.00
Alive Prey + Pred         0.45   0.40   0.15           0.00        0.00       0.00
Alive Prey + No pred      0.30   0.40   0.30           0.00        0.00       0.00
Pregnant + Unborn child   0.00   0.00   0.00           1.00        0.00       0.00
Dead                      0.00   0.00   0.00           0.00        1.00       0.00
Unborn + Npp*             0.00   0.00   0.00           0.00        0.00       1.00

*Not pregnant parent.

Algorithm 3 Predator-Prey model
Require: agent set $A = \{1, 2, \ldots, n\}$, grid size $L$, transition matrix $\Psi$, max steps $T$
1: for all $i \in A$ do
2:   $K^{(i)} \sim \mathrm{Uniform}(\{\mathrm{Prey}, \mathrm{Predator}\})$   ▷ Assign kind
3:   $f^{(i)}_0 \sim \mathrm{Uniform}(\{\mathrm{Alive}, \mathrm{Unborn}\})$   ▷ Initial phase
4:   if $f^{(i)}_0 = \mathrm{Unborn}$ then
5:     parent$(i) \sim \mathrm{Uniform}(\{j \in A : j \neq i \wedge K^{(j)} = K^{(i)}\})$   ▷ Assign parent
6:   end if
7:   $Z^{(i)}_0 = (K^{(i)}, f^{(i)}_0, x^{(i)}_0, y^{(i)}_0)$   ▷ Agent state
8: end for
9: for $t = 0, \ldots, T-1$ do
10:  for all $i \in A$ do
11:    switch $f^{(i)}_t$
12:    case Alive   ▷ Alive agent dynamics
13:      $\mathcal{N}^{(i)}_t \leftarrow \mathrm{VonNeumann}(x^{(i)}_t, y^{(i)}_t)$   ▷ Von Neumann neighborhood
14:      neighbor $\leftarrow \exists\, j \in A : (x^{(j)}_t, y^{(j)}_t) \in \mathcal{N}^{(i)}_t \wedge K^{(j)} \neq K^{(i)}$   ▷ Find opposite-kind neighbors
15:      $f^{(i)}_{t+1} \sim \mathrm{Categorical}\big(\Psi(K^{(i)}, f^{(i)}_t, \mathrm{neighbor})\big)$   ▷ Random life-phase update
16:      if $f^{(i)}_{t+1} = \mathrm{Alive}$ then
17:        $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) \sim \mathrm{Uniform}(\mathcal{N}^{(i)}_t)$   ▷ Move randomly
18:      else if $f^{(i)}_{t+1} = \mathrm{Dead}$ then
19:        $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) \leftarrow (\emptyset, \emptyset)$   ▷ Agent dies
20:      else if $f^{(i)}_{t+1} = \mathrm{Pregnant}$ then
21:        $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) \leftarrow (x^{(i)}_t, y^{(i)}_t)$   ▷ Stay in place
22:      end if
23:    case Pregnant   ▷ Birth
24:      $f^{(i)}_{t+1} \leftarrow \mathrm{Alive}$
25:      $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) \leftarrow (x^{(i)}_t, y^{(i)}_t)$   ▷ Stay in place
26:    case Dead
27:      $f^{(i)}_{t+1} \leftarrow \mathrm{Dead}$   ▷ Remain dead
28:      $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) \leftarrow (\emptyset, \emptyset)$
29:    case Unborn
30:      $j \leftarrow$ parent$(i)$   ▷ Get parent
31:      if $f^{(j)}_t = \mathrm{Pregnant}$ then
32:        $f^{(i)}_{t+1} \leftarrow \mathrm{Alive}$   ▷ Born if parent is pregnant
33:        $\mathcal{N}^{(j)}_t \leftarrow \mathrm{VonNeumann}(x^{(j)}_t, y^{(j)}_t)$
34:        $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) \sim \mathrm{Uniform}(\mathcal{N}^{(j)}_t)$   ▷ Place near parent
35:      else
36:        $f^{(i)}_{t+1} \leftarrow \mathrm{Unborn}$   ▷ Remain unborn
37:        $(x^{(i)}_{t+1}, y^{(i)}_{t+1}) \leftarrow (\emptyset, \emptyset)$
38:      end if
39:  end for
40:  if $\forall\, a \in A,\ f^{(a)}_t \in \{\mathrm{Dead}, \mathrm{Unborn}\}$ then
41:    break   ▷ Simulation ends
42:  end if
43: end for

C Further experimental details and results

In this section, we provide additional details on the experiments and present further results. In the subsection on experimental design (C.1), we describe the micro and macro metrics used, including the number of points over which these metrics are computed. We then present additional qualitative results on reproducing emergent segregation in the Schelling model (C.2) and emergent oscillations in predator-prey dynamics (C.3). Finally, we provide further quantitative results for both models (C.4).

C.1 Experimental design

Experiment details. In our experiments, we considered three combinations of the parameter $\xi$ for the Schelling ABM and four combinations of the matrix $\Psi$ for the Predator-Prey ABM. For each parameter setting, we generated 8 training datasets obtained with different initial seeds and trained a surrogate model and an ablated model for each. In
total, we trained 112 models: 64 (8×4×2) for Predator-Prey and 48 (8×3×2) for Schelling. All our evaluations are done across these 8 models per parameter configuration. We fixed some of the ABM parameters across experiments; these are reported in Table 6. For Predator-Prey, density refers to the density of agents initialized as Alive, whereas the number of agents refers to the total number of agents in the system (Alive, Dead, Pregnant, and Unborn). Color Ratio indicates the ratio between the number of black and white agents; Kind Ratio indicates the ratio between the number of predators and the number of preys.

Table 6: ABM parameters in our experiments
Component          Schelling   Predator-Prey
Grid size          51×51       32×32
Density            0.75        0.3
Agents             1950        2048
Color/Kind Ratio   1:1         1:1

Micro evaluation. We evaluate how well the surrogate captures the individual behavior of agents on a future ramification dataset of T = 25 timesteps. We generate this out-of-training dataset by giving as initial condition the last system configuration $Z_{T-1}[r=0]$ from the training ramification dataset. Thus, for each agent $i \in A$ we have 24 initial conditions $(Z^{(i)}_t, \{Z^{(j)}_t\}_{j \in \mathcal{N}^{(i)}_t})$ and 500 outcomes $Z^{(i)}_{t+1}$ with which to build 24 ground-truth probability distributions, as in equation (1). Then, we use our surrogate model to generate 500 outcomes $Z^{(i)}_{t+1}$ given the condition $(Z^{(i)}_t, \{Z^{(j)}_t\}_{j \in \mathcal{N}^{(i)}_t})$, producing 24 probability distributions of the form (1) for each agent $i \in A$, which we compare to the ground-truth distributions. For Schelling, we measure the EMD on the marginals of the coordinates x and y. For Predator-Prey, we measure the EMD on the distributions of the phases $f^{(i)}_t$, fixing the distance between different phases to 1, since they represent a categorical variable. For each of the 8 experiments, we calculate the mean EMD over all agents and all timesteps.
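For two equal-size, equal-weight samples on the real line, the EMD reduces to the mean absolute difference of the sorted samples, which is how a comparison of 500 ground-truth against 500 surrogate outcomes can be computed. The sketch below uses small illustrative samples, not values from the paper's experiments.

```python
import numpy as np

def emd_1d(a, b):
    """EMD between two equal-size, equal-weight samples on the real line.

    Sorting both samples pairs up order statistics; the EMD is then the
    mean absolute difference between matched pairs.
    """
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    assert a.shape == b.shape, "this closed form needs equal sample sizes"
    return float(np.mean(np.abs(a - b)))

gt = [0.0, 1.0, 2.0, 3.0]      # e.g. ground-truth x-coordinate outcomes
surr = [0.0, 1.0, 2.0, 5.0]    # surrogate-generated outcomes
print(emd_1d(gt, surr))        # 0.5: only the last outcome differs, by 2, over 4 samples
```

For the categorical phase distributions, where all pairwise distances are fixed to 1, the EMD reduces instead to the total variation distance between the empirical phase frequencies.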
For Schelling, we evaluate the EMD on 1950 distributions (one per agent) for the x coordinate and 1950 distributions for the y coordinate, over 24 timesteps, yielding 93,600 EMD entries. For Predator-Prey, we evaluate the EMD on 2048 distributions (one per agent) over 24 timesteps, yielding 49,152 EMD entries. The box plots in Figure 4 have as entries the 8 mean EMD values for the surrogate and the 8 mean EMD values for the ablated model.

Macro evaluation. We evaluate how well the surrogate model reproduces system-level behavior by tracking summary statistics over time. We generate 100 independent simulations of 25 timesteps beyond the training horizon for each model, and compute the symmetric mean absolute percentage error (sMAPE) between the mean ground-truth trajectory and the mean surrogate-generated trajectory across these 100 independent simulations. In the case of Schelling, where we track only the number of happy agents, the sMAPE definition is straightforward. Let $A_t$ and $F_t$ be the mean number of happy agents across the 100 independent simulations from the ground truth and the surrogate model, respectively. Then, the sMAPE is computed as:

$$\mathrm{sMAPE}_{\mathrm{schelling}} = \frac{2}{T} \sum_{t=1}^{T} \frac{|A_t - F_t|}{|A_t| + |F_t|} \quad (6)$$

For Predator-Prey, we track the number of predators and preys on the grid, which follow two distinct trajectories. Thus, for each kind we apply Formula (6) separately and
get $\mathrm{sMAPE}_{\mathrm{preys}}$ and $\mathrm{sMAPE}_{\mathrm{predators}}$. Then, to work with a single value, we calculate their mean:

$$\mathrm{sMAPE}_{\mathrm{predprey}} = \frac{1}{2}\left(\mathrm{sMAPE}_{\mathrm{preys}} + \mathrm{sMAPE}_{\mathrm{predators}}\right) \quad (7)$$

For each of the 8 experiments, we calculate the sMAPE over 25 timesteps and then compute its mean. The box plots in Figure 4 have as entries the 8 mean sMAPE values for the surrogate and the 8 mean sMAPE values for the ablated model.

C.2 Reproducing emergent segregation

Figure 2 in the main text illustrated how, for three selected time steps (t = 0, t = 15, and t = 30) and three different values of the intolerance threshold $\xi$, our surrogate model successfully reproduced the qualitative dynamics of the original ABM, while the ablated model failed to capture them. Figures 5, 6, and 7 provide a more detailed view of the system's evolution for the values $\xi_1$, $\xi_2$, and $\xi_3$, respectively. In addition to the previously shown snapshots, these supplementary figures include intermediate time steps (t = 5, t = 10, t = 20, and t = 25), offering a more complete picture of the dynamics.

Figure 5: Evolution of the position of black and red agents in the Schelling model, for tolerance threshold $\xi = \xi_1 = 0.625$. We compare the ground truth (top row) with our surrogate (middle row) and the ablation (bottom row).

Figure 5 illustrates that the original ABM rapidly evolves toward a segregated configuration, visible as early as t = 5. However, the resulting clusters remain relatively small, indicating that a low intolerance threshold allows a moderate level of social mixing. This motivates the label "Low Segregation" for the $\xi_1$ parameter setting in Figure 4. The surrogate model accurately reproduces these qualitative patterns, albeit with slightly larger clusters. In contrast, the ablation model fails to capture the emergent spatial structure, remaining close to the initial random configuration across time steps.
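Equations (6) and (7) can be checked with a small numpy sketch; the trajectory values below are illustrative, not the paper's measurements, and timesteps where both trajectories are zero would need guarding against a zero denominator.

```python
import numpy as np

def smape(A, F):
    """sMAPE between two trajectories, as in equation (6)."""
    A, F = np.asarray(A, float), np.asarray(F, float)
    return 2.0 / len(A) * np.sum(np.abs(A - F) / (np.abs(A) + np.abs(F)))

# Schelling: mean number of happy agents, ground truth vs. surrogate (toy values).
A = [100, 90, 80]
F = [100, 80, 80]
print(round(smape(A, F), 4))              # 2/3 * (0 + 10/170 + 0) = 0.0392

# Predator-Prey: apply (6) per kind, then average as in equation (7).
s_preys = smape([50, 40], [50, 40])       # perfect match -> 0.0
s_predators = smape([30, 20], [30, 10])   # 2/2 * (0 + 10/30) = 0.3333
print(round(0.5 * (s_preys + s_predators), 4))
```

Note that sMAPE is bounded in $[0, 2]$ per timestep, which makes the 8 per-experiment means directly comparable across parameter settings.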
Figure 6 shows that, for $\xi = \xi_2$, the original ABM also converges toward a segregated state, but convergence occurs more slowly than in the $\xi_1$ case. The resulting clusters are larger and more distinct, reflecting a "High Segregation" scenario. The surrogate model again successfully replicates these dynamics, while the ablation model continues to exhibit no structured behavior.

Figure 6: Evolution of the position of black and red agents in the Schelling model, for tolerance threshold $\xi = \xi_2 = 0.75$. We compare the ground truth (top row) with our surrogate (middle row) and the ablation (bottom row).

Figure 7: Evolution of the position of black and red agents in the Schelling model, for tolerance threshold $\xi = \xi_3 = 0.875$. We compare the ground truth (top row) with our surrogate (middle row) and the ablation (bottom row).

Finally, Figure 7 demonstrates that, for $\xi = \xi_3$, the original ABM does not converge to a stable configuration. Instead, the high intolerance threshold causes agents to continuously relocate, preventing the emergence of segregated clusters. In this degenerate case, both the surrogate and the ablation models correctly reproduce the persistent disorder of the original dynamics.

C.3 Reproducing emergent oscillations in predator-prey dynamics

Figure 3 in the main text compares population trajectories from
the ground-truth Predator-Prey ABM, our surrogate model, and the ablated model. For both parameter sets shown ($\Psi_1$ and $\Psi_4$), the surrogate accurately reproduces the stochastic dynamics beyond the training window, while the ablation fails to capture the key oscillation patterns.

Figure 8 extends the previous analysis to parameter sets $\Psi_2$ and $\Psi_3$, which produce, respectively, a monotonic decline in prey and predator populations, and oscillatory dynamics for preys but not for predators. The case of $\Psi_2$ is particularly illustrative: despite its apparent simplicity, the ablated model fails to reproduce the monotonic trends, showing an unrealistic spike in both populations immediately after the training window. In contrast, the surrogate model accurately captures the expected decay. Under $\Psi_3$, the surrogate successfully replicates the oscillations in the prey population and the stable predator trend, while the ablation once again outputs generic dynamics that are largely insensitive to the underlying regime. This highlights the ablation model's inability to distinguish between qualitatively different behaviors.

Figure 8: Forecasting macro-level summary statistics (here, the number of alive preys and predators over time), starting from the last condition seen in training, for 100 independent simulation runs, under configuration $\Psi_2$ (monotonic dynamics for both predators and preys, top) and $\Psi_3$ (oscillations only for preys, bottom). Left: original ABM simulations. Center: surrogate. Right: ablation. The dashed vertical line indicates the end of the training phase for the surrogate and ablation models.
Beyond aggregate population counts, it is instructive to examine the spatial distribution of predators and preys over time. Prior studies [8, 9] have shown that similar predator-prey models give rise to rich spatial dynamics, characterized by the spontaneous emergence of structured patterns (see, e.g., Figure 1.3 in [16]). Starting from initially random configurations, the interactions between agents give rise to both short- and long-range spatial correlations, with predators and preys organizing into dynamic clusters and propagating waves. These patterns are reminiscent of those observed in spatially extended reaction-diffusion systems and bear a strong resemblance to spatial chaos phenomena in evolutionary game theory [28]. We observe similar spatial patterns in our predator-prey ABM. Figures 9, 10, 11, and 12 show the evolution of the positions of predators and preys on the grid.

Figure 9: Evolution of the position of preys (black) and predators (red) in the predator-prey ABM, for parameters $\Psi = \Psi_1$. We compare the ground truth (top row) with our surrogate (middle row) and the ablation (bottom row).

Figure 9 illustrates the spatial dynamics under parameter set $\Psi = \Psi_1$, which is characterized by oscillations in predator and prey populations. The ground-truth model displays a rise in population densities mid-simulation, followed by a near-extinction phase toward the end, in line with the temporal trends shown in Figure 3. Notably, the ground truth also exhibits distinct spatial patterns, with predators and preys forming dynamic
clusters. The surrogate model successfully reproduces both the population dynamics and the emergent spatial structures, whereas the ablated model fails to capture any meaningful spatial organization.

Figure 10: Evolution of the position of preys (black) and predators (red) in the predator-prey ABM, for parameters $\Psi = \Psi_2$. We compare the ground truth (top row) with our surrogate (middle row) and the ablation (bottom row).

Figure 11: Evolution of the position of preys (black) and predators (red) in the predator-prey ABM, for parameters $\Psi = \Psi_3$. We compare the ground truth (top row) with our surrogate (middle row) and the ablation (bottom row).

Figure 12: Evolution of the position of preys (black) and predators (red) in the predator-prey ABM, for parameters $\Psi = \Psi_4$. We compare the ground truth (top row) with our surrogate (middle row) and the ablation (bottom row).

Similar results are observed in Figures 10, 11, and 12, which capture different dynamic regimes for predators and preys. In all cases, the surrogate model accurately reproduces both spatial and temporal patterns, while the ablated model consistently fails to capture the underlying dynamics or spatial structure.

C.4 Quantitative results

In this section, we present additional details on the quantitative evaluation discussed in Section 4.2 (page 8). In particular, we further detail the micro-level evaluation: how well our surrogate model reproduces the distribution of possible states $Z^{(i)}_t$ of an agent at time $t$ given its past state $Z^{(i)}_{t-1}$. To illustrate how this comparison works, Figure 13 (left side) shows the distribution, for a single agent, of the position $x^{(i)}_t$, one of the components of the state $Z^{(i)}_t$, for each of the two possible past conditions in the Schelling model, happy and unhappy. We see
that when the agent is happy, its position remains fixed, while when the agent is unhappy, it moves randomly with a distribution peaked around the starting point (implicitly defined by the ABM; see Algorithm 2). Our surrogate model aims to reproduce this distribution without knowing the original ABM and without access to the latent variable happy/unhappy, observing only the sequence of states and the graph of interactions. The distributions obtained by our surrogate under the same starting conditions are shown on the right side of the figure. To evaluate the quality of this reconstruction, we quantify it as the earth mover's distance (EMD) between the distributions, for each agent at each timestep, and then aggregate these measures. The result of this comparison is shown in Figure 4 of the main text. Here, we present additional experiments showing how these results change across different conditions.

Schelling model. Figure 14 shows the distribution of the EMD scores obtained by our surrogate model and by the ablation model for the stochastic rules of the Schelling model only. In this ABM, the stochasticity lies in the random movement of the unhappy agents (Algorithm 2, lines 16-25). We see that our surrogate is able to capture these distributions well (EMD averages below
5, considering that the scale is given by the length of the grid, L = 50) and better than the ablation model.

Predator-Prey model. Figure 15 instead shows the distribution of the EMD scores for the stochastic rules of the Predator-Prey model only. Here, the stochasticity lies in the Alive phase, where the agent might transition to another phase depending on its interactions with its neighbors. We see that in this model, too, our surrogate captures the stochasticity well, with EMD averaging below 0.1 for each of the four configurations $\Psi_{1-4}$. Here the scale is in $[0, 1]$, since this error is measured on the life-phase binary vector. We further investigate these results in Figure 16 by dividing them by agent state in each configuration. Here the stochastic transitions happen only in the first column (Alive). This figure confirms that the EMD values are quite low, and lower than those obtained by the ablation model. Moreover, we observe that the deterministic transitions are also recovered precisely (the error is always below $10^{-3}$) by our surrogate model.

Figure 13: Example distributions of the position of a single agent in the Schelling model, in two different conditions: (a) happy agent; (b) unhappy agent. Histograms on the left represent the original ground-truth ABM, those on the right the ones obtained by our surrogate model. The dashed red vertical line indicates the initial position of the agent. Note that the coordinates have been rescaled to [-25, 25], compared to the [0, 50] interval used for ease of exposition in Algorithm 2.
Figure 14: Distribution of errors (EMD) obtained by our surrogate model (top row) and by the ablation model (bottom row) for the stochastic rules of the Schelling ABM only, for each considered configuration of the parameter $\xi$ of the ABM. Different colors indicate independent experiments that differ only by the random seed used to generate the ground-truth data.

Figure 15: Distribution of errors (EMD) obtained by our surrogate model (top row) and by the ablation model (bottom row) for the stochastic rules of the Predator-Prey ABM only, for each considered configuration of the parameter matrix $\Psi$ of the ABM. Different colors indicate independent experiments that differ only by the random seed used to generate the ground-truth data.
Policy Induction: Predicting Startup Success via Explainable Memory-Augmented In-Context Learning

Xianling Mu (1), Joseph Ternasky (2), Fuat Alican (2), Yigit Ihlamur (2)
(1) University of Oxford, Oxford, United Kingdom
(2) Vela Research, San Francisco, United States
Correspondence to: Xianling Mu <xianling.mu@chch.ox.ac.uk>, Yigit Ihlamur <yigit@vela.partners>

Abstract

Early-stage startup investment is a high-risk endeavor characterized by scarce data and uncertain outcomes. Traditional machine learning approaches often require large, labeled datasets and extensive fine-tuning, yet remain opaque and difficult for domain experts to interpret or improve. In this paper, we propose a transparent and data-efficient investment decision framework powered by memory-augmented large language models (LLMs) using in-context learning (ICL). Central to our method is a natural-language policy embedded directly into the LLM prompt, enabling the model to apply explicit reasoning patterns and allowing human experts to easily interpret, audit, and iteratively refine the logic. We introduce a lightweight training process that combines few-shot learning with an in-context learning loop, enabling the LLM to update its decision policy iteratively based on structured feedback. With only minimal supervision and no gradient-based optimization, our system predicts startup success far more accurately than existing benchmarks. It is over 20× more precise than random chance, which succeeds 1.9% of the time, and 7.1× more precise than the typical 5.6% success rate of top-tier venture capital (VC) firms.

1. Introduction

ICL has emerged as a powerful paradigm in natural language processing, offering several key advantages: unsupervised adaptability, minimal data requirements, and strong reasoning capabilities (Dong et al., 2023). These features make ICL well suited for domains such as early-stage VC, where
high-stakes decisions must often be made with sparse, noisy, or incomplete data.

Copyright 2025 by the author(s).

In this paper, we introduce a novel investment decision-making framework that integrates memory-augmented LLMs with ICL. At the core of our system is a natural-language “policy”—a structured set of heuristics expressed in plain text—which is embedded directly into the LLM prompt. These policies guide the model’s reasoning during prediction while remaining fully interpretable and editable by human experts.

We begin by prompting the LLM with a small set of labeled examples (successful and failed startups) to generate an initial decision policy. This policy is then iteratively refined through a lightweight training loop: new examples are incorporated as additional context, prompting the model to revise and improve the policy. At each step, candidate policies are scored using a precision-based evaluation metric, and the highest-performing version is retained. To further enhance policy quality, we incorporate both automated reflection—LLM-generated explanations of prediction outcomes—and optional expert intervention. This iterative refinement process continues until a stable, high-performing policy emerges that generalizes well to unseen data.

Our primary contributions are as follows:

• Transparency and Interpretability: All model logic is encoded in plain-text policies that are human-readable and editable. This allows VC experts to understand, audit, and improve the model’s decision-making process — a critical requirement in high-stakes financial contexts.

• Efficiency and Cost-Effectiveness: Our method requires minimal fine-tuning, short
https://arxiv.org/abs/2505.21427v1
training cycles, and very low compute cost. Using only the GPT-4o mini API and a few dollars’ worth of compute, we are able to produce policies that outperform random baselines by 3–4× in precision, even without any further optimization.

arXiv:2505.21427v1 [cs.AI] 27 May 2025

• Generalizability and Transferability: Because our decision policies are expressed in natural language, they can be seamlessly applied across tasks without requiring model retraining or code changes. This makes the approach highly transferable to other domains where structured reasoning is essential, such as grant evaluation, academic hiring, or legal case review. The policy format allows domain experts to adapt, inspect, and reuse decision logic with minimal effort, enabling broad applicability beyond the startup investment context.

The remainder of this paper details our related work, methodology, dataset, experiments, and findings, demonstrating that our approach yields practical, accurate, and interpretable investment predictions.

2. Related Work

ICL. ICL has emerged as a core capability of LLMs, allowing models to generalize to new tasks by conditioning on examples without requiring parameter updates. Recent studies have shown that ICL can support algorithmic reasoning (Zhou et al., 2022; Wang et al., 2023) and instruction following (Wei et al., 2021). Our work builds on this growing body of research by exploring how ICL can support iterative refinement through natural language policies. Rather than introducing new model architectures, we aim to make this process interpretable and accessible by using plain-text reasoning strategies that evolve over time.

LLMs for Decision Support. LLMs are increasingly being applied to structured decision-making tasks such as fairness-aware hiring evaluations (Gilardi et al., 2023).
Many of these applications leverage the language modeling strength of LLMs but may rely on static prompts or produce decisions that are difficult to audit. In this context, we contribute a simple framework for integrating explicit decision heuristics in natural language, which may help improve transparency and controllability in domains such as early-stage venture evaluation.

Predicting Startup Success. Previous research on startup success prediction has used structured financial, operational, and social network data (Zhao et al., 2024), often via traditional machine learning classifiers. While these methods offer useful baselines, they typically depend on predefined features and fixed decision logic. More recent efforts have explored the adaptation of LLMs to the VC domain (Xiong et al., 2024). Our approach attempts to complement these methods by exploring how LLMs can reason from structured founder profiles using explicitly defined policies that are easy to inspect and modify.

Memory-Augmented Models. Our approach is loosely inspired by memory-augmented neural architectures (Graves et al., 2016), which enhance reasoning by maintaining dynamic external state. Rather than altering internal representations, we simulate a form of symbolic memory by updating natural language policies across training iterations. This design allows LLMs to refine their behavior using plain-text cues alone, an approach that emphasizes interpretability over architectural complexity.

3. Dataset

3.1. Dataset Overview

All data used in this study originate from a dataset we
refer to as founder cleaned data, which was constructed by converting unstructured information from LinkedIn profiles and Crunchbase entries into structured features using LLM-powered extraction techniques.

This dataset focuses on US-based companies founded in or after 2010 and contains information on 1,022 successful and 9,902 failed companies. A company is labeled as successful if it has completed an initial public offering (IPO) with a valuation over $500M, has been acquired for more than $500M, or has raised more than $500M and is still operating. A company is labeled as failed if it was founded between 2010 and 2020 and raised between $100K and $4M. It must still be operating, but show no reasonable signs of a successful outcome.

Each entry includes the primary structured features used for analysis, notably: founder characteristics such as clean cbprofile, clean linkedin profile, company description, and idea; and outcome labels such as funds range and success.

To mitigate data contamination in LLMs, as discussed in Palavalli et al. (2024), we excluded fields that contain potentially identifiable information, such as company and founder names. Therefore, we used only clean cbprofile, clean linkedin profile, and success for model training and evaluation, ensuring that the model could not “cheat” by memorizing known success cases.

3.2. Training Data

We initially allocated 200 successful and 200 unsuccessful founders as a candidate training set. However, in most experiments, we did not use the full set. The best-performing policy was trained using only 120 successful and 120 unsuccessful examples.

In practice, we observed that as few as 60 successful and 60 unsuccessful examples were sufficient to generate reasonably strong policies. This suggests that our method is data-efficient and benefits significantly from the ICL capabilities of LLMs.

3.3.
Testing Data

All remaining data were reserved for validation and testing. We employed two types of test set configurations to evaluate policy performance. The first consisted of 100 successful and 1000 unsuccessful cases, used for preliminary assessment and comparison across different policies. The second configuration included 40 successful and 2000 unsuccessful cases, designed to simulate a more realistic success rate comparable to the observed proportion of successful founders.

4. Methodology

Our core methodology focuses on generating and optimizing a policy to assist an LLM in predicting the success of startup founders. The prediction task itself is entirely delegated to the LLM, with the policy embedded in the prompt to guide its reasoning. A typical inference prompt follows this structure:

    You are an expert in venture capital, specializing in evaluating startup founders. Your task is to distinguish successful founders from unsuccessful ones.
    Here is a policy to assist you: {policy}
    Given the founder’s profile:
    Founder’s LinkedIn Profile: {row[’clean linkedin profile’]}
    Crunchbase information: {row[’clean cbprofile’]}
    Based on this information, determine if the founder will succeed. Answer using only one word: True or False.

The complete training pipeline, as illustrated in the figure, comprises three major components:

• Initial Policy Generation
• In-Context Learning Loop for
Iterative Policy Refinement
• Further Policy Enhancement via Reflection or Expert Intervention

One of the key benefits of our framework is its flexibility: policy improvement does not rely on a strict sequential pipeline. Instead, various methods can be applied iteratively or in combination, with each resulting policy evaluated and scored. The highest-scoring policy is selected as the final output.

Figure 1. Policy Generating Workflow

4.1. Initial Policy Generation

We generated the initial policy using a prompt-based approach, leveraging 20 successful and 20 unsuccessful cases combined with expert-informed editing. The LLM was tasked with producing a structured set of rules based on this minimal dataset. An example of such a refined initial policy is as follows:

Refined Policy for Distinguishing Successful Founders:
1. Educational Background — Advanced degrees (MD, PhD, MBA) from reputable institutions are preferred, showcasing expertise in relevant fields.
2. Industry Experience — Extensive experience in relevant industries (biotechnology, healthcare) with leadership roles in both academic and operational capacities.
3. Professional Network — A robust LinkedIn network (500+ connections), indicating strong engagement and relationships within the biotech community.
4. Founding Experience — Founders should have previous entrepreneurial initiatives or significant contributions to new ventures.
5. Technical and Scientific Competence — Strong technical skills and a proven record of innovation in relevant scientific fields (e.g., drug discovery, mRNA technology).
6. Advisory and Leadership Roles — Recognized as a leader with experience in advisory or governance capacities in prominent organizations.
7. Visibility in the Industry — Engagement in key projects that enhance credibility and professional visibility.
8. Collaborative and Interdisciplinary Work — Demonstrated ability to foster collaborations between academia and industry or across various sectors.
9. Investment and Funding Experience — Proven track record in securing funding or navigating financial structuring for biotech ventures.

4.2. In-Context Learning Loop

This forms the core of our training strategy. For each training data point, we prompt the LLM to incorporate new information into the existing policy. A sample prompt is as follows:

    You are an expert in venture capital tasked with distinguishing successful founders from unsuccessful ones. Based on past experience, you have established the following policy: {policy}.
    Recently, a new case was discovered:
    Founder’s LinkedIn Profile: {row[’clean linkedin profile’]}
    Crunchbase information: {row[’clean cbprofile’]}
    The founder was eventually unsuccessful. Summarize this new case to refine and expand the existing policy. Provide only the updated policies as your response. Your policy should be well-structured and have fewer than 20 rows.

We implemented two complementary strategies:

• Sequential Update: Iterate through each training data point. At each step, generate a new policy from the current policy and the new data. Evaluate both the original and the new policy using a scoring system, and always keep the one with the higher score.
• Parallel Update with Selection: For each training data point, simultaneously generate a new policy from the current policy and the data. All policies are scored, and the top 10% are selected as effective examples. Then a new policy is generated on the basis of these
high-quality samples.

Each strategy has its strengths. The parallel method is faster and performs better in the early stages of training. However, the sequential method typically produces better final policies when aiming for maximum performance.

These two strategies can also be used in a looped fashion to form an iterative refinement cycle. For example, after generating new candidate policies from training examples and scoring them, we can:
1. Select the best-performing policy,
2. Feed it back into another round of sequential or parallel updates using fresh or prioritized examples,
3. Re-score and repeat the process.

This looped architecture enables continual improvement of the policy through multiple passes over the data, allowing the model to integrate increasingly refined distinctions and patterns from both successful and failed cases.

4.3. Further Refinement

After applying the ICL loop, the resulting policies are already strong. However, we further enhance performance by integrating reflections and expert edits. For selected representative examples, we prompt the LLM to generate a one-sentence reflection on why a given founder succeeded or failed:

    You are an expert in venture capital, specializing in evaluating startup founders. Your task is to distinguish successful founders from unsuccessful ones.
    Given the founder’s profile:
    Founder’s LinkedIn Profile: {row[’clean linkedin profile’]}
    Crunchbase information: {row[’clean cbprofile’]}
    The founder was eventually {success status}. In ONE SENTENCE, using the founder’s background, clearly explain the key reason why this founder {success verb}.

These reflections are then used to revise and expand the policy. In addition, human experts can manually modify or reorder the policy rules based on domain expertise, providing an interpretable, high-performance policy ready for deployment.

5.
Experiments

To evaluate the effectiveness of our approach, we conducted experiments using a fixed test set composed of 1,000 failed founders and 100 successful founders as the standard test set. This reflects a natural precision baseline of 9.09%, which aligns with the natural precision observed in our dataset and facilitates the acquisition of additional test data.

Given the nature of the investment domain, where capital efficiency and outcome quality are paramount, we prioritize precision over recall and use the F0.5 score as our primary evaluation metric.

For all evaluations, the LLM was prompted with the subject’s clean cbprofile and clean linkedin profile, along with the decision-making policy where applicable. The model was tasked with making a binary prediction (True or False) for each founder’s likelihood of success.

5.1. Vanilla Test

In the vanilla test, we prompted several LLMs without any policy guidance, relying solely on the internal reasoning capabilities of the model. Importantly, this setup was identical to our complete prompting pipeline except that the decision policy was removed. All other components, including input fields, output format, and overall structure, remained unchanged. The prompt structure was as follows:

    You are an expert in venture capital, specializing in evaluating startup founders. Your task is to distinguish successful founders from unsuccessful ones.
    Given the founder’s profile:
    Founder’s LinkedIn Profile: {row[’clean linkedin profile’]}
    Crunchbase information: {row[’clean cbprofile’]}
    Based
on this information, determine if the founder will succeed. Answer using only one word: True or False.

The performance of three different LLMs (GPT-4o-mini, GPT-4o, and the most powerful, o3) is summarized below:

Model        Accuracy  Precision  Recall  F0.5
GPT-4o-mini  0.653     0.137      0.530   0.160
GPT-4o       0.772     0.202      0.510   0.229
o3           0.769     0.229      0.650   0.263

Table 1. Vanilla prompting baseline

Notably, we observe a consistent progression in capability from GPT-4o-mini to GPT-4o and o3, highlighting the advancement in LLM-based reasoning even without task-specific tuning.

5.2. Policy-Guided Test

We now examine how predictive performance improves when the model is guided by the natural language policy we generated through our ICL procedure. For this evaluation, we used the most lightweight and cost-effective model available: GPT-4o-mini. The inference prompt used is as follows:

    You are an expert in venture capital, specializing in evaluating startup founders. Your task is to distinguish successful founders from unsuccessful ones.
    Here is a policy to assist you: {policy}
    Given the founder’s profile:
    Founder’s LinkedIn Profile: {row[’clean linkedin profile’]}
    Crunchbase information: {row[’clean cbprofile’]}
    Based on this information, determine if the founder will succeed. Answer using only one word: True or False.

We tested two policy versions:

• The initial policy, derived from a small seed set and a single prompt
• The best-performing policy, refined through multiple iterations of our ICL loop

The structure of the prompt remained the same, except the policy field was filled with either the initial or the best policy. The following are the evaluation results for the two policies:

Model           Accuracy  Precision  Recall  F0.5
Initial Policy  0.879     0.229      0.140   0.203
Best Policy     0.917     0.645      0.200   0.446

Table 2.
Performance of initial and best-refined policy

These results demonstrate that an optimized policy substantially improves model performance, particularly in terms of precision and the overall F0.5 score. In contrast, the initial policy provides only marginal gains over the vanilla model. The improvement from the initial to the best policy highlights the effectiveness of our ICL loop.

Despite using the lightweight model GPT-4o-mini, our method achieved results that clearly surpassed both the vanilla baselines and the early-stage policy. This suggests that the iterative refinement and scoring strategy we employ allows even smaller models to simulate domain expertise and apply learned reasoning patterns effectively.

5.3. Test Stability and Results

To assess robustness, we tested our best-performing policy on eight distinct 100-success / 1000-failure test subsets. The results are shown below:

Test Set  Accuracy  Precision  Recall  F0.5
1         0.917     0.645      0.20    0.446
2         0.915     0.652      0.15    0.391
3         0.913     0.583      0.14    0.357
4         0.907     0.450      0.09    0.250
5         0.912     0.552      0.16    0.370
6         0.882     0.250      0.15    0.221
7         0.870     0.205      0.15    0.191
8         0.902     0.400      0.16    0.308
Mean      0.902     0.467      0.15    0.317

Table 3. Robustness of best-performing policy across eight test subsets

While overall performance is strong, we note that Test Sets 6 and 7 exhibit noticeably lower precision and F0.5 scores. Further investigation using GPT-4o in vanilla (non-policy) mode confirmed that model performance on these subsets
is inherently weaker, suggesting variability in data difficulty or distribution shift. Despite this, the mean precision across all eight test sets is approximately 0.467, indicating strong average performance (roughly a 5× lift over the 9.09% baseline) and robustness to moderate dataset variation.

5.4. Evaluation on Real-World Distribution

VC firms typically review 30 to 50 startups per week, averaging around 2,000 per year. To simulate more realistic investment scenarios, we evaluated our best policy on four test sets that closely match real-world outlier¹ rates. Each test set contains 40 successful founders and 2,000 failed ones, approximating a 1.96% success rate, which is the real-world random chance of success.

Table 4 shows that our model performs significantly better in relative terms on the more imbalanced dataset. Achieving an 8.8× lift over random precision suggests that the model is capable of identifying rare outliers under realistic startup success distributions. This demonstrates both robustness and practical utility in real-world venture scenarios.

Test Set  Accuracy  Precision  Recall  F0.5
1         0.975     0.308      0.200   0.278
2         0.973     0.233      0.175   0.219
3         0.953     0.077      0.125   0.083
4         0.943     0.068      0.150   0.077
Mean      0.961     0.172      0.163   0.164

Table 4. Evaluation results on unicorn-base-rate test sets (40 success / 2000 failure)

5.5. Finalizing the Policy with o3

To explore the upper-bound potential of our approach, we employed o3 for the policy generation stage, leveraging its superior capabilities in logical reasoning and technical writing. For all other tasks, including case evaluation and scoring, we continued using GPT-4o-mini to maintain cost efficiency. This hybrid setup proved highly effective, yielding policies with significantly improved performance. The table below presents performance metrics on the same four test sets described in Section 5.4. Notably, our final policy achieved an average precision of around 40% on realistic test sets, representing more than a 20× lift over random precision.
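The aggregate figures quoted above can be checked directly from the reported tables; a quick sketch (precision values copied from Tables 3 and 4):

```python
# Verify the mean precision of Table 3 and the lift figures quoted in the text.
# Per-subset precisions of the best policy on the eight 100/1000 subsets (Table 3).
table3_precision = [0.645, 0.652, 0.583, 0.450, 0.552, 0.250, 0.205, 0.400]
mean_p3 = sum(table3_precision) / len(table3_precision)
assert abs(mean_p3 - 0.467) < 0.001       # matches the reported mean of ~0.467

# Baseline precision of the 100-success / 1000-failure sets, and the resulting lift.
baseline_9 = 100 / 1100                   # 9.09%
lift_9 = mean_p3 / baseline_9             # roughly a 5x lift

# Unicorn-base-rate sets: 40 successes among 2040 founders (Table 4, mean P = 0.172).
baseline_unicorn = 40 / 2040              # ~1.96%
lift_unicorn = 0.172 / baseline_unicorn   # ~8.8x lift over random precision
assert 5.0 < lift_9 < 5.3
assert 8.7 < lift_unicorn < 8.9
```

Both checks reproduce the paper's rounded claims from the tabulated values.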
These results underscore the potential of incorporating more advanced LLMs for policy induction. A targeted use of stronger models at critical stages can unlock significant downstream performance gains in founder evaluation.

¹An outlier is often referred to as a unicorn in the real world.

Test Set  Accuracy  Precision  Recall  F0.5
1         0.981     0.600      0.150   0.375
2         0.980     0.500      0.175   0.365
3         0.976     0.250      0.100   0.192
4         0.975     0.269      0.175   0.243
Mean      0.978     0.405      0.150   0.294

Table 5. Evaluation results on unicorn-base-rate test sets (40 success / 2000 failure)

6. Training Details

6.1. Scoring System

As described in Section 4.2, we adopted a simple but effective scoring system using the same data for both training and evaluation. Specifically, we evaluated candidate policies based on their precision on the training set. This approach is suitable because policies are general, language-based abstractions designed to capture broader reasoning patterns rather than memorize specific examples. Empirically, we found no significant difference in performance trends between using the training set and using a separate validation set. Thus, to reduce data, time, and cost requirements, we standardized our process to evaluate policy quality using the training set itself.

6.2. Training Set Ratio

We experimented with various success-to-failure ratios in the
training data, including 1:5, 1:2, and 1:1. Our findings suggest that the absolute number of successful founders is more important than the overall ratio. Too few success examples reduce learning efficiency, leading to longer convergence times. A balanced 1:1 or lightly skewed ratio yielded the best trade-off between performance and training speed.

6.3. Training Time

A full training round using 120 data points (e.g., 60 success and 60 failure cases) with GPT-4o-mini typically takes 3–4 hours. Because policy generation and scoring involve multiple asynchronous LLM calls, training is I/O-bound and benefits from parallelization. Moreover, optimizing for high policy quality often requires many additional rounds of tuning and manual inspection, especially in the later stages of refinement.

7. Conclusions

Our methodology demonstrates that effective decision policies can be learned with as few as 120 training examples and no gradient-based updates, making it highly accessible. Through a looped ICL mechanism and lightweight scoring based on training precision, we achieved substantial gains in predictive power. The best policy achieved a mean precision of 0.405 across diverse test subsets, an improvement of more than 20× over the real-world precision baseline. This performance surpasses the estimated outlier-picking precision of top-tier VCs by 7.1×.

The transparency of our approach also enables human experts to inspect, intervene, and contribute directly to policy refinement. This makes the system not only accurate but also trustworthy and adaptable. Because all decision logic remains in natural language, our framework can be ported to other domains that rely on structured reasoning, such as grant evaluation, academic hiring, or legal case triage.

Despite these promising results, several limitations remain.
First, the training pipeline is inherently nondeterministic and sensitive to prompt phrasing and ordering, which can result in inconsistent outcomes across runs. Second, although we attempted to mitigate data contamination by excluding entity names, the LLMs may still retain latent exposure to parts of the dataset, potentially inflating the model’s performance. Lastly, as seen in Test Sets 6 and 7, performance can vary substantially across different data segments, indicating that certain founder profiles remain challenging for LLMs to evaluate reliably.

In future work, we plan to explore multi-agent consensus generation, automated policy clustering, and integration with retrieval-augmented generation for more data-aware decision making. Our findings suggest that LLMs, when equipped with simple but powerful prompting strategies, can become practical, interpretable decision-making tools in real-world high-stakes domains such as venture capital.

Impact Statement

This work contributes to the field of interpretable AI by providing a lightweight and explainable framework for high-stakes decision-making using LLMs. In particular, our method supports early-stage startup evaluation with a transparent and editable reasoning process. All datasets used in this study were collected and processed in accordance with relevant ethical standards, and no personally identifiable information was exposed to the model. We believe that this approach promotes responsible and auditable machine learning practices in financial domains.

References

Dong, Y., Mao, Y., Huang, Y., He, X., and Zhang, Y. Survey of in-context learning, 2023. URL https://arxiv.org/abs/2301.00234.

Gilardi, F., Alizadeh, M.,
and Kubli, M. Chatgpt out of the box: How does it fare on political and legal reasoning?, 2023. URL https://arxiv.org/abs/2305.03511.

Graves, A., Wayne, G., and Danihelka, I. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016. doi: 10.1038/nature20101.

Palavalli, V. et al. Leakage in language models: Identifying and mitigating training data contamination, 2024. URL https://arxiv.org/abs/2407.08716.

Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou, D. Self-consistency improves chain of thought reasoning in language models, 2023. URL https://arxiv.org/abs/2203.11171.

Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E. H., Le, Q. V., and Zhou, D. Finetuned language models are zero-shot learners, 2021. URL https://arxiv.org/abs/2109.01652.

Xiong, S., Ihlamur, Y., Alican, F., and Yin, A. O. Gptree: Towards explainable decision-making via llm-powered decision trees, 2024. URL https://arxiv.org/abs/2411.08257.

Zhao, W. X., Liu, H., Wu, J.-R., et al. Unicorn or bust? evaluating venture capital performance with public data, 2024. URL https://arxiv.org/abs/2402.12345.

Zhou, D., Dai, Z., Tsvetkov, Y., and Tu, L. A. Least-to-most prompting enables complex reasoning in large language models, 2022. URL https://arxiv.org/abs/2205.10625.

A. Appendix: Best Policy and Experimental Details

A.1. Training Configuration

The best-performing policy was generated using the following configuration:

Training Round       4
Training Size        240 (120 success / 120 failure)
Training Ratio       1:1
Training Range       0–120 (success and failure)
Policy Length Limit  20 lines

Table 6. Training configuration for best policy

A.2.
Best Policy

Below is the final version of the best-performing policy generated after four rounds of in-context refinement:

Updated Policies for Distinguishing Successful Founders:
1. Industry Fit & Scalability: Prioritize founders building scalable tech, AI, or deep-tech products over service-heavy models.
2. Sector-Specific Innovation & Patent Verification: Require defensible IP with issued or published patents validated through public databases.
3. Quantifiable Outcomes, Exits & (for Bio/Med) Regulatory Milestones: Demand audited revenue, exits, or documented IND/clinical-phase progress—not just pre-clinical claims.
4. Funding & Investor Validation: Look for credible, recent third-party capital or follow-on rounds; stale or absent fundraising signals stagnation.
5. Press & Recognition Depth: Favor independent, reputable coverage within the last 24 months and cross-checked with filings; outdated or missing press is a red flag.
6. Product vs. Service Assessment: Score higher for automated, high-margin SaaS, platform, or therapeutics with clear IP; pure services rank lower.
7. Market Traction Specificity: Require cohort-level data on growth, retention, margins; name-dropping clients or “pilot” studies alone don’t qualify.
8. Location Advantage with Proof: Presence in a tech/biotech hub must align with active local partnerships, accelerators, or ecosystem leadership roles.
9. Crisis Management & Pivot History: Validate data-backed pivots that preserved or grew value during downturns.
10. Sustainable 3–5-Year Roadmap: Roadmap must tie to market trends, capital needs, and measurable milestones.
11. Skill Alignment & Visibility:
Match proven technical, operational, or sales expertise to venture stage; generic “entrepreneur” labels penalize.
12. Consistent Role Tenure & Title Concentration: Favor ≥4-year focus in one core venture; multiple simultaneous C-suite/advisory titles or role inflation is a downgrade.
13. Network Quality & Engagement: Measure depth and actual engagement of investor and domain-expert ties over raw connection counts.
14. Third-Party Validation & References: Require testimonials, case studies, regulatory filings, or audits corroborating performance and scientific claims.
15. Investment Ecosystem Participation: Credit active, recent angel or fund roles that demonstrate curated deal flow and learning loops.
16. Differentiated Value Proposition: Demand a clear, data-supported statement of competitive advantage and defensibility.
17. Tech Currency & Relevance: Ensure the founder’s expertise, tech stack, and go-to-market playbook are current; legacy success alone is insufficient.
18. Data Consistency Across Platforms: Cross-verify LinkedIn, Crunchbase, press, and regulatory filings; inconsistencies or absent data trigger deeper diligence or rejection.

A.3. Vanilla GPT-4o Performance on Difficult Sets

To better understand the performance gap, we ran GPT-4o in vanilla (no policy) mode on Test Sets 1 (for comparison), 6, and 7:

• Test Set 1 (Vanilla GPT-4o): Precision = 0.202, F0.5 = 0.229
• Test Set 6 (Vanilla GPT-4o): Precision = 0.153, F0.5 = 0.178
• Test Set 7 (Vanilla GPT-4o): Precision = 0.134, F0.5 = 0.157

These results confirm that the lower scores for Sets 6 and 7 are not unique to our policy-based model and may reflect structural difficulty in those segments.
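The F0.5 scores reported throughout follow the standard F-beta formula with beta = 0.5, which weights precision more heavily than recall. A quick sketch verifying two rows of Table 1:

```python
# F-beta combines precision and recall; beta < 1 favors precision:
# F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
def f_beta(precision, recall, beta=0.5):
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Spot-check against Table 1 (vanilla prompting):
# o3 has P = 0.229, R = 0.650, reported F0.5 = 0.263.
assert abs(f_beta(0.229, 0.650) - 0.263) < 0.001
# GPT-4o has P = 0.202, R = 0.510, reported F0.5 = 0.229 (agrees to within rounding).
assert abs(f_beta(0.202, 0.510) - 0.229) < 0.002
```

The same function reproduces the policy-guided and appendix F0.5 values from their precision/recall pairs.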
arXiv:2505.21432v1 [cs.RO] 27 May 2025

Hume: Introducing System-2 Thinking in Visual-Language-Action Model

Haoming Song (1,2)*, Delin Qu (2)*, Yuanqi Yao (2), Qizhi Chen (3,2), Qi Lv (2), Yiwen Tang (2), Modi Shi (4), Guanghui Ren (4), Maoqing Yao (4), Bin Zhao (2), Dong Wang (2)†, Xuelong Li (2)
(1) Shanghai Jiao Tong University (2) Shanghai AI Laboratory (3) Zhejiang University (4) AgiBot

[Figure 1: We present Hume, a dual-system vision-language-action model exploring human-like thinking capabilities for dexterous robot control. Equipped with value-guided System-2 thinking and cascaded action denoising, the model achieves superior complex reasoning and control capabilities. The model achieves state-of-the-art performance across a diverse range of evaluations and shows significant advancement in complex robot control tasks.]

Abstract

Humans practice slow thinking before performing actual actions when handling complex tasks in the physical world. This thinking paradigm has recently achieved remarkable advancement in boosting Large Language Models (LLMs) to solve complex tasks in digital domains. However, the potential of slow thinking remains largely unexplored for robotic foundation models interacting with the physical world. In this work, we propose Hume: a dual-system Vision-Language-Action (VLA) model with value-guided System-2 thinking and cascaded action denoising, exploring human-like thinking capabilities of Vision-Language-Action models for dexterous robot control.
System 2 of Hume implements value-guided thinking by extending a Vision-Language-Action backbone with a novel value-query head that estimates the state-action value of predicted actions. Value-guided thinking is conducted by repeatedly sampling multiple action candidates and selecting one according to its state-action value. System 1 of Hume is a lightweight reactive visuomotor policy that takes the System 2-selected action and performs cascaded action denoising for dexterous robot control. At deployment time, System 2 performs value-guided thinking at a low frequency while System 1 asynchronously receives the System 2-selected action candidate and predicts fluid actions in real time. We show that Hume outperforms existing state-of-the-art Vision-Language-Action models across multiple simulation benchmarks and real-robot deployments.

∗Authors contributed equally: haomingsong@sjtu.edu.cn. †Corresponding author: dongwang.dw93@gmail.com. Preprint. Under review.

1 Introduction

A wise man proportions his belief to the evidence.
— David Hume, An Enquiry Concerning Human Understanding

Creating generalist robots to perform various tasks like humans in the physical world has been a long-standing goal [31,4,24,28,56,29,55,41,47,52]. Cognitive psychologists have revealed that humans conduct a deep, deliberate form of thinking when tackling complex problems, such as mathematical proofs or dinner making. This slow and reflective thought process is known as System-2 thinking, while the fast thinking process that relies on intuition is called System-1 thinking [21]. Inspired by the dual-process theory of human cognition, thinking and reasoning steps have been introduced to enhance LLMs' capability to solve complex problems in digital domains, achieving significant results.
Intuitively, generalist robots in the physical world also require similar System-2 slow thinking capabilities to perform dynamic, dexterous, and complex tasks, while intuition-based fast System-1 thinking alone cannot handle delicate, tenuous, and fallible robot action prediction. Therefore, a crucial question for generalist robot policies designed to solve complex robotic tasks in diverse scenarios is: how can effective System-2 slow thinking be enabled in a generalist robot policy for accurate action prediction? However, equipping a generalist robot policy
with System-2 thinking capabilities poses two primary challenges. First, thinking and reasoning techniques have mainly been demonstrated in the text modality, whereas delicate, tenuous robot actions lack clear and consistent semantics, making it difficult to apply semantic Chain-of-Thought (CoT) [49] thinking as in LLMs. Second, a generalist robot policy needs to perform dexterous, complex tasks in real time, so effectively balancing the "slowness" of System-2 thinking against the "fastness" demanded by robot control is essential. Recently, embodied chain-of-thought reasoning (ECoT) [53] enabled a VLA model to predict helpful intermediate reasoning text before choosing robot actions, improving the generalization and performance of VLA models. However, performing intermediate reasoning steps significantly slows down policy inference. Moreover, dual-system architectures have been incorporated into Vision-Language-Action models in several works [17,54,6,50,2,44]. A typical work, Helix [12], adopts a pretrained VLM backbone as System 2 while employing a smaller network as System 1 to output high-frequency actions for real-time control. These approaches use either latent vectors or detailed language instructions as bridges to communicate between the two systems. Despite their faster inference speed, these models' System 2 does not conduct effective thinking and reasoning to guide System-1 action prediction. In this work, we introduce a dual-system Vision-Language-Action model, Hume, which equips the VLA model with System-2 thinking capabilities through value-guided repeat sampling and cascaded action denoising. System 2 of Hume is built on top of a pre-trained vision-language model (VLM) and attaches two specialized heads: a flow matching denoising head to predict robot actions, and a value-query head to estimate the state-action value of predicted actions.
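The resulting control loop — System 2 repeatedly samples candidates, the value-query head scores them, and System 1 refines a segment of the winner — can be sketched in a few lines. This is an illustrative sketch only: the helpers `denoise_candidates`, `q_value`, and `refine` are hypothetical stand-ins for the denoising head, the value-query head, and the System 1 policy.

```python
import numpy as np

def value_guided_step(obs, denoise_candidates, q_value, refine, n=5, h=15):
    """One System-2 thinking step followed by System-1 refinement (sketch).

    denoise_candidates(obs, n): returns n action chunks of shape (H, dof),
        each stopped at a different noise level (repeat sampling).
    q_value(obs, chunk): returns a scalar state-action value estimate.
    refine(obs, sub_chunk): System 1 finishes denoising a sub-chunk.
    """
    candidates = denoise_candidates(obs, n)           # System 2: repeat sampling
    values = [q_value(obs, a) for a in candidates]    # value-query head scoring
    best = candidates[int(np.argmax(values))]         # Best-of-N selection
    return refine(obs, best[:h])                      # System 1: cascaded denoising
```

In the full system the same loop runs asynchronously, with System 2 refilling a queue of selected chunks while System 1 drains it.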
It processes the robot's observation and language instruction to predict a long-horizon action chunk through the action denoising head. Subsequently, the corresponding state-action value is estimated conditioned on the predicted action chunk. Value-guided thinking is conducted by repeatedly sampling multiple action chunks and selecting the one with the highest value. System 1 of Hume takes one short segment of the selected long-horizon action chunk from System 2, the current visual observation, and the robot states, then conducts cascaded action denoising to generate the final fluid robot actions via a separate lightweight diffusion policy. At deployment time, System 2 performs value-guided thinking at a low frequency (4 Hz) while System 1 asynchronously receives the System 2-selected action chunk and predicts fluid actions in real time (90 Hz). Equipped with the proposed value-guided thinking and cascaded action denoising, Hume explores a powerful System-2 slow thinking paradigm to enhance VLA models. We extensively evaluate and ablate Hume on both standard simulation benchmarks and real-robot platforms, including 21 real-world robot settings and 3 simulation environments. To validate Hume's capability to solve complex robot control tasks with the assistance of System-2 thinking, the test scenarios include variations in viewpoint, texture, lighting, layout, unseen objects, and unseen environments, as well as the most challenging humanoid robot control tasks. In summary, the main contributions of this work are threefold:
• We propose Hume, a dual-system generalist robot policy that explores a System-2 slow thinking paradigm for Vision-Language-Action models.
• We introduce novel
value-guided thinking and cascaded action denoising to seamlessly combine a low-frequency System 2 and a high-frequency System 1, resulting in effective thinking and reasoning in various robot deployments.
• Hume achieves state-of-the-art performance on multiple benchmarks and real-robot tests, with a +4.4% increase in success rate over π0 on the LIBERO [34] benchmark, +25.9% on the SimplerEnv benchmark [32], and a +12.9% improvement in real-world deployments.

2 Related Work

Dual-System Vision-Language-Action Models. Recently, several studies [31,4,24,28,56,29,55,41,6,12,17] have extended VLMs for robot control. RT-2 [4] fine-tunes PaLI-X [9] with discretized action tokens, while OpenVLA [24] adapts the Prismatic VLM [22] on the OXE dataset [11]. π0 [3] integrates PaliGemma with flow matching for continuous actions. To address efficiency and integration challenges, dual-system architectures have emerged. HiRT [54] runs VLMs at low frequency while maintaining high-frequency vision-based control for real-time interaction. DexVLA [50] uses diffusion action experts with embodiment curriculum learning across multiple robot types. GR00T N1 [2] features an end-to-end trained dual system specifically for humanoid robots. Gemini Robotics [47] builds on Gemini 2.0 with specialized models for control and reasoning. HiRobot [44] enables processing of complex instructions with situated feedback. These approaches improve efficiency, success rates, and adaptability compared to monolithic architectures.

System 2 and System-2 Thinking. Numerous studies [49,51,45,18,43,37,38,35,46] have explored System-2 reasoning approaches to enhance LLMs' problem-solving capabilities. Chain-of-Thought [49] introduces intermediate reasoning steps before producing answers, while Tree-of-Thoughts [38] explores multiple solution paths with self-verification. Reflexion [45] enables verbal reflection on previous attempts.
System-2 Thinking frameworks explicitly model human-like deliberative processes. SETS [8] combines self-critique with multiple reasoning paths and majority voting. SC-MCTS [14] combines multiple reward models for more robust tree search. Addressing efficiency concerns, O1-Pruner [36] introduces a length-penalty loss to create concise reasoning processes. While these approaches have improved reasoning in language tasks, their application to vision-language-action models remains largely unexplored.

Cascaded Denoising. Cascaded denoising [19] was first proposed as a diffusion model that generates higher-resolution images through a cascading approach: starting with a diffusion model at low resolution, it continuously upsamples the generated images to obtain high-resolution results. f-DM [15] applies inter-stage transformations for progressive signal restoration through function learning. Cas-DM [1] cascades noise-prediction and image-prediction modules to integrate perceptual losses into diffusion models. SCDM [7] cascades generation across spectral dimensions, reconstructing hyperspectral bands progressively. HiFI [20] cascades consistent-resolution patches for memory-efficient high-resolution frame interpolation. CDM-VTON [27] employs two-stage cascading for virtual try-on applications. While these approaches have advanced capabilities in image domains, the application of cascaded denoising to integrate System 1 and System 2 remains largely unexplored.

3 Methodology

In this section, we describe the Hume model architecture and its training and deployment strategy in detail. The process of value-guided System-2 thinking with the help of the state-action value estimator is described in Sec. 3.1. Next, we detail how the System 1 and System 2 modules cooperate asynchronously through the proposed cascaded action denoising in Sec. 3.2. Finally, the multi-stage training and
deployment strategy of the model is explained in Sec. 3.3.

3.1 Value-Guided System-2 Thinking

As shown in Fig. 2, the System 2 module is instantiated as a vision-language-action (VLA) model built upon a pretrained Vision-Language Model.

Figure 2: Overview of Hume. Hume contains two systems working asynchronously. Given the observation, System 2 of Hume first generates N candidate action chunks with different noise levels, and the best-of-N candidate with the highest Q-value is selected as the optimal candidate A_t^{τ*}, which is segmented and conveyed to System 1 for continuous action denoising.

Formally, the inputs of the System 2 module consist of RGB images i_t = [I_t^1, ..., I_t^n] at time step t, natural language instructions ℓ_t, and robot state information s_t. As in VLA models, we first augment the VLM backbone with an action denoising head to learn a mapping function F(·) that generates candidate robot actions A_t from the observation o_t, i.e., A_t = F(o_t). Moreover, to empower Hume with System-2 thinking ability, we attach another value-query head to the VLM backbone, designed to estimate the state-action value Q_θ(q_t, A_t) of the candidate robot action A_t.

Candidate Actions Generation. The candidate robot actions are generated by an action denoising head that models the data distribution p(A_t | o_t), where o_t consists of images i_t, language instructions ℓ_t, and robot state information s_t.
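The candidate generation just introduced works by integrating a learned vector field with forward Euler steps, stopping candidate n early at τ = 1 − (n − 1)ξ so that candidates carry different amounts of residual noise. The sketch below illustrates that schedule with a toy field; `field`, the default step size, and ξ = 0.05 are illustrative assumptions, not the paper's learned model.

```python
import numpy as np

def generate_candidates(a0, field, n_candidates=5, xi=0.05, delta=0.1):
    """Integrate da/dτ = field(a, τ) by forward Euler, stopping candidate n
    at τ = 1 - (n - 1)·ξ so later candidates keep more residual noise."""
    candidates = []
    for n in range(1, n_candidates + 1):
        tau_end = 1.0 - (n - 1) * xi
        a, tau = a0.copy(), 0.0
        while tau < tau_end - 1e-9:
            step = min(delta, tau_end - tau)  # the last step may be shorter
            a = a + step * field(a, tau)
            tau += step
        candidates.append(a)
    return candidates
```

With a field that points toward a fixed target, the first candidate (integrated all the way to τ = 1) ends up closest to the target, while later candidates remain noisier — exactly the spread the value-query head later arbitrates over.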
It is implemented as a transformer-based flow matching denoising process that predicts the remaining action noise v_θ(A_t^τ, o_t) in the "noisy action" A_t^τ, where τ ∈ [0, 1] is the flow matching time step representing the noise level of the action. Starting from random noise A_t^0 ∼ N(0, I), the denoising head generates actions by gradually removing noise from A_t^0 to A_t^1 step by step using the forward Euler method:

    A_t^{τ+δ} = A_t^τ + δ · v_θ(A_t^τ, o_t),

where δ is the size of the denoising step. In practice, we use 10 denoising steps, corresponding to δ = 0.1. During training, for a ground-truth action A_t sampled from the dataset, the denoising head is optimized by minimizing the loss between the actual remaining noise ϵ − A_t and the network output v_θ(A_t^τ, o_t), given the observation o_t and the noisy action A_t^τ = τ·A_t + (1 − τ)·ϵ as input. After training, conditioned on the same observation o_t at timestep t, the action denoising head generates N candidate robot action chunks A_t^{τ_n} ∈ {A_t^{τ_1}, A_t^{τ_2}, ..., A_t^{τ_N}} with different noise levels by integrating the learned vector field v_θ(A_t^τ, o_t) from τ = 0 to τ = 1 − (n − 1)·ξ separately:

    A_t^{τ_n} = ∫_0^{1−(n−1)ξ} v_θ(A_t^τ, o_t) dτ + A_t^0,    (1)

where ξ controls the noise gap between adjacent candidates, and A_t^0 is the initial action sampled from the normal distribution A_t^0 ∼ N(0, I). Note that n ∈ {1, 2, ..., N}, so most of the generated candidate actions A_t^{τ_n} are not fully denoised.

State-Action Value Estimation. The state-action values are estimated with the proposed value-query head, built on the same VLM backbone, via learning a latent-conditioned Q function. The value-query head is composed of two critic networks estimating state-action values and
one actor network assisting the training of the critic networks. Specifically, a special query token q_t is introduced and appended at the end of the VLM input sequence; it is a learnable token with the same embedding dimension as the language tokens. Then, an action chunk (either the ground-truth action A_t or a denoised candidate action A_t^{τ_n}) is combined with this special query token q_t and fed into the value-query head. Due to its last position in the input sequence, the query token q_t attends to all previous tokens and aggregates the necessary information from the VLM inputs, i.e., the current RGB images i_t = [I_t^1, ..., I_t^n] at time step t, the natural language instructions ℓ_t, and the robot state information s_t. In this way, the value-query head estimates the state-action value Q_θ(q_t, A_t^{τ_n}) of the action chunk A_t^{τ_n} conditioned on the input query token q_t. We visualize the state-action value map of candidate actions in Fig. 8 and provide a detailed analysis in Sec. 6.2. This value-query head is trained on a pre-collected robot demonstration dataset with ground-truth actions A_t via offline RL [26]. We construct the training dataset D using the reward function following [25], where the rewards of the last 3 transitions in a robot episode are defined as +1 and the rest as 0. During training, we use the calibrated Q-learning algorithm [39] to optimize the value-query head.

Value-Guided Thinking. The System-2 value-guided thinking is implemented with a Best-of-N selection strategy, and the selected action chunk is conveyed to System 1 for cascaded action denoising. Specifically, conditioned on the same observation, the action denoising head generates N candidate action chunks {A_t^{τ_1}, A_t^{τ_2}, ..., A_t^{τ_N}} with different noise levels. Then, these candidates are passed to the value-query head to estimate their state-action values.
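The sparse reward scheme used to build the offline RL dataset is simple to state precisely: within each demonstration episode, the last three transitions receive reward +1 and all earlier ones receive 0. A minimal sketch:

```python
def label_rewards(episode_len, k=3):
    """Sparse success reward: +1 for the last k transitions of an episode, 0 otherwise."""
    return [1 if t >= episode_len - k else 0 for t in range(episode_len)]
```

Note that under this rule an episode shorter than k transitions is labeled entirely +1; how Hume handles that edge case is not specified in the text.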
Guided by the estimated state-action values, we select the action with the highest value as the optimal candidate A_t^{τ*} transferred to System 1:

    A_t^{τ*} = argmax_{A_t^{τ_i}} Q(q_t, A_t^{τ_i}),  i = 1, ..., N.

3.2 Cascaded Dual-system Action Denoising

To achieve rapid, reactive robot control, the System 1 module needs to be lightweight and fast at inference. In detail, System 1 consists of a DINOv2-small visual encoder and a lightweight transformer for cascaded action denoising. Given the selected candidate action chunk A_t^{τ*} from System 2, the System 1 module takes the observation õ_{t+kh} (including the current image i_t = {I_t^1, ..., I_t^n} and robot state s_t) and sub-action chunks Ã_t segmented from the selected candidate A_t^{τ*} as input, and produces refined robot actions by continuously denoising the sub-action chunks Ã_t. Specifically, at timestep t, the selected action chunk from System 2 is Ã_t = [a_t, a_{t+1}, ..., a_{t+H−1}], and Ã_t is segmented into K := H/h sub-action chunks {Ã_t, Ã_{t+h}, ..., Ã_{t+(K−1)h}} with a horizon of h. System 1 sequentially performs cascaded denoising on these sub-action chunks with observation õ_{t+kh}. Note that System 1 is much faster than System 2, so System 1 can finish cascaded denoising on all sub-action chunks before the next action chunk arrives. The System 1 module is trained with the same flow matching loss used by the action denoising head of System
2:

    L_ω(θ) = E_{p(Ã_{t+kh} | õ_{t+kh}), q(Ã^ω_{t+kh} | Ã_{t+kh})} || v_θ(Ã^ω_{t+kh}, õ_{t+kh}) − u(Ã^ω_{t+kh} | Ã_{t+kh}) ||²,    (2)

where the superscript ω denotes the flow matching timestep in System 1. Note that, during training and inference, the candidate action chunks generated by System 2 are not fully denoised, i.e., Ã^{τ*}_{t+kh} ≠ Ã^1_{t+kh}, requiring continuous denoising for accurate action prediction. Following cascaded denoising [19] from image generation, System 1 refines the action by integrating the learned vector field from ω = 0 to ω = 1. Instead of starting from random noise, for the k-th sub-action chunk the integration process of System 1 starts from the sub-action chunk Ã^{τ*}_{t+kh}:

    Ã^ω_{t+kh} = ∫_0^ω v_θ(Ã^ω_{t+kh}, õ_{t+kh}) dω + Ã^{τ*}_{t+kh},    (3)

where v_θ(Ã^ω_{t+kh}, õ_{t+kh}) is the vector field learned by System 1. With the same forward Euler method used in the System-2 action denoising head, System 1 produces the final denoised action Ã^1_{t+kh} with 10 denoising steps (corresponding to δ = 0.1). After all K sub-action chunks have been processed by System 1, System 2 generates a newly selected action chunk A^{τ*}_{t+H}, and System 1 continues to refine segments from it.

3.3 Training and Deployment Strategy

The training process of Hume contains two stages. In the first stage, the VLM backbone and the action denoising head of System 2 are trained using a flow matching loss similar to eq. (2), ensuring that System 2 can predict reliable actions. In the second stage, the VLM backbone and the action denoising head of System 2 are frozen, while System 1 and the value-query head of System 2 are trained from scratch.
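The segmentation and cascaded refinement of eq. (3) can be sketched as follows: the chunk is split into K = H/h sub-chunks, and System 1 integrates a field from ω = 0 to ω = 1 starting from the partially denoised sub-chunk rather than from Gaussian noise. Here `field` is a toy stand-in for System 1's learned v_θ, under the paper's 10-step Euler scheme (δ = 0.1).

```python
import numpy as np

def segment(chunk, h):
    """Split an (H, dof) action chunk into K = H // h sub-chunks of horizon h."""
    H = chunk.shape[0]
    assert H % h == 0, "sub-horizon must divide the chunk length"
    return [chunk[k * h:(k + 1) * h] for k in range(H // h)]

def cascade_refine(sub_chunk, obs, field, steps=10):
    """Finish denoising a System-2 sub-chunk: forward Euler from ω = 0 to ω = 1,
    starting from the partially denoised sub-chunk (eq. (3) sketch)."""
    a = sub_chunk.copy()
    delta = 1.0 / steps          # 10 steps → δ = 0.1, as in the text
    for i in range(steps):
        omega = i * delta
        a = a + delta * field(a, obs, omega)
    return a
```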
The training objective of the value-query head is to minimize the Bellman error with a regularization term:

    min_θ α·R(θ) + (1/2)·E_{q_t, A_t, q'_t ∼ D} [ ( Q_θ(q_t, A_t) − B^π Q̄(q_t, A_t) )² ],    (4)

where R(θ) is a calibrated conservative regularizer that aims to prevent overestimation of the Q-values, R(θ) := E_{q_t ∼ D, a ∼ π}[max(Q_θ(q_t, A_t), Q^µ(q_t, A_t))] − E_{q_t, A_t ∼ D}[Q_θ(q_t, A_t)], and B^π Q̄(q_t, A_t) is the backup operator applied to the delayed target Q-network Q_θ̄.

Figure 3: Experiments setup on WidowX, AgiBot G-1 and Franka Robot. We evaluate Hume across 3 simulation environments and 3 different real-world robotic platforms, covering 15 robot learning scenarios and 21 real-world manipulation tasks.

During the inference phase, the System 2 and System 1 modules cooperate through an asynchronous mechanism to boost the overall control frequency. Specifically, at the initial timestep t, the action denoising head of System 2 generates N = 5 action chunks A^{τ_n}_0 ∈ {A^{τ_1}_0, A^{τ_2}_0, ..., A^{τ_N}_0} with a horizon of H = 30 as candidates at 4 Hz. The selected optimal action candidate A^{τ*}_t is stored in a shared queue. Then Ã^{τ*}_t, a sub-action chunk with horizon h = 15, is segmented from the first h steps of A^{τ*}_t and passed into System 1. System 1 removes the remaining noise from Ã^{τ*}_t and produces the fully denoised action Ã_t at 6
Hz, executing all h = 15 actions on the real robot immediately and resulting in an overall 90 Hz robot action control frequency. After the robot executes all K = H/h = 2 sub-action chunks in Ã_t, System 1 repeatedly gets the newest selected action chunk from the shared queue for subsequent action denoising. Due to the different working frequencies of System 2 and System 1, they cooperate asynchronously to achieve a balance between slow, human-like thinking and fast, reactive real robot control.

4 Experiment

Our experiments aim to evaluate whether Hume, as a dual-system Vision-Language-Action model, can effectively utilize System-2 thinking to solve complex robot control tasks. Our extensive experiments evaluate the model's ability to perform complex manipulation tasks on various robotic platforms in both simulated and real-world environments, including humanoid robots. Hume is compared with previous state-of-the-art generalist policies and with alternative designs of various model components. Specifically, our experiments aim to answer the following research questions:
1. How is Hume's capability to learn multiple tasks on standard simulation benchmarks?
2. Can Hume effectively solve a variety of complex robot control tasks in the real world?
3. To what extent do value-guided thinking and cascaded denoising improve performance?
To answer these questions, as shown in Fig. 3, we tested Hume's capabilities across a diverse range of representative robot learning scenarios, including 3 simulation environments and 3 different real-world robotic platforms, covering 15 robot learning scenarios and 21 real-world manipulation tasks. First, we evaluated Hume's capability to finish multiple tasks in the SimplerEnv [32] and LIBERO [34] simulation benchmarks, validating that Hume's design can effectively accomplish multiple tasks in simulated environments.
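The deployment-time numbers above are mutually consistent: System 1 finishes one h = 15 sub-chunk per tick at 6 Hz, so the robot receives 6 × 15 = 90 actions per second, and each System-2 chunk of H = 30 yields K = 2 sub-chunks. A small sanity-check sketch using these figures from the text:

```python
def control_rates(s1_hz=6, sub_horizon=15, chunk_horizon=30):
    """Derive the overall action rate and sub-chunks per System-2 chunk."""
    actions_per_second = s1_hz * sub_horizon   # 6 Hz × 15 actions = 90 Hz
    k = chunk_horizon // sub_horizon           # K = H / h = 2
    return actions_per_second, k
```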
Second, we extensively tested Hume's capability to control 3 real-world robotic platforms, WidowX, Franka, and AgiBot G-1, in completing tasks of varying difficulty, effectively validating Hume's generalization capability with respect to object positions, language descriptions, deformable objects, and long-horizon operations in the real world. Finally, we conducted comprehensive ablation experiments on the model design in both simulation and real-world environments to validate the design choices in Hume.

4.1 Multitask Evaluation on Simulation Benchmarks

Evaluation Setups and Baselines. To assess the robustness of Hume under diverse environmental variations, we employ the SimplerEnv [32] simulation benchmark and evaluate visual matching and variant aggregation metrics. SimplerEnv features WidowX and Google Robot setups, providing diverse manipulation scenarios with varied lighting, color, texture, and robot camera pose conditions, bridging the visual appearance gap between real and simulated environments. We compare our model with the latest state-of-the-art generalist manipulation policies, including RT-1 [5], RT-1-X [11], RT-2-X [11], Octo [40], OpenVLA [24], HPT [48], TraceVLA [56], RoboVLM [30], SpatialVLA [42], GR00T [2], π0-FAST [41], and π0 [3].

Evaluation Results. Tab. 1 presents the LIBERO [34] experimental results. We observe that Hume can be effectively adapted to tasks in the LIBERO environments, as it obtains the highest average success rate of 98.6% and the first rank across all policies. In particular, Hume achieves a remarkable 96.7% success rate (+11.5% over π0, +6.1% over
GR00T) on the LIBERO-Long task, which consists of long-horizon tasks, demonstrating the model's strong long-term planning capabilities. Tab. 2 presents the SimplerEnv experimental results on WidowX and Google Robot tasks. Hume also achieves state-of-the-art performance on the WidowX multitasks, with an average success rate of 72.6%, a significant improvement over all current generalist manipulation policies (+32.5% over π0, +39.6% over GR00T, +64.8% over OpenVLA). Similarly, Hume achieves an average success rate of 76.4% (+19.6% over π0) on the Google Robot tasks. In summary, Hume demonstrates its versatility as a generalist robot control policy, achieving better performance across various tasks.

Table 1: LIBERO Benchmark Results. We present the success rate (SR) and standard error for each method across four task suites, averaged over three random seeds with 500 trials. Hume achieves the highest average success rate and ranking, followed by OpenVLA-OFT and π0.

Method                 | LIBERO-Spatial    | LIBERO-Object     | LIBERO-Goal       | LIBERO-Long       | Average
                       | SR (↑)    Rank (↓)| SR (↑)    Rank (↓)| SR (↑)    Rank (↓)| SR (↑)    Rank (↓)| SR (↑)    Rank (↓)
Diffusion Policy [10]  | 78.5±1.1%   6     | 87.5±0.7%   6     | 73.5±1.2%   6     | 64.8±1.3%   5     | 76.1±0.7%   6
OpenVLA-OFT [23]       | 97.6±0.9%   2     | 98.4±0.8%   3     | 97.9±1.0%   2     | 94.5±1.3%   2     | 97.1±0.6%   2
π0 [3]                 | 96.8±0.8%   3     | 98.8±0.9%   2     | 95.8±1.1%   3     | 85.2±1.2%   4     | 94.2±0.9%   3
π0-FAST [41]           | 96.4±0.7%   4     | 96.8±0.7%   5     | 88.6±1.0%   5     | 60.2±1.4%   6     | 85.5±1.0%   4
GR00T N1 [2]           | 94.4±0.9%   5     | 97.6±1.0%   4     | 93.0±1.2%   4     | 90.6±1.0%   3     | 93.9±1.1%   5
Hume                   | 98.6±0.2%   1     | 99.8±0.1%   1     | 99.4±0.3%   1     | 96.7±0.9%   1     | 98.6±0.7%   1

Table 2: SimplerEnv evaluation across different policies on robot tasks. The zero-shot results denote the performance of models pre-trained on the OXE dataset [11].
SimplerEnv on WidowX Robot Tasks (per task: Grasp / Success; last column: overall average success):

Model            | Put Spoon on Towel | Put Carrot on Plate | Stack Green Block | Put Eggplant in Basket | Average
                 | Grasp    Success   | Grasp    Success    | Grasp    Success  | Grasp    Success       |
RT-1-X [11]      | 16.7%    0%        | 20.8%    4.2%       | 8.3%     0%       | 0.0%     0%            | 6.3%
Octo-Base [40]   | 34.7%    12.5%     | 52.8%    8.3%       | 31.9%    0%       | 66.7%    43.1%         | 31.3%
Octo-Small [40]  | 77.8%    47.2%     | 27.8%    9.7%       | 40.3%    4.2%     | 87.5%    56.9%         | 43.9%
OpenVLA [24]     | 4.1%     0%        | 33.3%    0%         | 12.5%    0%       | 8.3%     4.1%          | 7.8%
RoboVLM [30]     | 54.2%    29.2%     | 25.0%    25.0%      | 45.8%    12.5%    | 58.3%    58.3%         | 38.5%
SpatialVLA [42]  | 25.0%    20.8%     | 41.7%    20.8%      | 58.3%    25.0%    | 79.2%    70.8%         | 42.7%
π0 [3]           | 45.8%    29.1%     | 25.0%    0%         | 50.0%    16.6%    | 91.6%    62.5%         | 40.1%
π0-FAST [41]     | 62.5%    29.1%     | 58.5%    21.9%      | 54.0%    10.8%    | 83.3%    66.6%         | 48.3%
Hume             | 73.8%    58.0%     | 83.3%    66.7%      | 83.2%    45.5%    | 97.8%    72.8%         | 72.6%

SimplerEnv on Google Robot Tasks (Visual Matching | Variant Aggregation; columns: Pick Coke Can, Move Near, Open/Close Drawer, Average):

Model                | VM: Pick   Move   Drawer  Avg   | VA: Pick   Move   Drawer  Avg
RT-1 [5] (Begin)     | 2.7%   5.0%   13.9%  7.2%       | 2.2%   4.0%   6.9%   4.4%
RT-1 [5] (15%)       | 71.0%  35.4%  56.5%  54.3%      | 81.3%  44.6%  26.7%  56.2%
RT-1 [5] (Converged) | 85.7%  44.2%  73.0%  74.6%      | 89.8%  50.0%  32.3%  63.3%
RT-1-X [11]          | 56.7%  31.7%  59.7%  53.4%      | 49.0%  32.3%  29.4%  39.6%
RT-2-X [11]          | 78.7%  77.9%  25.0%  60.7%      | 82.3%  79.2%  35.3%  65.6%
Octo-Base [40]       | 17.0%  4.2%   22.7%  16.8%      | 0.6%   3.1%   1.1%   1.1%
OpenVLA [24]         | 16.3%  46.2%  35.6%  27.7%      | 54.5%  47.7%  17.7%  39.8%
TraceVLA [56]        | 28.0%  53.7%  57.0%  42.0%      | 60.0%  56.4%  31.0%  45.0%
RoboVLM [30]         | 77.3%  61.7%  43.5%  63.4%      | 75.6%  60.0%  10.6%  51.3%
SpatialVLA [42]      | 86.0%  77.9%  57.4%  73.8%      | 88.0%  72.7%  41.8%  70.7%
HPT [48]             | 56.0%  60.0%  24.0%  46.0%      | —      —      —      —
π0 [3]               | 72.7%  65.3%  38.3%  58.8%      | 75.2%  63.7%  25.6%  54.8%
π0-FAST [41]         | 75.3%  67.5%  42.9%  61.9%      | 77.6%  68.2%  31.3%  59.0%
Hume                 | 97.0%  80.4%  58.8%  78.7%      | 98.0%  79.7%  44.6%  74.1%

4.2 Real-World Embodiment Control

Real-world WidowX Evaluation. Fig. 4 presents the results of the real-world evaluation on the WidowX robot platform. We compared representative single-system and dual-system VLA models on multiple tasks. We observe that, in simple task scenarios (#1-2), most policies exhibit some generalizability, successfully completing tasks in unseen environments. However, in more complex tasks (#3-7), policies such as GR00T, π0-FAST, and OpenVLA struggle with manipulation, frequently encountering grasp failures such as the inability to accurately grasp or place target objects. Even when these policies attempt to recover from failures, they often fall into error states that prevent successful execution.

Figure 4: Real-world evaluation on WidowX Robot tasks. We evaluate Hume across 10 zero-shot tasks with varying backgrounds, poses, and motion distractors. Hume achieves the highest average success rate, surpassing π0 and all other generalist manipulation policies in comparative evaluations.

Figure 5: Evaluation on Franka and Agibot G-1 Robot. We evaluate Hume across 11 real-world common tasks on the Franka and Agibot G-1 robots.

In contrast, Hume leverages value-guided thinking to effectively recover from failures, demonstrating superior performance on various complex tasks. That is, when Hume falls into a wrong state, it can evaluate multiple candidate actions and select another, correct trajectory forward. As a result, even if failures occur during initial attempts, Hume can adjust its trajectory and successfully complete the task on subsequent tries (please refer to the supplementary video for more details), achieving strong robustness across various complex unseen tasks (91% average success rate), improving by +12% over π0 and +33% over OpenVLA.

Real-world Franka and Agibot G-1 Evaluation. Fig. 5 presents the results of the real-world evaluation on the Franka and Agibot G-1 robot platforms. The task design incorporates multiple real-world daily long-horizon tasks, deformable object manipulation, tool usage, and other challenging scenarios, with further validation conducted on both the Franka robot and the humanoid robot Agibot G-1. We observe that the long-term planning capability of the System-2 thinking employed by Hume helps solve long-horizon tasks. For example, in Agibot's long-horizon deformable object manipulation task (#Fold Shorts), where the model must make the robot fold two pairs of shorts, Hume achieves a success rate of 88%, improving by +15% over π0. In
the complex long-horizon task (#Pour Water), Hume achieves a success rate of 82%, significantly improving by +20% over π0 and +60% over GR00T. Additionally, Hume achieves an average success rate of 87% across various tasks on the Franka robot, improving by +14.75% over π0 and +37.25% over OpenVLA.

4.3 Ablations on Design Decisions

In this section, we conduct value-guided thinking and cascaded denoising ablations across multiple tasks in both simulation and real-world environments, with results presented in Tab. 3 and Fig. 6.

Figure 6: Real-world Ablations on WidowX, Franka and Agibot G-1 Robot. We conducted ablation studies of Hume across 3 different real-world robotic platforms, covering 15 robot learning scenarios and 21 real-world manipulation tasks.

Table 3: Ablations in LIBERO and SimplerEnv tasks. We conducted ablation studies of Hume across LIBERO [34] and SimplerEnv [32] on WidowX and Google Robot tasks. Models are trained with mixtures from the OXE dataset [11] in the SimplerEnv experiments.

LIBERO Tasks:
Setting                    | LIBERO-Spatial | LIBERO-Object | LIBERO-Goal | LIBERO-Long | Average
[1] Hume                   | 98.6±0.2%      | 99.8±0.1%     | 99.4±0.3%   | 96.7±0.9%   | 98.6±0.7%
[2] w/o Cascaded Denoising | 95.4±0.8%      | 97.2±0.5%     | 96.8±0.6%   | 94.2±0.7%   | 95.9±0.5%
[3] w/o Repeat Sampling    | 93.6±0.4%      | 94.8±0.2%     | 95.2±0.3%   | 91.4±0.9%   | 93.8±0.5%
[4] w/o System 1           | 90.2±0.6%      | 91.8±0.9%     | 92.4±0.7%   | 84.6±0.2%   | 89.8±0.6%
[5] w/o Value-Query Head   | 85.2±0.2%      | 86.9±0.4%     | 88.2±0.5%   | 79.4±0.6%   | 84.9±0.5%

SimplerEnv on WidowX Robot Tasks (per task: Grasp / Success; last column: overall average success):
Setting                    | Put Spoon on Towel | Put Carrot on Plate | Stack Green Block | Put Eggplant in Basket | Average
[1] Hume                   | 73.8%   58.0%      | 83.3%   66.7%       | 83.2%   45.5%     | 97.8%   72.8%          | 72.6%
[2] w/o Cascaded Denoising | 70.2%   55.6%      | 78.1%   62.5%       | 79.6%   42.1%     | 93.9%   67.3%          | 68.7%
[3] w/o Repeat Sampling    | 68.8%   49.2%      | 76.8%   57.9%       | 75.4%   39.9%     | 90.2%   66.9%          | 65.6%
[4] w/o System 1           | 64.7%   42.3%      | 74.3%   53.4%       | 71.2%   36.2%     | 87.3%   63.1%          | 61.6%
[5] w/o Value-Query Head   | 58.2%   36.8%      | 67.6%   47.3%       | 66.9%   31.9%     | 83.6%   57.8%          | 56.3%

SimplerEnv on Google Robot Tasks (Visual Matching | Variant Aggregation; columns: Pick Coke Can, Move Near, Open/Close Drawer, Average):
Setting                    | VM: Pick   Move   Drawer  Avg   | VA: Pick   Move   Drawer  Avg
[1] Hume                   | 97.0%  80.4%  58.8%  78.7%      | 98.0%  79.7%  44.6%  74.1%
[2] w/o Cascaded Denoising | 95.3%  75.4%  57.5%  76.1%      | 94.8%  77.8%  42.2%  71.6%
[3] w/o Repeat Sampling    | 94.0%  70.8%  54.2%  73.0%      | 92.2%  74.6%  39.3%  68.7%
[4] w/o System 1           | 92.4%  65.4%  52.8%  70.2%      | 89.9%  70.8%  35.9%  65.5%
[5] w/o Value-Query Head   | 89.9%  63.8%  48.9%  67.5%      | 85.4%  65.9%  30.2%  60.5%

Effect of Cascaded Denoising. According to the ablation results (#1 vs. #2), the proposed cascaded denoising employs System 1 to remove the remaining noise, enabling the robot to perform precise and dexterous movements. Models without cascaded denoising use System 2 to complete the entire denoising process, leading to all candidates
being sampled from the same distribution. This consequently narrows the range of candidates the model can choose from, resulting in suboptimal candidate selection. The models suffer a significant performance drop in variant aggregation, showing an average decline of -3.2% across multiple SimplerEnv tasks, -2.7% across LIBERO tasks, and -19% in real-world robot tasks. Furthermore, we compared Hume with models that directly use the candidate with the highest value output by System 2 (#1 vs. #4). Without System 1 to remove the remaining noise, models w/o System 1 cannot perform precise and dexterous movements, ultimately leading to an average decrease of -9.8% across SimplerEnv tasks, -8.8% across LIBERO tasks, and -63% in real-world robot tasks.

Effect of Value-Guided Thinking. According to the ablation results (#1 vs. #5), the proposed value-guided thinking enables System 2 to select the most valuable candidate from multiple noisy candidates to pass to System 1, which effectively improves Hume's performance on robot control tasks. Models w/o value-guided thinking randomly select 1 out of 5 candidate actions generated by System 2 to pass to System 1. Since the candidates carry different levels of noise, this random selection strategy may hand harmful candidates to System 1, ultimately resulting in a significant decrease in success rate: an average decline of -14.95% across multiple SimplerEnv tasks and -13.7% across LIBERO tasks. Notably, this performance degradation is even more pronounced in the more complex real-world robot scenarios, with a striking average decrease of -78% across the various tasks in real-world environments. Additionally, according to the ablation results (#1 vs. #3), we also compared Hume with a model that generates only one candidate from System 2 and passes it to System 1.
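For reference, each LIBERO-average decline quoted in this section follows directly from the mean values in Table 3; a quick arithmetic check (values copied from the table):

```python
# LIBERO average success rates from Table 3 (mean values, %).
libero_avg = {
    "Hume": 98.6,
    "w/o Cascaded Denoising": 95.9,
    "w/o Repeat Sampling": 93.8,
    "w/o System 1": 89.8,
    "w/o Value-Query Head": 84.9,
}

# Decline of each ablated variant relative to the full model.
declines = {name: round(avg - libero_avg["Hume"], 1)
            for name, avg in libero_avg.items() if name != "Hume"}
print(declines)
# {'w/o Cascaded Denoising': -2.7, 'w/o Repeat Sampling': -4.8,
#  'w/o System 1': -8.8, 'w/o Value-Query Head': -13.7}
```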
Models w/o repeat sampling, having no additional candidates to choose from, cannot leverage the estimated value to make a more helpful action selection. This also leads to performance degradation, with an average decrease of -6.2% across SimplerEnv tasks, -4.8% across LIBERO tasks, and -37% in real-world robot tasks.

5 Conclusion and Limitations

In this paper, we present Hume, a dual-system Vision-Language-Action (VLA) model that explores human-like thinking capabilities for a generalist robot policy. Hume implements value-guided System-2 thinking by performing effective best-of-N selection with state-action value estimation, and integrates Systems 1 and 2 through the proposed cascaded action denoising to achieve rapid and fluid control for dexterous tasks. Through extensive experiments on both simulation benchmarks and real robot platforms, we validate that Hume outperforms current state-of-the-art models across various robot tasks, especially when failures occur in complex tasks at deployment time, pointing to a promising research direction for generalist robot policies.

Limitations. First, value-guided System-2 thinking is limited by the quality of the sampled candidate action chunks; how to include more high-value candidates among the samples is a question worth further study. Second, the estimated state-action value is not well aligned with semantics, which remains a research direction for better value learning. Last, the System-2 thinking paradigm implemented in Hume is still relatively naive, and future work can explore more sophisticated paradigms such as tree search, self-correction, and reinforcement learning approaches.
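For concreteness, the value-guided System-2 thinking and cascaded action denoising summarized above can be sketched as a single inference step. This is a toy illustration with stand-in networks; all function names are hypothetical and do not reflect Hume's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
N, HORIZON, ACT_DIM = 5, 16, 7  # best-of-N, chunk length, action dimension

def system2_sample(obs, n=N):
    # Stand-in for System 2's denoising head: n partially denoised candidate chunks.
    return rng.normal(size=(n, HORIZON, ACT_DIM))

def value_query_head(obs, candidates):
    # Stand-in for the value-query head: one scalar Q(q_t, A_t) per candidate chunk.
    return -np.square(candidates).sum(axis=(1, 2))  # toy value: prefer small actions

def system1_refine(obs, chunk):
    # Stand-in for System 1's cascaded denoising: remove the remaining noise.
    return 0.9 * chunk

def hume_step(obs):
    candidates = system2_sample(obs)             # repeat sampling
    values = value_query_head(obs, candidates)   # value-guided thinking
    best = candidates[np.argmax(values)]         # best-of-N selection
    return system1_refine(obs, best)             # cascaded denoising -> executed chunk

action_chunk = hume_step(obs=None)  # placeholder observation
assert action_chunk.shape == (HORIZON, ACT_DIM)
```

Removing any one line of `hume_step` recovers the corresponding ablation: a single sample (w/o repeat sampling), a random pick (w/o value-guided thinking), or executing `best` directly (w/o System 1).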
Hume: Introducing System-2 Thinking in Visual-Language-Action Model
Supplementary Material

Abstract

This supplementary material accompanies the main paper by providing a more detailed visualization analysis of Hume's workflow, as well as implementation details and additional experimental results:
▷ Sec. 6: Details Hume's workflow by visualizing the value-guided thinking and cascaded action denoising processes.
▷ Sec. 7: Video demonstration and anonymous link: https://hume-vla.github.io.
▷ Sec. 8: Implementation details including loss functions and hyperparameters.
▷ Sec. 9: Experimental details in simulation and the real world, including experiment setup and detailed results.

6 Hume Visualization Analysis

In this section, we first demonstrate the detailed inference process of Hume through the dexterous Push-T task in Sec. 6.1, then visualize two key designs of Hume, value-guided System-2 thinking and cascaded action denoising, in Sec. 6.2 and Sec. 6.3 to provide a comprehensive understanding of Hume.

6.1 Hume Workflow Visualization

To illustrate the detailed inference workflow of Hume, we evaluate Hume on the Push-T task, which needs complex and contact-rich control to push the T block precisely. The Push-T task requires the policy to control a blue dot on a two-dimensional plane to push a gray T-shaped block into the green area. Since the action space is two-dimensional, we can visualize the actions predicted by Hume as trajectories on a plane.

Figure 7: Visualization of Hume in Push-T (panels at Step = 0, 30, 60, 90). We visualize the candidate actions A_t^{τ_i} sampled from System 2 with dashed lines and the final executed action Ã_{t+kh}^1 from System 1 with a solid line. The intensity of the line colors indicates the magnitude of the state-action values Q(q_t, A_t^{τ_i}) of the candidates.

As shown in Fig. 7, we illustrate the detailed inference process of Hume in the Push-T task.
Specifically, during inference we sample the candidate actions A_t^{τ_i} from System 2 at time steps t = 0, 30, 60, 90 with a horizon of H = 30, producing 10 candidate actions at each timestep. The selected action with the highest value is conveyed to System 1 and further denoised into the executed action, drawn with a solid line. We can see that the final denoised action from System 1 is smoother and more delicate for accomplishing the task.

6.2 Value-Guided Thinking Visualization

Fig. 8 visualizes the estimated state-action values of the candidate actions at different time steps in the LIBERO-GOAL setting. Since the action space in LIBERO is 7-dimensional, we use Principal Component Analysis (PCA) to project the high-dimensional actions onto a two-dimensional plane.

Figure 8: Value Map of Candidate Actions (panels at Step = 0, 1, 50, 100; axes PCA1/PCA2). The candidate actions A_t^{τ_i} sampled from System 2 and the ground-truth actions A_t^{GT} are projected into the same two-dimensional space through Principal Component Analysis (PCA). The intensity of colors indicates the magnitude of the state-action values Q(q_t, A_t^{τ_i}) of the candidate actions.

For each projected candidate action, we use color to represent its corresponding state-action value estimated by the value-query head. These projected points, together with their corresponding state-action values, generate the value map shown in the figure. In the value map, yellow represents actions with high state-action values, while purple represents actions
with low state-action values. Additionally, we also show the ground-truth actions from the collected demonstrations in the value map for comparison. By observing the positions of the ground-truth actions in the value map, we find that they are consistently located in high-value regions, which demonstrates that System 2's value-query head is capable of making reasonable estimates of the state-action values. We also find that the ground-truth actions are not located at the highest-value positions in the value map, which shows that System 2's value-query head has not overfit to the ground-truth actions but is able to estimate appropriate state-action values across the whole action space, guiding System 2 to select the optimal candidate action. Furthermore, by comparing value maps across different timesteps, we observe that adjacent timesteps (step = 0 and step = 1) have similar value maps, while value maps at distant timesteps (step = 50 and step = 100) exhibit significant differences. This demonstrates that the value-query head can reasonably adjust its estimation of the state-action values by capturing real-world dynamics, guiding System 2 to make smooth choices for robot control.

6.3 Cascaded Action Denoising Visualization

Fig. 9 visualizes the cascaded action denoising process in LIBERO-OBJECT. For the 7 dimensions of the action space (X, Y, Z, Roll, Pitch, Yaw, Gripper), we pair them into combinations for illustration, i.e., X-Y, X-Z, Y-Z, and R-P. The plotted points are down-sampled from the actual denoised action sequences for accomplishing one task, where the blue points represent the candidate actions A_t^{τ_i} sampled from System 2's denoising head, the red points represent the optimal candidates A_t^{τ*} selected by System 2, and the orange points represent the final actions Ã_{t+kh}^1 denoised by System 1. The red and orange points are generally distributed within the region covered by the blue points, while the distribution of the orange points shows slight differences from the red points.
This demonstrates that cascaded action denoising actually refines the actions selected by System 2 with higher-frequency new observation inputs in System 1, achieving accurate, fluid, and delicate robot control.

7 Video Demonstration and Anonymous Link

We provide a video of Hume and an anonymous link (please refer to https://hume-vla.github.io for more details) to demonstrate the deployment on real-world robot platforms.

Figure 9: Cascaded Action Denoising on Different Action Dimensions. We visualize denoised actions grouped into coordinate pairs (X-Y, X-Z, Y-Z, and R-P) from the 7-dimensional action space (X, Y, Z, Roll, Pitch, Yaw, Gripper). For each group, we display the candidate actions A_t^{τ_i} sampled from System 2, the optimal candidate A_t^{τ*}, and the final action Ã_{t+kh}^1 denoised by System 1.

8 Implementation Details

In this section, we provide further implementation details of Hume, including the training details of the value-query head and the hyperparameters used by the model during training.

8.1 Value Objective Functions

Formally, the goal of the value-query head is to learn Q_θ(q_t, A_t), the optimal estimate of the state-action value function
\[ Q^\pi(q_t, A_t) = \frac{1}{1-\gamma} \sum_t \mathbb{E}_{A_t \sim \pi(q_t)}\big[\, \gamma^t r(q_t, A_t) \mid q_{t_0} = q_t,\ A_{t_0} = A_t \,\big] \]
in a Markov Decision Process M = (S, A, P, r, ρ, γ). Here S and A denote the state and action spaces, while P(q′ | q, A) and r(q_t, A_t) are the dynamics and reward functions. ρ(q) denotes
the initial state distribution, and γ ∈ (0, 1) denotes the discount factor. The training objective of the value-query head is to minimize the Bellman error with a regularization term R(θ), which is defined as:
\[ \min_\theta \; \alpha R(\theta) + \frac{1}{2}\, \mathbb{E}_{q_t, A_t, q'_t \sim D}\Big[ \big( Q_\theta(q_t, A_t) - \mathcal{B}^\pi \bar{Q}(q_t, A_t) \big)^2 \Big], \tag{5} \]
The second term in eq. (5) is the standard TD error [33, 13, 16], where Q_θ(q_t, A_t) is the output of the value-query head and B^π Q̄(q_t, A_t) is the backup operator applied to the delayed target Q-network Q̄:
\[ \mathcal{B}^\pi \bar{Q}(q_t, A_t) := r(q_t, A_t) + \gamma\, \mathbb{E}_{A'_t \sim \pi(A'_t \mid q'_t)}\big[ \bar{Q}(q'_t, A'_t) \big], \]
which can be calculated from the offline dataset D = {(q_t, A_t, r, q'_t)}. The term R(θ) in eq. (5) is a calibrated conservative regularizer that aims to prevent overestimation of the Q-values for out-of-distribution (OOD) actions by penalizing those Q-values while compensating for this pessimism on actions seen in the training dataset; α is a hyperparameter controlling the conservative penalty. Specifically, the regularization term R(θ) is defined as:
\[ R(\theta) := \mathbb{E}_{q_t \sim D,\, A_t \sim \pi}\big[ \max\big( Q_\theta(q_t, A_t),\ Q_\mu(q_t, A_t) \big) \big] - \mathbb{E}_{q_t, A_t \sim D}\big[ Q_\theta(q_t, A_t) \big] \tag{6} \]
where Q_µ(q_t, A_t) is the value function of the calibrated policy µ.

8.2 Training and Inference Hyperparameters

LIBERO. In LIBERO, Hume takes images from a third-person camera and a wrist camera, plus the robot state, as input. We set the chunk size of System 2's output to 16 and the chunk size of System 1's output to 8. The model was trained using 8 GPUs with a batch size of 16.

SimplerEnv. In SimplerEnv, Hume takes the image from a third-person camera and the robot state as input. In Bridge tasks, we set the chunk size of System 2's output to 8 and the chunk size of System 1's output to 4. In Google Robot tasks, we set the chunk size of System 2's output to 4 and the chunk size of System 1's output to 2. The model was trained using 8 GPUs with a batch size of 32.

Franka. In the real-world experiment on the Franka-Emika-Panda, Hume takes images from a third-person camera and the robot state as input. We set the chunk size of System 2's output to 16 and the chunk size of System 1's output to 8.
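Across all of these settings, System 1's chunk is half of System 2's, so System 1 re-denoises shorter sub-chunks with fresher observations while System 2 plans less often. This 2:1 chunked execution can be sketched schematically (a toy illustration, not Hume's actual implementation; all names are stand-ins):

```python
# Schematic 2:1 chunked execution: System 2 proposes one 16-step chunk,
# System 1 refines it in two 8-step halves using newer observations.
S2_CHUNK, S1_CHUNK = 16, 8

def system2_plan(obs):
    # Stand-in: one 16-step candidate action chunk (step indices here).
    return list(range(S2_CHUNK))

def system1_refine(obs, sub_chunk):
    # Stand-in: higher-frequency refinement of an 8-step sub-chunk.
    return [("refined", a, obs) for a in sub_chunk]

executed = []
plan = system2_plan(obs=0)                      # System 2 runs once per 16 steps
for k in range(S2_CHUNK // S1_CHUNK):           # System 1 runs twice per plan
    fresh_obs = k                               # a newer observation per sub-chunk
    sub = plan[k * S1_CHUNK:(k + 1) * S1_CHUNK]
    executed.extend(system1_refine(fresh_obs, sub))

assert len(executed) == S2_CHUNK
```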
The model was trained using 4 GPUs with a batch size of 32.

WidowX. In the real-world experiment on the WidowX 250s, Hume uses the same training settings as in the Bridge simulation environment of SimplerEnv: it takes images from a third-person camera and the robot state as input, the chunk size of System 2 is 8 and the chunk size of System 1 is 4, and it was trained using 8 GPUs with a batch size of 32.

Agibot G-1. In the real-world experiment on the Agibot G-1, Hume takes images from the head camera and the wrist cameras on both arms, along with the robot state, as input. We set the chunk size of System 2's output to 30 and the chunk size of System 1's output to 15. The model was trained using 8 GPUs with a batch size of 8.

Table 4: SimplerEnv evaluation results across different policies on Google Robot tasks. Success rates (%); Pick Coke Can and Open/Close Drawer are reported per variant followed by their average, then Move Near and the overall average.

Variant Aggregation:
#setting | Pick Coke Can (3 variants, Avg.) | Move Near | Open/Close Drawer (2 variants, Avg.) | Overall
RT-1 (begin) | 2.2 | 1.3 | 3.1 | 2.2 | 4.0 | 0.5 | 13.2 | 6.9 | 4.4
RT-1 (15%) | 92.0 | 70.4 | 81.3 | 81.2 | 44.6 | 21.2 | 32.3 | 26.8 | 50.9
RT-1 (converged) | 96.9 | 76.0 | 96.4 | 89.8 | 50.0 | 27.0 | 37.6 | 32.3 | 57.4
RT-1-X | 56.9 | 20.4 | 69.8 | 49.0 | 32.3 | 6.9 | 51.9 | 29.4 | 36.9
RT-2-X | 82.2 | 75.4 | 89.3 | 82.3 | 79.2 | 33.3 | 37.2 | 35.3 | 65.6
Octo-Base | 0.5 | 0.0 | 1.3 | 0.6 | 3.1 | 0.0 | 2.1 | 1.1 | 1.6
OpenVLA | 71.1 | 27.1 | 65.3 | 54.5 | 47.7 | 15.8 | 19.5 | 17.7 | 40.0
TraceVLA | — | — | — | 60.0 | 56.4 | — | — | 31.0 | 49.1
RoboVLM | 93.8 | 49.8 | 83.1 | 75.6 | 60.0 | 2.6 | 18.5 | 10.6 | 48.7
SpatialVLA | 93.3 | 78.2 | 92.4 | 88.0 | 72.7 | 28.6 | 55.0 | 41.8 | 67.5
π0 | 82.0 | 58.0 | 85.6 | 75.2 | 63.7 | 18.0 | 33.2 | 25.6 | 54.8
π0-FAST | 84.0 | 63.0 | 85.8 | 77.6 | 68.2 | 24.0 | 38.6 | 31.3 | 59.0
Hume | 99.0 | 96.0 | 99.0 | 98.0 | 79.7 | 38.0 | 51.2 | 44.6 | 74.1

Visual Matching:
#setting | Pick Coke Can (3 variants, Avg.) | Move Near | Open/Close Drawer (2 variants, Avg.) | Overall
RT-1 (begin) | 5.0 | 0.0 | 3.0 | 2.7 | 5.0 | 0.0 | 27.8 | 13.9 | 7.2
RT-1 (15%) | 86.0 | 79.0 | 48.0 | 71.0 | 35.4 | 46.3 | 66.7 | 56.5 | 54.3
RT-1 (converged) | 96.0 | 90.0 | 71.0 | 85.7 | 44.2 | 60.1 | 86.1 | 73.1 | 67.7
RT-1-X | 82.0 | 33.0 | 55.0 | 56.7 | 31.7 | 29.6 | 89.1 | 59.4 | 49.3
RT-2-X | 74.0 | 74.0 | 88.0 | 78.7 | 77.9 | 15.7 | 34.3 | 25.0 | 60.5
Octo-Base | 21.0 | 21.0 | 9.0 | 17.0 | 4.2 | 0.9 | 44.4 | 22.7 | 14.6
OpenVLA | 27.0 | 3.0 | 19.0 | 16.3 | 46.2 | 19.4 | 51.8 | 35.6 | 32.7
TraceVLA | — | — | — | 28.0 | 53.7 | — | — | 57.0 | 46.2
RoboVLM | 94.0 | 47.0 | 91.0 | 77.3 | 61.7 | 33.3 | 53.1 | 43.2 | 60.7
SpatialVLA | 85.0 | 76.0 | 97.0 | 86.0 | 77.9 | 50.0 | 64.8 | 57.4 | 73.8
HPT | — | — | — | 56.0 | 60.0 | — | — | 24.0 | 46.7
π0 | 76.0 | 57.0 | 85.1 | 72.7 | 65.3 | 30.0 | 46.6 | 38.3 | 58.8
π0-FAST | 79.0 | 61.0 | 85.9 | 75.3 | 67.5 | 34.0 | 51.8 | 42.9 | 61.9
Hume | 99.0 | 93.0 | 99.0 | 97.0 | 80.4 | 52.0 | 65.6 | 58.8 | 78.7

9 Experiment Details

In this section, we provide experiment details, including the evaluation setup and additional test results. In Sec. 9.1, we provide detailed descriptions of the test setup for the standard simulation benchmarks and the implementation details of all comparison methods, along with the detailed test results. In Sec. 9.2, we introduce the detailed setup and testing standards of the test tasks on the real robot platforms and provide the detailed test results.

9.1 Simulation Benchmark Details

Simulation Benchmark Setup.
In LIBERO, all methods use a third-person camera, a wrist camera, and the robot state as input. The results of Diffusion Policy and OpenVLA-OFT are from the technical report of OpenVLA-OFT [23], the results of π0 [3] and π0-FAST [41] are provided by Physical Intelligence's open-source repository, and the results of GR00T N1 are obtained by training and testing on the LIBERO dataset using its open-source code. In SimplerEnv, the test results of RT-1 [5], RT-1-X [11], RT-2-X [11], Octo [40], OpenVLA [24], HPT [48], TraceVLA [56], RoboVLM [30], and SpatialVLA [42] come from their official technical reports. The results of GR00T [2], π0-FAST [41], and π0 [3] are obtained by our fine-tuning and testing them on the corresponding datasets using their official open-source code.

Detailed Results of Simulation Benchmark. Since the Google Robot tasks in SimplerEnv include various test settings such as environment layout, object position, and texture variations, we provide more detailed test results in Tab. 4.

Figure 10: Evaluation Setup of WidowX 250s. We evaluated models with 9 tasks on WidowX 250s to verify the model's learning ability on a large multi-task manipulation dataset. (Tasks shown: put the eggplant in the basket; put the carrot on the plate; close the microwave; lift the red pepper; put the green cup on the pink cloth; put the purple cup on the white plate.)

9.2 Real-World Evaluation Details

Real-World Evaluation Setup. In this section, we provide detailed descriptions of the task setups on three real-world robot platforms: WidowX 250s, Franka-Emika-Panda, and Agibot G-1. As shown in Fig. 10, the detailed task specifications on WidowX 250s are:

• Put eggplant in the basket: A complex task requiring the robot to identify and pick an eggplant from a sink containing multiple vegetables, then place it in a yellow basket. This task evaluates object discrimination and spatial awareness.
• Put carrot on the plate: The robot needs to perform a pick-and-place task by grasping a carrot from the sink and placing it on a plate, assessing both grasping precision and placement accuracy.
• Close microwave: The robot must close a toy microwave door positioned at various angles (30°, 45°, 60°, and 90°), testing the model's capability to manipulate articulated objects in different configurations.
• Lift red pepper: A basic pick task requiring the robot to grasp and lift a red pepper from the sink, designed to evaluate the model's object localization accuracy.
• Put green cup on the pink cloth: This task suite comprises two scenarios testing vertical spatial understanding. In the first scenario, the robot grasps a green cup positioned either on a stove or elevated on a yellow block. In the second scenario, the cup is placed either at the bottom of a sink or elevated on a bowl. This variation in object heights challenges the model's ability to adapt its manipulation strategy to spatial configurations.
• Put purple cup on the white plate: The robot must identify and transfer a purple cup to a white plate within a sink containing multiple plates, testing color recognition and precise manipulation.

Figure 11: Evaluation Setup of Franka-Emika-Panda. We evaluated policies on the Franka robot with 7 tasks, including instruction following, articulated manipulation, and pick-and-place tasks. (Tasks shown: Kitchen Banana — put the banana in the basket; Kitchen Pot — place the black pot on the cutting board; Tea — push down the handle of the teapot; Plush Toy — place the red plush toy on the green toy car; Three Cube (blue/green/red) — place the blue/green/red cube on the green toy car.)

As shown in Fig. 11, the detailed task specifications on Franka-Emika-Panda are:

• Kitchen Banana: A pick-and-place task where the robot must transfer a banana from the table to a basket. With only 50 human demonstrations, this task evaluates model performance with limited data.
• Kitchen Pot: Another pick-and-place task requiring the robot to grasp a white bowl from the right side of
the table and place it on a cutting board, trained with 100 human demonstrations.
• Tea: The robot must push a teapot handle from a perpendicular to a parallel position relative to the desktop, using its gripper tip. This task tests the manipulation of revolute joints and includes 50 human demonstrations.
• Plush Toy: The robot must identify and grasp the nearer of two plush toys and place it on a green car. To rigorously assess spatial understanding, we systematically vary the relative positions of the plush toys during testing.
• Three Cube: An instruction-following task where the robot must identify and place a specifically colored cube (red, green, or blue) onto a green car. Training included 50 demonstrations for each color, totaling 150.

Figure 12: Evaluation Setup of AgiBot G-1. We evaluated policies on four challenging tasks on AgiBot G-1 to test the ability to control a humanoid robot completing dexterous and long-horizon tasks. (Tasks shown, with their instructions: Restock the hanging basket area — with the shopping cart between the robot and the snack shelf, take the snacks to be restocked from the box in the cart; Pour water — lift the kettle, pour two-thirds of the water into the cup, and place the kettle back on the coaster, all with the right arm; Fold Shorts — fold the shorts twice using both arms; Pass the water — pick up the green mineral water with the right arm and pass it to the guest.)

As shown in Fig. 12, the detailed task specifications on AgiBot G-1 are:

• Restock the hanging basket area: This task requires the robot to grab snacks from a cart and place them at a designated location on the shelf.
This task includes different types of snacks, different placements on the shelf, and interfering objects in the cart to verify the generalization of the model.
• Pour water: This is a long-horizon task that requires the robot to first grasp the handle of the teapot, lift the teapot and accurately pour the water from the teapot into the cup, and then put the teapot back on the mat once the cup is full. This task involves changing the material of the cup as well as the positions of the teapot and the cup. To complete this challenging task, the robot needs to accurately identify the location of the cup and pour water into it, and it needs to correctly sense the water level in the cup to avoid spilling water on the table.
• Fold Shorts: This is a long-horizon task that requires the collaboration of both arms and involves the manipulation of deformable objects. In this task, the robot first needs to accurately identify the position of the shorts and use the grippers on both arms to fold the shorts a first time. After completing the first fold, the robot needs to confirm the current state of the shorts again and perform a second fold. During testing of this task, we used shorts of different colors and materials to verify the model's long-horizon manipulation capability for deformable objects of different shapes.
• Pass the water: This task is designed to test the model's ability to follow instructions and collaborate with humans. The robot needs to grab the correct bottle according to the language instruction and hand it to the human. We used different types of bottles in the test, and we also arbitrarily adjusted the position of the bottles on the white table shown in the figure to verify the robot's generalization to object positions.

Figure 13: Failure Recovery of Hume. When a failure occurs, such as missing the grasping position, other policies fall into a failure state, while Hume selects the correct action through value-guided thinking, recovers from the failure state, and successfully completes the task. (Panels show GR00T and π0 falling into unrecoverable error states, while Hume chooses the correct action trajectory among candidates and executes the task successfully.)

Detailed Results of Real-World Evaluation. To ensure the reliability of the real-robot test results, we used standardized evaluation metrics. For simple tasks such as "Lift red pepper" on WidowX, we tracked only the overall task completion success rate. For more general tasks, such as "Kitchen Banana" on Franka, we tracked both partial and overall success rates, i.e., the robot's success in grasping the banana and in placing it at the designated location.
For more complex long-horizon tasks, such as "Pour water" on Agibot G-1, we tracked the success rate of each subtask and the overall task success rate, including the robot's success in grasping the kettle, pouring water from the kettle into the cup, and placing the kettle on the pad. In addition, to verify the model's generalization capability, we conducted tests under different lighting conditions, with various types of objects, different environmental layouts, and diverse language instructions. For example, in the "Put green cup on the pink cloth" task on the WidowX robot, the initial position of the green cup was significantly adjusted, and in the "Fold Shorts" task on the Agibot G-1, shorts of different colors and materials were used for testing. As shown in Fig. 13, we also observed that Hume has the ability to recover from failures during evaluation. Compared to other methods, Hume can recover from failure states more often and complete the task successfully. Benefiting from value-guided thinking, when the model falls into a failure state, such as missing the correct grasp position, Hume can select the correct action from a variety of candidates and guide the robot to gradually recover from the failure. For common imitation policies such as π0 and GR00T, when
they enter an error state, since observations of the error state do not appear in their training datasets, these models are easily trapped in the error state and cannot recover, resulting in failure of the overall task. For Hume, although error-state observations likewise do not appear in its training dataset, repeatedly sampling candidate actions that are not completely denoised can include a correct action that recovers from the error state; Hume then selects the correct candidate based on the state-action value estimated by the value-query head, thereby achieving a strong ability to recover from failures.

References

[1] Jie An, Zhengyuan Yang, Jianfeng Wang, Linjie Li, Zicheng Liu, Lijuan Wang, and Jiebo Luo. Bring metric functions into diffusion models. arXiv preprint arXiv:2401.02414, 2024.
[2] Johan Bjorck, Fernando Castañeda, Nikita Cherniadev, Xingye Da, Runyu Ding, Linxi Fan, Yu Fang, Dieter Fox, Fengyuan Hu, Spencer Huang, et al. GR00T N1: An open foundation model for generalist humanoid robots. arXiv preprint arXiv:2503.14734, 2025.
[3] Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, et al. A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024.
[4] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023.
[5] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. RT-1: Robotics Transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[6] Qingwen Bu, Hongyang Li, Li Chen, Jisong Cai, Jia Zeng, Heming Cui, Maoqing Yao, and Yu Qiao. Towards synergistic, generalized, and efficient dual-system for robotic manipulation. arXiv preprint arXiv:2410.08001, 2024.
[7] Bowen Chen, Liqin Liu, Chenyang Liu, Zhengxia Zou, and Zhenwei Shi. Spectral-cascaded diffusion model for remote sensing image spectral super-resolution. IEEE Transactions on Geoscience and Remote Sensing, 2024.
[8] Jiefeng Chen, Jie Ren, Xinyun Chen, Chengrun Yang, Ruoxi Sun, and Sercan Ö. Arık. SETS: Leveraging self-verification and self-correction for improved test-time scaling, 2025.
[9] Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, et al. PaLI-X: On scaling up a multilingual vision and language model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
[10] Cheng Chi, Zhenjia Xu, Siyuan Feng, Eric Cousineau, Yilun Du, Benjamin Burchfiel, Russ Tedrake, and Shuran Song. Diffusion Policy: Visuomotor policy learning via action diffusion. In Proceedings of Robotics: Science and Systems (RSS), 2023.
[11] Open X-Embodiment Collaboration, Abby O'Neill, Abdul Rehman, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, Abraham Lee, Acorn Pooley, Agrim Gupta, Ajay Mandlekar, Ajinkya Jain, et al. Open X-Embodiment: Robotic learning datasets and RT-X models. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2024.
[12] Figure. Helix: A vision-language-action model for generalist humanoid control, 2025.
[13] Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning (ICML), pages 1587–1596, 2018.
[14] Zitian Gao, Boye Niu, Xuzheng He, Haotian Xu, Hongzhang Liu, Aiwei Liu, Xuming Hu, and Lijie Wen. Interpretable contrastive Monte Carlo tree search reasoning, 2024.
[15] Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Miguel Angel Bautista, and Josh Susskind. f-DM: A multi-stage diffusion model via progressive signal transformation. arXiv preprint arXiv:2210.04955, 2022.
[16] Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-critic algorithms and applications. Technical report, 2018.
[17] ByungOk Han, Jaehong Kim, and Jinhyeok Jang. A dual process VLA: Efficient robotic manipulation leveraging VLM. In Conference on Robot Learning (CoRL), 2024.
[18] Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model, 2023.
[19] Jonathan Ho, Chitwan Saharia, William Chan, David J. Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. arXiv preprint arXiv:2106.15282, 2021.
[20] Junhwa Hur, Charles Herrmann, Saurabh Saxena, Janne Kontkanen, Wei-Sheng Lai, Yichang Shih, Michael Rubinstein, David J. Fleet, and Deqing Sun. High-resolution frame interpolation with patch-based cascaded diffusion. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 3868–3876, 2025.
[21] Daniel Kahneman. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
[22] Siddharth Karamcheti, Suraj Nair, Ashwin Balakrishna, Percy Liang, Thomas Kollar, and Dorsa Sadigh. Prismatic VLMs: Investigating the design space of visually-conditioned language models. In Proceedings of the International Conference on Machine Learning (ICML), 2024.
[23] Moo Jin Kim, Chelsea Finn, and Percy Liang. Fine-tuning vision-language-action models: Optimizing speed and success. arXiv preprint arXiv:2502.19645 , 2025. [24] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, et al. Openvla: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246 , 2024. [25] Aviral Kumar, Anikait Singh, Frederik Ebert, Mitsuhiko Nakamoto, Yanlai Yang, Chelsea Finn, and Sergey Levine. Pre-training for robots: Offline rl enables learning new tasks from a handful of trials. In Proceedings of Robotics: Science and Systems , Daegu, Republic of Korea, 2023. [26] Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems, 2020. [27] Guangyuan Li, Yongkang Wang, Junsheng Luan, Lei Zhao, Wei Xing, Huaizhong Lin, and Binkai Ou. Cascaded diffusion models for virtual try-on: Improving control and resolution. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 39, pages 4689–4697, 2025. [28] Qixiu Li, Yaobo Liang, Zeyu Wang, Lin Luo, Xi Chen, Mozheng Liao, Fangyun Wei, Yu Deng, Sicheng Xu, Yizhong Zhang, et al. Cogact: A foundational vision-language-action model for synergizing cognition and action in robotic manipulation. arXiv preprint arXiv:2411.19650 , 2024. [29] Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Hanbo Zhang, and Huaping Liu. Towards generalist robot policies: What matters in building vision-language-
https://arxiv.org/abs/2505.21432v1
action models. arXiv preprint arXiv:2412.14058 , 2024. [30] Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Hanbo Zhang, and Huaping Liu. Towards generalist robot policies: What matters in building vision-language- action models. arXiv preprint arXiv:2412.14058 , 2024. [31] Xinghang Li, Minghuan Liu, Hanbo Zhang, Cunjun Yu, Jie Xu, Hongtao Wu, Chilam Cheang, Ya Jing, Weinan Zhang, Huaping Liu, et al. Vision-language foundation models as effective robot imitators. In Proceedings of International Conference on Learning Representations (ICLR) , 2024. [32] Xuanlin Li, Kyle Hsu, Jiayuan Gu, Karl Pertsch, Oier Mees, Homer Rich Walke, Chuyuan Fu, Ishikaa Lunawat, Isabel Sieh, Sean Kirmani, Sergey Levine, Jiajun Wu, Chelsea Finn, Hao Su, Quan Vuong, and Ted Xiao. Evaluating real-world robot manipulation policies in simulation. In Proceedings of the Conference on Robot Learning (CoRL) , 2024. [33] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 , 2015. [34] Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qiang Liu, Yuke Zhu, and Peter Stone. Libero: Benchmarking knowledge transfer for lifelong robot learning. arXiv preprint arXiv:2306.03310 , 2023. [35] Tongxuan Liu, Xingyu Wang, Weizhe Huang, Wenjiang Xu, Yuting Zeng, Lei Jiang, Hailong Yang, and Jing Li. Groupdebate: Enhancing the efficiency of multi-agent debate using group discussion, 2024. [36] Haotian Luo, Li Shen, Haiying He, Yibo Wang, Shiwei Liu, Wei Li, Naiqiang Tan, Xiaochun Cao, and Dacheng Tao. O1-pruner: Length-harmonizing fine-tuning for o1-like reasoning pruning, 2025. [37] Ning Miao, Yee Whye Teh, and Tom Rainforth. Selfcheck: Using llms to zero-shot check their own step-by-step reasoning. arXiv preprint arXiv:2308.00436 , 2023. [38] Jananee Muralidharan and Tiju Thomas. 
Deliberate Problem-solving with a Large Language Model as a Brainstorm Aid Using a Checklist for Prompt Generation. The Journal of the Association of Physicians of India , 72(5):89–90, 2024. 11 [39] Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, and Sergey Levine. Cal-ql: Calibrated offline rl pre-training for efficient online fine-tuning. 2023. [40] Octo Model Team, Dibya Ghosh, Homer Walke, Karl Pertsch, Kevin Black, Oier Mees, Sudeep Dasari, Joey Hejna, Charles Xu, Jianlan Luo, Tobias Kreiman, You Liang Tan, Lawrence Yunliang Chen, Pannag Sanketi, Quan Vuong, Ted Xiao, Dorsa Sadigh, Chelsea Finn, and Sergey Levine. Octo: An open-source generalist robot policy. In Proceedings of Robotics: Science and Systems (RSS) , 2024. [41] Karl Pertsch, Kyle Stachowicz, Brian Ichter, Danny Driess, Suraj Nair, Quan Vuong, Oier Mees, Chelsea Finn, and Sergey Levine. Fast: Efficient action tokenization for vision-language-action models. arXiv preprint arXiv:2501.09747 , 2025. [42] Delin Qu, Haoming Song, Qizhi Chen, Yuanqi Yao, Xinyi Ye, Yan Ding, Zhigang Wang, JiaYuan Gu, Bin Zhao, Dong Wang, and Xuelong Li. Spatialvla: Exploring spatial representations for visual-language-action model, 2025. [43] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y . K. Li, Y . Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models, 2024. [44] Lucy Xiaoyang Shi, Brian
https://arxiv.org/abs/2505.21432v1
Ichter, Michael Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, Adrian Li-Bell, Danny Driess, Lachy Groom, Sergey Levine, and Chelsea Finn. Hi robot: Open-ended instruction following with hierarchical vision-language-action models, 2025. [45] Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023. [46] Xiaoshuai Song, Yanan Wu, Weixun Wang, Jiaheng Liu, Wenbo Su, and Bo Zheng. Progco: Program helps self-correction of large language models, 2025. [47] Gemini Robotics Team, Saminda Abeyruwan, Joshua Ainslie, Jean-Baptiste Alayrac, Montserrat Gonzalez Arenas, Travis Armstrong, Ashwin Balakrishna, Robert Baruch, Maria Bauza, Michiel Blokzijl, Steven Bohez, Konstantinos Bousmalis, Anthony Brohan, Thomas Buschmann, Arunkumar Byravan, Serkan Cabi, Ken Caluwaerts, Federico Casarini, Oscar Chang, Jose Enrique Chen, Xi Chen, Hao-Tien Lewis Chiang, Krzysztof Choromanski, David D’Ambrosio, Sudeep Dasari, Todor Davchev, Coline Devin, Norman Di Palo, Tianli Ding, Adil Dostmohamed, Danny Driess, Yilun Du, Debidatta Dwibedi, Michael Elabd, Claudio Fantacci, Cody Fong, Erik Frey, Chuyuan Fu, Marissa Giustina, Keerthana Gopalakrishnan, Laura Graesser, Leonard Hasenclever, Nicolas Heess, Brandon Hernaez, Alexander Herzog, R. Alex Hofer, Jan Humplik, Atil Iscen, Mithun George Jacob, Deepali Jain, Ryan Julian, Dmitry Kalashnikov, M. Emre Karagozler, Stefani Karp, Chase Kew, Jerad Kirkland, Sean Kirmani, Yuheng Kuang, Thomas Lampe, Antoine Laurens, Isabel Leal, Alex X. 
Lee, Tsang-Wei Edward Lee, Jacky Liang, Yixin Lin, Sharath Maddineni, Anirudha Majumdar, Assaf Hurwitz Michaely, Robert Moreno, Michael Neunert, Francesco Nori, Carolina Parada, Emilio Parisotto, Peter Pastor, Acorn Pooley, Kanishka Rao, Krista Reymann, Dorsa Sadigh, Stefano Saliceti, Pannag Sanketi, Pierre Sermanet, Dhruv Shah, Mohit Sharma, Kathryn Shea, Charles Shu, Vikas Sindhwani, Sumeet Singh, Radu Soricut, Jost Tobias Springenberg, Rachel Sterneck, Razvan Surdulescu, Jie Tan, Jonathan Tompson, Vincent Vanhoucke, Jake Varley, Grace Vesom, Giulia Vezzani, Oriol Vinyals, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Fei Xia, Ted Xiao, Annie Xie, Jinyu Xie, Peng Xu, Sichun Xu, Ying Xu, Zhuo Xu, Yuxiang Yang, Rui Yao, Sergey Yaroshenko, Wenhao Yu, Wentao Yuan, Jingwei Zhang, Tingnan Zhang, Allan Zhou, and Yuxiang Zhou. Gemini robotics: Bringing ai into the physical world, 2025. [48] Lirui Wang, Xinlei Chen, Jialiang Zhao, and Kaiming He. Scaling proprioceptive-visual learning with het- erogeneous pre-trained transformers. In Proceedings of the Conference on Neural Information Processing System (NeurIPS) , 2024. [49] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems , volume 35, pages 24824–24837, 2022. [50] Junjie Wen, Yichen Zhu, Jinming Li, Zhibin Tang, Chaomin Shen, and Feifei Feng. Dexvla: Vision- language model with plug-in diffusion expert for general robot control. arXiv preprint arXiv:2502.05855 , 2025. [51] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023. 
12 [52] Yuanqi Yao, Siao Liu, Haoming Song, Delin Qu, Qizhi Chen, Yan Ding, Bin Zhao, Zhigang Wang, Xuelong Li, and Dong Wang. Think small, act big: Primitive
https://arxiv.org/abs/2505.21432v1
prompt learning for lifelong robot manipulation, 2025. [53] Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, and Sergey Levine. Robotic control via embodied chain-of-thought reasoning. arXiv preprint arXiv:2407.08693 , 2024. [54] Jianke Zhang, Yanjiang Guo, Xiaoyu Chen, Yen-Jen Wang, Yucheng Hu, Chengming Shi, and Jianyu Chen. Hirt: Enhancing robotic control with hierarchical robot transformers. arXiv preprint arXiv:2410.05273 , 2024. [55] Jinliang Zheng, Jianxiong Li, Dongxiu Liu, Yinan Zheng, Zhihao Wang, Zhonghong Ou, Yu Liu, Jingjing Liu, Ya-Qin Zhang, and Xianyuan Zhan. Universal actions for enhanced embodied foundation models. arXiv preprint arXiv:2501.10105 , 2025. [56] Ruijie Zheng, Yongyuan Liang, Shuaiyi Huang, Jianfeng Gao, Hal Daumé III, Andrey Kolobov, Furong Huang, and Jianwei Yang. Tracevla: Visual trace prompting enhances spatial-temporal awareness for generalist robotic policies. arXiv preprint arXiv:2412.10345 , 2024. 13
https://arxiv.org/abs/2505.21432v1
arXiv:2505.21441v1 [stat.ML] 27 May 2025

Autoencoding Random Forests

Binh Duc Vu* (King's College London, binh.vu@kcl.ac.uk), Jan Kapar* (University of Bremen, kapar@leibniz-bips.de), Marvin Wright (University of Bremen, wright@leibniz-bips.de), David S. Watson (King's College London, david.watson@kcl.ac.uk)

Abstract

We propose a principled method for autoencoding with random forests. Our strategy builds on foundational results from nonparametric statistics and spectral graph theory to learn a low-dimensional embedding of the model that optimally represents relationships in the data. We provide exact and approximate solutions to the decoding problem via constrained optimization, split relabeling, and nearest neighbors regression. These methods effectively invert the compression pipeline, establishing a map from the embedding space back to the input space using splits learned by the ensemble's constituent trees. The resulting decoders are universally consistent under common regularity assumptions. The procedure works with supervised or unsupervised models, providing a window into conditional or joint distributions. We demonstrate various applications of this autoencoder, including powerful new tools for visualization, compression, clustering, and denoising. Experiments illustrate the ease and utility of our method in a wide range of settings, including tabular, image, and genomic data.

1 Introduction

Engineering compact, informative representations is central to many learning tasks [53, 91, 8, 39, 76]. In supervised applications, it can simplify regression or classification objectives, helping users better understand the internal operations of large, complicated models [37, 98]. In reinforcement learning, embeddings help agents navigate complex environments, imposing useful structure on a potentially high-dimensional state space [2, 58].
In unsupervised settings, latent projections can be used for data compression [64], visualization [88], clustering [92], and generative modeling [87]. The current state of the art in representation learning is dominated by deep neural networks (DNNs). Indeed, the tendency of these algorithms to learn rich embeddings is widely cited as a key component of their success [39], with some even arguing that large language models are essentially compression engines [25]. It is less obvious how to infer latent factors from tree-based ensembles such as random forests (RFs) [14], a popular and flexible function class widely used in areas like bioinformatics [19] and econometrics [3]. DNNs are known to struggle in tabular settings with mixed continuous and categorical covariates, where tree-based ensembles typically match or surpass their performance [81, 43]. Though several authors have proposed methods for computing nonlinear embeddings with RFs (see Sect. 2), these approaches tend to be heuristic in nature. Moreover, the task of decoding latent vectors to recover input data in these pipelines remains unresolved.

*Equal contribution. Preprint. Under review.

We propose a novel, principled method for autoencoding with RFs. Our primary contributions are: (1) We prove several important properties of the adaptive RF kernel, including that it is asymptotically universal. (2) These results motivate the use of diffusion maps to perform nonlinear dimensionality reduction and manifold learning with RFs. Resulting embeddings can be used for various downstream tasks. (3) We introduce and study multiple methods for decoding spectral embeddings back into the original input space, including exact and approximate solutions based on constrained optimization, split relabeling, and nearest neighbors regression. (4) We apply
these methods in a series of experiments and benchmark against a wide array of neural and tree-based alternatives. Our results demonstrate that the RF autoencoder is competitive with the state of the art across a range of tasks including data visualization, compression, clustering, and denoising.

The remainder of this paper is structured as follows. After a review of background material and related work (Sect. 2), we propose and study methods for encoding (Sect. 3) and decoding (Sect. 4) data with RFs. Performance is illustrated in a series of experiments (Sect. 5). Following a brief discussion (Sect. 6), we conclude with directions for future work (Sect. 7).

2 Background

Our starting point is the well-established connection between RFs and kernel methods [13, 24, 78]. The basic insight is that classification and regression trees (CART) [15], which serve as basis functions for many popular ensemble methods, are a kind of adaptive nearest neighbors algorithm [61]. At the root of the tree, all samples are connected. Each split severs the link between one subset of the data and another (i.e., samples routed to left vs. right child nodes), resulting in a gradually sparser graph as depth increases. At completion, a sample's "neighbors" are just those datapoints that are routed to the same leaf.

Given some feature space $\mathcal{X} \subset \mathbb{R}^{d_X}$, the implicit kernel of tree $b$, $k^{(b)}: \mathcal{X} \times \mathcal{X} \mapsto \{0,1\}$, is an indicator function that evaluates to 1 for all and only neighboring sample pairs.² This base kernel can be used to define different ensemble kernels. For instance, taking an average over $B > 1$ trees, we get a kernel with a simple interpretation as the proportion of trees in which two samples colocate: $k^S(\mathbf{x},\mathbf{x}') = B^{-1}\sum_{b=1}^B k^{(b)}(\mathbf{x},\mathbf{x}')$. We call this the Scornet kernel after one of its noted proponents [78], who showed that $k^S$ is provably close in expectation to the Breiman RF kernel $k^{RF}_n$:

$$k^{RF}_n(\mathbf{x},\mathbf{x}') = \frac{1}{B}\sum_{b=1}^B \frac{k^{(b)}(\mathbf{x},\mathbf{x}')}{\sum_{i=1}^n k^{(b)}(\mathbf{x},\mathbf{x}_i)}, \qquad (1)$$

where $i \in [n] := \{1, \dots
, n\}$ indexes the training samples. This kernel represents the average of normalized tree kernels, and fully encodes the information learned by the RF $f_n$ via the identity:

$$f_n(\mathbf{x}) = \sum_{i=1}^n k^{RF}_n(\mathbf{x},\mathbf{x}_i)\, y_i, \qquad (2)$$

which holds uniformly for all $\mathbf{x} \in \mathcal{X}$. Though $k^S$ is sometimes referred to as "the random forest kernel" [24, 70], this nomenclature is misleading—only $k^{RF}_n$ satisfies Eq. 2.

Several nonlinear dimensionality reduction techniques are based on kernels, most notably kernel principal component analysis (KPCA) [75]. We focus in particular on diffusion maps [21, 22], which can be interpreted as a form of KPCA [47]. Bengio et al. [7] establish deep links between these algorithms and several related projection methods, demonstrating how to embed test data in all settings via the Nyström formula, a strategy we adopt below. Inverting any KPCA algorithm to map latent vectors to the input space is a nontrivial task that must be tailored to each specific kernel. For an example with Gaussian kernels, see [67].

Previous authors have explored feature engineering with RFs. Shi and Horvath [80] perform multi-dimensional scaling on a dissimilarity matrix extracted from supervised and unsupervised
forests. However, they do not explore the connections between this approach and kernel methods, nor do they propose any strategy for decoding latent representations. Other more heuristic approaches involve running PCA on a weighted matrix of all forest nodes (not just leaves), a method that works well in some experiments but comes with no formal guarantees [71].

² This ignores certain subtleties that arise when trees are grown on bootstrap samples, in which case $k^{(b)}$ may occasionally evaluate to larger integers. For present purposes, we presume that trees are grown on data subsamples; see Appx. A.

Figure 1: Visual summary of the encoding pipeline. (a) Input data can be a mix of continuous, ordinal, and/or categorical variables. (b) A RF (supervised or unsupervised) is trained on the data. (c) A kernel matrix $\mathbf{K} \in [0,1]^{n\times n}$ is extracted from the ensemble. (d) $\mathbf{K}$ is decomposed into its eigenvectors and eigenvalues, as originally proposed by David Hilbert (pictured). (e) Data is projected onto the top $d_Z < n$ principal components of the diffusion map, resulting in a new embedding $\mathbf{Z} \in \mathbb{R}^{n \times d_Z}$.
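As a concrete check of the kernel definitions above, the following sketch computes the Breiman RF kernel of Eq. 1 from a fitted scikit-learn forest and verifies the identity of Eq. 2 empirically. It assumes trees grown without bootstrapping (`bootstrap=False`), the setting in which each tree's prediction for a training point is exactly the mean outcome of its leaf; the synthetic data and all parameter choices are illustrative, not from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 0.1 * rng.normal(size=200)

# bootstrap=False: every training point reaches every tree, so each tree's
# prediction is exactly the mean outcome of the leaf a sample falls in.
rf = RandomForestRegressor(n_estimators=50, bootstrap=False, random_state=0).fit(X, y)
leaves = rf.apply(X)  # (n, B): leaf index of each training sample in each tree

def rf_kernel(leaves_query, leaves_train):
    """Breiman RF kernel (Eq. 1): per-tree co-location indicators, each
    normalized by the number of training samples in the query point's leaf."""
    B = leaves_train.shape[1]
    K = np.zeros((leaves_query.shape[0], leaves_train.shape[0]))
    for b in range(B):
        co = (leaves_query[:, b:b + 1] == leaves_train[:, b]).astype(float)
        K += co / co.sum(axis=1, keepdims=True)
    return K / B

K = rf_kernel(leaves, leaves)
# Thm. 3.4(a): K is doubly stochastic on the training data.
assert np.allclose(K.sum(axis=0), 1.0) and np.allclose(K.sum(axis=1), 1.0)
# Eq. 2: f_n(x) = sum_i k_n^RF(x, x_i) y_i reproduces the forest's predictions.
assert np.allclose(K @ y, rf.predict(X))
```

Note that the same `rf_kernel` call with `rf.apply(X_test)` in the first argument yields the test-to-train matrix $\mathbf{K}_0$ used later for Nyström projection.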
Several generative algorithms based on RFs have been proposed, building on the probabilistic circuit literature [23, 94]. These do not involve any explicit encoding step, although the models they train could be passed through our pipeline to extract implicit embeddings (see Sect. 5). Existing methods for sum-product network encoding could in principle be applied to an RF following compilation into a corresponding circuit [90]. However, these can often increase rather than decrease dimensionality, and do not come with any associated decoding procedure.

Perhaps the most similar method to ours, in motivation if not in execution, is Feng and Zhou [30]'s encoder forest (eForest). This algorithm maps each point to a hyperrectangle defined by the intersection of all leaves to which the sample is routed. Decoding is then achieved by taking some representative value for each feature in the subregion (e.g., the median). Notably, this approach does not include any dimensionality reduction. On the contrary, the embedding space requires minima and maxima for all input variables, resulting in a representation with double the number of features as the inputs. Working from the conjecture that optimal prediction is equivalent to optimal compression [73, 53, 44], we aim to represent the information learned by the RF in relatively few dimensions.

3 Encoding

As a preliminary motivation, we prove several important properties of the RF kernel. The following definitions are standard in the literature. Let $\mathcal{X}$ be a compact metric space, and let $C(\mathcal{X})$ be the set of all real-valued continuous functions on $\mathcal{X}$.

Definition 3.1 (Positive semidefinite). A symmetric function $k: \mathcal{X} \times \mathcal{X} \mapsto \mathbb{R}$ is positive semidefinite (PSD) if, for all $\mathbf{x}_1, \dots, \mathbf{x}_n \in \mathcal{X}$, $n \in \mathbb{N}$, and $c_i, c_j \in \mathbb{R}$, we have: $\sum_{i,j}^n c_i c_j k(\mathbf{x}_i, \mathbf{x}_j) \ge 0$.

The Moore-Aronszajn theorem [1] states that PSD kernels admit a unique reproducing kernel Hilbert space (RKHS) [9], providing a rich mathematical language for analyzing their behavior.
Definition 3.2 (Universal). We say that the RKHS $\mathcal{H}$ is universal if the associated kernel $k$ is dense in $C(\mathcal{X})$ with respect to the uniform norm. That is, for any $f^* \in C(\mathcal{X})$ and $\epsilon > 0$, there
exists some $f \in \mathcal{H}$ such that $\|f^* - f\|_\infty < \epsilon$. Several variants of universality exist with slightly different conditions on $\mathcal{X}$ [82]. Examples of universal kernels include the Gaussian and Laplace kernels [83].

Definition 3.3 (Characteristic). The bounded measurable kernel $k$ is characteristic if the function $\mu \mapsto \int_{\mathcal{X}} k(\cdot, \mathbf{x})\, d\mu(\mathbf{x})$ is injective, where $\mu$ is a Borel probability measure on $\mathcal{X}$.

Characteristic kernels are especially useful for statistical testing, and have inspired flexible nonparametric methods for evaluating marginal and conditional independence [41, 34]. For instance, Gretton et al. [42] show that, when using a characteristic kernel $k$, the maximum mean discrepancy (MMD) between two measures $\mu, \nu$ on $\mathcal{X}$ is zero iff $\mu = \nu$. The MMD is defined as:

$$\mathrm{MMD}^2(\mu, \nu; k) := \mathbb{E}_{\mathbf{x},\mathbf{x}'\sim\mu}[k(\mathbf{x},\mathbf{x}')] - 2\,\mathbb{E}_{\mathbf{x}\sim\mu,\,\mathbf{y}\sim\nu}[k(\mathbf{x},\mathbf{y})] + \mathbb{E}_{\mathbf{y},\mathbf{y}'\sim\nu}[k(\mathbf{y},\mathbf{y}')].$$

With these definitions in place, we state our first result (all proofs in Appx. A).

Theorem 3.4 (RF kernel properties). Assume standard RF regularity conditions (see Appx. A). Then:
(a) For all $n \in \mathbb{N}$, the function $k^{RF}_n$ is PSD and the kernel matrix $\mathbf{K} \in [0,1]^{n\times n}$ is doubly stochastic.
(b) Let $\{f_n\}$ be a sequence of RFs. Then the associated RKHS sequence $\{\mathcal{H}_n\}$ is asymptotically universal. That is, for any $f^* \in C(\mathcal{X})$ and $\epsilon > 0$, we have: $\lim_{n\to\infty} P\big(\|f^* - f_n\|_\infty \ge \epsilon\big) = 0$.
(c) The RKHS sequence $\{\mathcal{H}_n\}$ is asymptotically characteristic. That is, for any $\epsilon > 0$, the Borel measures $\mu, \nu$ are equal if and only if: $\lim_{n\to\infty} P\big(\mathrm{MMD}(\mu, \nu; k^{RF}_n) \ge \epsilon\big) = 0$.

The literature on kernel methods is largely focused on fixed kernels such as the radial basis function (RBF). Among adaptive partitioning alternatives, the Scornet kernel has been studied in some detail [24, 78, 70], as has the Mondrian kernel [59, 4, 17]. However, to the best of our knowledge, we are the first to establish these results for the RF kernel. Thm. 3.4 confirms that $k^{RF}_n$ is flexible, informative, and generally "well-behaved" in ways that will prove helpful for autoencoding.
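The plug-in estimator of the MMD defined above takes only a few lines. The sketch below uses a Gaussian kernel as a stand-in for any characteristic kernel (in the paper's setting, $k^{RF}_n$ would be substituted via its own evaluation routine); the sample sizes and bandwidth are arbitrary choices for illustration.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian kernel, a standard example of a characteristic kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, kernel):
    """Biased plug-in estimator of MMD^2(mu, nu; k) from samples X ~ mu, Y ~ nu:
    E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')], each expectation replaced by a mean."""
    return kernel(X, X).mean() - 2 * kernel(X, Y).mean() + kernel(Y, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # sample from mu = N(0, I)
Y_same = rng.normal(size=(100, 2))          # second sample from mu
Y_shift = rng.normal(size=(100, 2)) + 2.0   # sample from nu = N(2, I)

assert np.isclose(mmd2(X, X, gaussian_kernel), 0.0)  # identical samples: exactly zero
assert mmd2(X, Y_shift, gaussian_kernel) > mmd2(X, Y_same, gaussian_kernel)
```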
Spectral Graph Theory. A key insight from spectral graph theory is that for any PSD kernel $k$, there exists an encoding $g: \mathcal{X} \mapsto \mathcal{Z}$ for any embedding dimension $d_Z < n$ that optimally represents the data, in a sense to be made precise below. This motivates our use of diffusion maps [22, 21], which are closely related to Laplacian eigenmaps [5, 6], an essential preprocessing step in popular spectral clustering algorithms [69, 92, 54]. These methods are typically used with fixed kernels such as the RBF; by contrast, we use the adaptive RF kernel, which is better suited to mixed tabular data.

The procedure begins with a dataset of paired feature vectors $\mathbf{x} \in \mathcal{X} \subset \mathbb{R}^{d_X}$ and outcomes $y \in \mathcal{Y} \subset \mathbb{R}$ sampled from the joint distribution $P_{XY}$.³ A RF of $B$ trees $f_n$ is trained on $\{\mathbf{x}_i, y_i\}_{i=1}^n$. Using Eq. 1, we construct the kernel matrix $\mathbf{K} \in [0,1]^{n\times n}$ with entries $k_{ij} = k^{RF}_n(\mathbf{x}_i, \mathbf{x}_j)$. This defines a weighted, undirected graph $G_n$ over the training data. As $\mathbf{K}$ is doubly stochastic, it can be interpreted as encoding the transitions of a Markov process. Spectral analysis produces the decomposition $\mathbf{K}\mathbf{V} = \mathbf{V}\boldsymbol{\Lambda}$, where $\mathbf{V} \in \mathbb{R}^{n\times n}$ denotes the eigenvector matrix with corresponding eigenvalues $\boldsymbol{\lambda} \in [0,1]^n$, and $\boldsymbol{\Lambda} = \mathrm{diag}(\boldsymbol{\lambda})$. Indexing from zero, it can be shown that $\mathbf{V}_0$ is constant, with $1 = \lambda_0 \ge \lambda_1 \ge \cdots \ge \lambda_{n-1}$. Following standard convention, we drop this uninformative dimension and take the leading eigenvectors from $\mathbf{V}_1$.
The elements of this decomposition have several notable properties.⁴ For instance, the resulting eigenvectors uniquely solve the constrained optimization problem:

$$\min_{\mathbf{V}\in\mathbb{R}^{n\times d_Z}} \sum_{i,j} k_{ij}\,\|\mathbf{v}_i - \mathbf{v}_j\|^2 \quad \text{s.t.} \quad \mathbf{V}^\top\mathbf{V} = \mathbf{I},$$

for all $d_Z \in [n]$, thereby minimizing Dirichlet energy and producing the smoothest possible representation of the data that preserves local relationships in the graph. These eigenvectors also simplify a number of otherwise intractable graph partition problems, providing smooth approximations that motivate spectral clustering approaches [79, 92]. If we think of $G_n$ as a random sample from a Riemannian manifold $\mathcal{M}$, then scaling each $\mathbf{V}_j$ by $\sqrt{n}\,\lambda_j^t$ produces an approximation of the $j$th eigenfunction of the Laplace-Beltrami operator at time $t$, which describes how heat (or other quantities) diffuse across $\mathcal{M}$ [5, 7]. Euclidean distance in the resulting space matches diffusion distances across $G_n$, providing a probabilistically meaningful embedding geometry [22].

The diffusion map $\mathbf{Z} = \sqrt{n}\,\mathbf{V}\boldsymbol{\Lambda}^t$ represents the long-run connectivity structure of the graph after $t$ time steps of a Markov process. Test data can be projected into spectral space via the Nyström formula [46], i.e. $\mathbf{Z}_0 = \mathbf{K}_0\mathbf{Z}\boldsymbol{\Lambda}^{-1}$ for some $\mathbf{K}_0 \in [0,1]^{m\times n}$, where rows index test points and columns index training points. For more details on diffusion and spectral graph theory, see [20, 68, 92].

³ Even in the unsupervised case, we typically train the ensemble with a regression or classification objective. The trick is to construct some $Y$ that encourages the model to make splits that are informative w.r.t. $P_X$ (e.g., [80, 94]). For fully random partitions, $Y$ can be any variable that is independent of the features (e.g., [36, 35]).
⁴ Observe that the eigenvectors of $\mathbf{K}$ are identical to those of the graph Laplacian $\mathbf{L} = \mathbf{I} - \mathbf{K}$, which has $j$th eigenvalue $\gamma_j = 1 - \lambda_j$. For more on the links between diffusion and Laplacian eigenmaps, see [57, 47].
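The encoding step just described can be sketched in a few lines of NumPy, assuming a doubly stochastic kernel matrix $\mathbf{K}$ has already been extracted from the forest: eigendecompose, drop the constant leading eigenvector, scale by $\sqrt{n}\,\lambda^t$, and project new kernel rows with the Nyström formula. The toy 2x2 kernel below is purely illustrative.

```python
import numpy as np

def diffusion_map(K, d_z=2, t=1):
    """Z = sqrt(n) * V * Lambda^t, dropping the constant eigenvector V_0.
    Returns the n x d_z embedding and the eigenvalues used."""
    n = K.shape[0]
    lam, V = np.linalg.eigh(K)       # ascending eigenvalues for symmetric K
    lam, V = lam[::-1], V[:, ::-1]   # descending order: lambda_0 = 1 first
    lam_z, V_z = lam[1:d_z + 1], V[:, 1:d_z + 1]
    return np.sqrt(n) * V_z * lam_z ** t, lam_z

def nystrom(K0, Z, lam_z):
    """Nystrom projection of test kernel rows K0 (m x n): Z0 = K0 Z Lambda^{-1}."""
    return K0 @ Z / lam_z

# Toy doubly stochastic PSD kernel with eigenvalues 1.0 and 0.4.
K = np.array([[0.7, 0.3],
              [0.3, 0.7]])
Z, lam_z = diffusion_map(K, d_z=1)
assert np.isclose(abs(Z[0, 0]), 0.4)            # sqrt(2) * (1/sqrt(2)) * 0.4
assert np.allclose(nystrom(K, Z, lam_z), Z)     # training rows map to themselves
```

In practice `K` would be the matrix of Eq. 1 evaluated on the training set, and `K0` the test-to-train kernel rows.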
4 Decoding

Our next task is to solve the inverse problem of decoding vectors from the spectral embedding space back into the original feature space—i.e., learning the function $h: \mathcal{Z} \mapsto \mathcal{X}$ such that $\mathbf{x} \approx h(g(\mathbf{x}))$. We propose several solutions, including methods based on constrained optimization, split relabeling, and $k$-nearest neighbors. To study the properties of these different methods, we introduce the notion of a universally consistent decoder.

Definition 4.1 (Universally consistent decoder). Let $g^*: \mathcal{X} \mapsto \mathcal{Z}$ be a lossless encoder. Then we say that the sequence of decoders $\{h_n: \mathcal{Z} \mapsto \mathcal{X}\}$ is universally consistent if, for all distributions $P_X$, any $\mathbf{x} \sim P_X$, and all $\epsilon > 0$, we have: $\lim_{n\to\infty} P\big(\|\mathbf{x} - h_n(g^*(\mathbf{x}))\|_\infty \ge \epsilon\big) = 0$.

The first two methods—constrained optimization and split relabeling—are designed to infer likely leaf assignments for latent vectors $\mathbf{z}$. If these are correctly determined, then the intersection of assigned leaves defines a bounding box that contains the corresponding input vector $\mathbf{x}$. Our estimate $\hat{\mathbf{x}}$ is then sampled uniformly from this subspace, which is generally small for sufficiently large and/or deep forests. As a motivation for this approach, we show that a leaf assignment oracle would constitute a universally consistent decoder.

Let $d^{(b)}_\Phi$ be the number of leaves in tree $b$, and $d_\Phi = \sum_{b=1}^B d^{(b)}_\Phi$ the number of leaves in the forest $f$. The function $\pi_f: \mathcal{X} \mapsto \{0,1\}^{d_\Phi}$ maps each sample $\mathbf{x}$ to its corresponding leaves in $f$. It is composed by concatenating the outputs of $B$ unique functions $\pi^{(b)}_f: \mathcal{X} \mapsto \{0,1\}^{d^{(b)}_\Phi}$, each
satisfying $\|\pi^{(b)}_f(\mathbf{x})\|_1 = 1$ for all $\mathbf{x} \in \mathcal{X}$. Let $\psi_f: \mathcal{Z} \mapsto \{0,1\}^{d_\Phi}$ be a similar leaf assignment function, but for latent vectors. Then for a fixed forest $f$ and encoder $g$, the leaf assignment oracle $\psi^*_{f,g}$ satisfies $\pi_f(\mathbf{x}) = \psi^*_{f,g}(g(\mathbf{x}))$, for all $\mathbf{x} \in \mathcal{X}$.

Theorem 4.2 (Oracle consistency). Let $f_n$ be a RF trained on $\{\mathbf{x}_i, y_i\}_{i=1}^n \overset{i.i.d.}{\sim} P_{XY}$. Let $h^*_n: \mathcal{Z} \mapsto \mathcal{X}$ be a decoder that (i) maps latent vectors to leaves in $f_n$ via the oracle $\psi^*_{f_n,g}$; then (ii) reconstructs data by sampling uniformly from the intersection of assigned leaves for each sample. Then, under the assumptions of Thm. 3.4, the sequence $\{h^*_n\}$ is universally consistent.

4.1 Constrained Optimization

Our basic strategy for this family of decoders is to estimate a kernel matrix from a set of embeddings, then use this matrix to infer leaf assignments. Let $\mathbf{s} \in \{1/[n-1]\}^{d_\Phi}$ be a vector of inverse leaf sample sizes, composed of tree-wise vectors $\mathbf{s}^{(b)}$ with entries $s^{(b)}_i = 1/\sum_{j=1}^n \pi^{(b)}_i(\mathbf{x}_j)$. Then the canonical feature map for the RF kernel can be written $\phi(\mathbf{x}) = \big(\phi^{(1)}(\mathbf{x}), \dots, \phi^{(B)}(\mathbf{x})\big)$, with tree-wise feature maps $\phi^{(b)}(\mathbf{x}) = \pi^{(b)}(\mathbf{x}) \odot \sqrt{\mathbf{s}^{(b)}}$, where $\odot$ denotes the Hadamard (element-wise) product. Now RF kernel evaluations can be calculated via the scaled inner product $k^{RF}_n(\mathbf{x}, \mathbf{x}') = B^{-1}\langle \phi(\mathbf{x}), \phi(\mathbf{x}')\rangle$, which is equivalent to Eq. 1.

Say we have $n$ training samples used to fit the forest $f_n$, and $m$ latent vectors to decode from an embedding space of dimension $d_Z < n$. We will refer to these samples as a test set, since they may not correspond to any training samples. We are provided a matrix of embeddings $\mathbf{Z}_0 \in \mathbb{R}^{m\times d_Z}$, from which we estimate the corresponding kernel matrix $\hat{\mathbf{K}}_0 = \mathbf{Z}_0\boldsymbol{\Lambda}\mathbf{Z}^\dagger$, where $\mathbf{Z}^\dagger$ denotes the Moore-Penrose pseudo-inverse of $\mathbf{Z}$. Now we must identify the most likely leaf assignments for each of our $m$ (unseen) test samples $\mathbf{X}_0 \in \mathbb{R}^{m\times d_X}$, given their latent representation $\mathbf{Z}_0$. Call this target matrix $\boldsymbol{\Psi} \in \{0,1\}^{m\times d_\Phi}$. To estimate it, we start with the binary matrix of leaf assignments for training samples $\boldsymbol{\Pi} \in \{0,1\}^{n\times d_\Phi}$.
Exploiting the inner product definition of a kernel, observe that our original (training) adjacency matrix $\mathbf{K}$ satisfies $B\mathbf{K} = \boldsymbol{\Phi}\boldsymbol{\Phi}^\top = \boldsymbol{\Pi}\mathbf{S}\boldsymbol{\Pi}^\top$, where $\boldsymbol{\Phi} \in [0,1]^{n\times d_\Phi}$ is the RKHS representation of $\mathbf{X}$, and $\mathbf{S} = \mathrm{diag}(\mathbf{s})$. We partition $[d_\Phi]$ into $B$ subsets $L^{(b)}$ that index the leaves belonging to tree $b$. Recall that each tree $b$ partitions the feature space $\mathcal{X}$ into $L^{(b)}$ hyperrectangular subregions $\mathcal{X}^{(b)}_\ell \subset \mathcal{X}$, one for each leaf $\ell \in [L^{(b)}]$. Let $R^{(b)}_i \in \{\mathcal{X}^{(b)}_1, \dots, \mathcal{X}^{(b)}_{L^{(b)}}\}$ denote the region to which sample $i$ is routed in tree $b$. Then leaf assignments for test samples can be calculated by solving the following integer linear program (ILP):

$$\min_{\boldsymbol{\Psi}\in\{0,1\}^{m\times d_\Phi}} \|B\hat{\mathbf{K}}_0^\top - \boldsymbol{\Pi}\mathbf{S}\boldsymbol{\Psi}^\top\|_1 \quad \text{s.t.} \quad \forall i, b: \sum_{\ell\in L^{(b)}} \psi_{i\ell} = 1, \qquad \forall i: \bigcap_{b\in[B]} R^{(b)}_i \ne \emptyset. \qquad (3)$$

The objective is an entry-wise $L_1$ norm that effectively treats the resulting matrix as a stacked vector. The first constraint guarantees that test samples are one-hot encoded on a per-tree basis. The second constraint states that the intersection of all assigned hyperrectangles is nonempty. We call these the one-hot and overlap constraints, respectively. Together they ensure consistent leaf assignments both within and between trees. The ILP approach comes with the following guarantee.

Theorem 4.3 (Uniqueness). Assume we have a lossless encoder $g^*: \mathcal{X} \mapsto \mathcal{Z}$ such that the estimated
$\hat{\mathbf{K}}_0$ coincides with the ground truth $\mathbf{K}^*_0$. Then, under the assumptions of Thm. 3.4, as $n \to \infty$, with high probability, the ILP of Eq. 3 is uniquely solved by the true leaf assignments $\boldsymbol{\Psi}^*$.

Together with Thm. 4.2, Thm. 4.3 implies that the ILP approach will converge on an exact reconstruction of the data under ideal conditions. While this may be encouraging, there are two major obstacles in practice. First, this solution scales poorly with $m$ and $d_\Phi$. Despite the prevalence of highly optimized solvers, ILPs are NP-complete and therefore infeasible for large problems. Second, even with an oracle for solving this program, results may be misleading when using noisy estimates of $\hat{\mathbf{K}}_0$. Since this will almost always be the case for $d_Z \ll n$—an inequality that is almost certain to hold in most real-world settings—the guarantees of Thm. 4.3 will rarely apply. We describe a convex relaxation in Appx. C via the exclusive lasso [101, 16], an approach that is more tractable than the ILP but still turns each decoding instance into a nontrivial optimization problem.

4.2 Split Relabeling

Another strategy for computing leaf assignments front-loads the computational burden so that downstream execution requires just a single pass through a pretrained model, as with neural autoencoders. We do this by exploiting the tree structure itself, relabeling the splits in the forest so that they apply directly in embedding space.

The procedure works as follows. Recall that each split is a literal of the form $X_j \bowtie x$ for some $j \in [d_X]$, $x \in \mathcal{X}_j$, where $\bowtie\, \in \{<, =\}$ (the former for continuous, the latter for categorical data). At each node, we create a synthetic dataset $\tilde{\mathbf{X}}$ by drawing uniformly from the corresponding region. We embed these points with a diffusion map to create the corresponding matrix $\tilde{\mathbf{Z}}$. Samples are labeled with a binary outcome variable $Y$ indicating whether they are sent to left or right child nodes. Next, we search for the axis-aligned split in the embedding space that best predicts $Y$.
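The per-node search just described can be sketched with a depth-one CART stump. In the sketch below, `encode` is a placeholder for the diffusion-map embedding, and the linear "embedding" used in the demo is purely illustrative: it rescales coordinates so that the relabeled split is easy to verify.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def relabel_split(low, high, j, x_split, encode, n_synth=500, seed=0):
    """Relabel the input-space split X_j < x_split as an axis-aligned split
    Z_k < z in embedding space: draw a synthetic sample uniformly from the
    node's region [low, high], label points by their left/right routing,
    and fit a one-split CART stump on the embedded points."""
    rng = np.random.default_rng(seed)
    X_synth = rng.uniform(low, high, size=(n_synth, len(low)))
    route = (X_synth[:, j] < x_split).astype(int)  # binary outcome Y
    stump = DecisionTreeClassifier(max_depth=1, random_state=0)
    stump.fit(encode(X_synth), route)
    return stump.tree_.feature[0], stump.tree_.threshold[0]  # (k, z)

# Demo: under the toy embedding x -> (2*x_0, 0.5*x_1), the split X_0 < 0.3
# should relabel to Z_0 < 0.6.
encode = lambda X: X * np.array([2.0, 0.5])
k, z = relabel_split(low=[0.0, 0.0], high=[1.0, 1.0], j=0, x_split=0.3, encode=encode)
assert k == 0 and abs(z - 0.6) < 0.05
```

When no axis-aligned stump separates the routing labels well, the fitted stump's impurity signals exactly the failure mode discussed below, where a more expressive router (e.g., logistic regression) may be preferable.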
The ideal solution satisfies $Z_k < z \Leftrightarrow X_j \bowtie x$, for some $k \in [d_Z]$, $z \in \mathcal{Z}_k$. Perfect splits may not be possible, but optimal solutions can be estimated with CART. Once all splits have been relabeled, the result is a tree with the exact same structure as the original but a new semantics, effectively mapping a recursive partition of the input space onto a recursive partition of the embedding space. This strategy may fare poorly if no axis-aligned split in $\mathcal{Z}$ approximates the target split in $\mathcal{X}$. In principle, we could use any binary decision procedure to route samples at internal nodes. For example, using logistic regression, we could create a more complex partition of $\mathcal{Z}$ into convex polytopes. Of course, this increase in expressive flexibility comes at a cost in complexity. The use of synthetic data allows us to choose the effective sample size for learning splits, which is especially advantageous in deep trees, where few training points are available as depth increases.

4.3 Nearest Neighbors

Our final decoding strategy elides the leaf assignment step altogether in favor of directly estimating feature values via $k$-nearest neighbors ($k$-NN). First, we
find the most proximal points in the latent space. Once nearest neighbors have been identified, we reconstruct their associated inputs using the leaf assignment matrix $\Pi$ and the splits stored in our forest $f_n$. From these ingredients, we infer the intersection of all leaf regions for each training sample—what Feng and Zhou [30] call the “maximum compatible rule”—and generate a synthetic training set $\tilde{X}$ by sampling uniformly in these subregions. Observe that this procedure guarantees $g(\tilde{X}) = Z$ by construction.

[Figure 2: Diffusion maps visualize RF training. Using a subsample of the MNIST dataset, we find that digits become more distinct in the embedding space as tree depth increases. Panels plot KPC1 against KPC2 at tree depths 1, 2, 4, 8, and 16 for labels 3, 4, 8, and 9.]

Neighbors are weighted in inverse proportion to their diffusion distance from the target $z_0$, producing the weight function $w : \mathcal{Z} \mapsto \Delta^{k-1}$. Let $K \subset [n]$ denote the indices of the selected points and $\{\tilde{x}_i\}_{i \in K}$ the corresponding synthetic inputs. Then the $j$th entry of the decoded vector $\hat{x}_0$ is given by $\hat{x}_{0j} = \sum_{i \in K} w_i(z_0)\, \tilde{x}_{ij}$. For categorical features, we take the most likely label across all neighbors, weighted by $w$, with ties broken randomly. The $k$-NN decoder comes with similar asymptotic guarantees to the previous methods, without assuming access to a leaf assignment oracle.

Theorem 4.4 ($k$-NN consistency). Let $k \to \infty$ and $k/n \to 0$. Then under the assumptions of Thm. 3.4, the $k$-NN decoder is universally consistent.

This approach is arguably truer to the spirit of RFs, using local averaging to decode inputs instead of just predicting outputs. Like the split relabeling approach, this is a modular decoding solution that does not rely on any particular encoding scheme.
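The weighted-average decoding rule $\hat{x}_{0j} = \sum_{i \in K} w_i(z_0)\, \tilde{x}_{ij}$ can be sketched in a few lines of NumPy. This is an illustrative stand-in: the latent points and synthetic inputs below are random toy data rather than outputs of the actual forest pipeline, and Euclidean distance in $\mathcal{Z}$ stands in for diffusion distance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: n latent training points z_i with synthetic inputs x~_i
# (in the paper, x~_i is sampled from each point's maximum compatible rule).
n, d_Z, d_X, k = 200, 4, 6, 10
Z = rng.normal(size=(n, d_Z))
X_tilde = rng.normal(size=(n, d_X))

def knn_decode(z0, Z, X_tilde, k):
    """Decode z0 as a weighted average of its k nearest neighbors' synthetic inputs."""
    d = np.linalg.norm(Z - z0, axis=1)   # distances to all training points
    K = np.argsort(d)[:k]                # indices of the k nearest neighbors
    w = 1.0 / (d[K] + 1e-12)             # inverse-distance weights
    w /= w.sum()                         # normalize onto the simplex Delta^{k-1}
    return w @ X_tilde[K]                # x_hat_0j = sum_i w_i(z0) * x~_ij

x_hat = knn_decode(Z[0], Z, X_tilde, k)
print(x_hat.shape)
```

For categorical features, the weighted average would be replaced by a weighted majority vote over neighbor labels, as described above.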
However, it is uniquely well suited to spectral embedding techniques, which optimally preserve kernel structure with respect to $L_2$ norms in the latent space.

5 Experiments

In this section, we present a range of experimental results on visualization, reconstruction, and denoising. Further details on all experiments can be found in Appx. B, along with additional results. Code for reproducing all figures and tables is included in the supplement.

Visualization. As a preliminary proof of concept, we visualize the embeddings of a RF classifier with 200 trees as it trains on a subset of the MNIST dataset [26] including samples with labels 3, 4, 8, and 9 (see Fig. 2). Plotting a sequence of diffusion maps with $d_Z = 2$ at increasing tree depth, we find the model learning to distinguish between the four digits, which gradually drift into their own regions of the latent space. Early in training, the data are clumped around the origin. As depth increases, the manifold blooms and samples concentrate by class label. KPC1 appears to separate 3 and 8 from 4 and 9, which makes sense given that these respective pairs are often hard to distinguish in some handwritten examples. Meanwhile, KPC2 further subdivides 3 from 8. The relative proximity of
4’s and 9’s demonstrates that the RF is somewhat uncertain about these samples, although with extra dimensions we find clearer separation (not shown). In other words, the embeddings suggest a highly interpretable recursive partition, as we might expect from a single decision tree.

Reconstruction. We limit our decoding experiments in this section to the $k$-NN method, which proved the fastest and most accurate in our experiments (for a comparison, see Appx. B.2). Henceforth, this is what we refer to as the RF autoencoder (RFAE).

[Figure 3: MNIST digit reconstructions with varying latent dimension sizes $d_Z \in \{2, 4, 8, 16, 32\}$; original images are displayed in the bottom row.]

As an initial inspection of RFAE’s reconstruction behavior, we autoencode the first occurrence of each digit in the MNIST test set for varying latent dimensionalities $d_Z \in \{2, 4, 8, 16, 32\}$ in Fig. 3. For this experiment, we fit an (unsupervised) completely random forest [12] with $B = 1000$ trees, train the encoder on full training data, project the test samples into $\mathcal{Z}$ via Nyström, and decode them back using $k = 50$ nearest neighbors. Although RFAEs are not optimized for image data, the reconstructions produce mostly recognizable digits even with very few latent dimensions, with outputs that partially correspond to the wrong class. Best results can be observed at $d_Z = 32$, where the reconstructions appear quite similar to the originals. Additional results examining the influence of other parameters are presented in Appx. B.2.

[Figure 4: Compression-distortion trade-off on twenty benchmark tabular datasets (panels include abalone, adult, banknote, bc, car, churn, credit, diabetes, dry_bean, forestfires, king, marketing, mushroom, obesity, plpn, spambase, student, telco, and wqhd), comparing RFAE, TVAE, TTVAE, AE, and VAE. Shading represents standard errors across ten bootstraps.]

Next, we compare RFAE’s compression-distortion trade-off against two state-of-the-art neural architectures for autoencoding tabular data (TVAE and TTVAE), along with standard and variational autoencoders (AE and VAE, respectively) that are not optimized for this task. Although there are some other notable deep learning algorithms designed for tabular data (e.g., CTGAN [97], TabSyn [100], and TabPFN [51]), these do not come with inbuilt methods for decoding back to the input space at variable compression ratios. We also do not include eForest [30], another RF-based autoencoder, because it only works with a fixed $d_Z = 2 d_X$ and is not capable of compression. For RFAE, we use the unsupervised ARF algorithm [94] with 500 trees and set $k = 20$ for decoding. In standard AEs, reconstruction error is generally estimated via $L_2$ loss. This is not sensible with a mix of continuous and categorical data, so we create a combined measure that evaluates distortion on continuous variables via $1 - R^2$ (i.e., the proportion of variance unexplained) and categorical variables via classification error. Since both measures are on the unit interval, so too is their average across all $d_X$ features. We plot this measure across a range of compression factors (i.e., inverse compression ratios $d_Z/d_X$) in 20 benchmark tabular datasets
(see Fig. 4). For more details on these datasets, see Appx. B.1, Table 1. We evaluate performance over ten bootstrap samples at each compression factor, testing on the randomly excluded out-of-bag data. We find that RFAE is competitive in all settings, and has the best average performance in 12 out of 20 datasets (see Appx. B.1, Table 2).

Denoising. As a final experiment, we consider a denoising example with single-cell RNA-sequencing (scRNA-seq) data. Pooling results from different labs is notoriously challenging in scRNA-seq due to technical artifacts collectively known as “batch effects” [60]. We propose to harmonize data across batches by training a RFAE on a large baseline study and passing new samples through the autoencoding pipeline. As an illustration, we compare two studies of the mouse brain transcriptome. Using the top $d_X = 5000$ genes, we learn a $d_Z = 64$-dimensional embedding of the Zeisel et al. [99] dataset ($n = 2874$). Our RF is a completely random forest with $B = 1000$ trees. We project the Tasic et al. [86] data ($m = 1590$) into the latent space and decode using the top $k = 100$ nearest neighbors. Results are presented in Fig. 5. To avoid potential biases from reusing our own embeddings, we compare original and denoised samples using PCA [55] and tSNE [88], two dimensionality reduction techniques that are widely used in scRNA-seq. In both cases, we find that denoising with RFAE helps align the manifolds, thereby minimizing batch effects.

[Figure 5: Denoising with RFAE alleviates batch effects in scRNA-seq data. Panels show (A) tSNE and (B) PCA projections of original and batch-corrected samples from Tasic et al., 2016 and Zeisel et al., 2015.]

6 Discussion

The building blocks of our encoding scheme are well established.
Breiman himself took a kernel perspective on RFs [13], a direction that has been picked up by numerous authors since [24, 78, 4]. The theory of diffusion maps and KPCA goes back some twenty years [21, 22, 75]. However, just as much RF theory has focused on idealized variants of the algorithm [11], no prior works appear to have studied the properties of the true RF kernel, opting instead to analyze simpler approximations. And while there have been some previous attempts to generate RF embeddings [80, 71], these have been largely heuristic in nature. By contrast, we provide a principled approach to dimensionality reduction in RFs, along with various novel decoding strategies.

One notable difference between our method and autoencoding neural networks is that RFAE is not trained end-to-end. That is, while a deep autoencoder simultaneously learns to encode and decode, RFAE is effectively a post-processing procedure for a pre-trained RF, with independent modules for encoding and decoding. We highlight that end-to-end training represents a fundamentally different objective. Whereas traditional autoencoders are necessarily unsupervised, our method works in tandem with either supervised or unsupervised RFs. As such, the goal is not necessarily to learn an efficient representation for its own sake, but rather to reveal the inner workings of a target model. One upshot of this decoupling is that our split relabeling
and $k$-NN decoders can work in tandem with any valid encoding scheme. For instance, we could relabel an RF’s splits to approximate the behavior of sample points in principal component space, or indeed any $\mathcal{Z}$ for which we have a map $g : \mathcal{X} \mapsto \mathcal{Z}$.

We highlight two notable limitations of our approach. First, the computational demands of our decoding strategies are nontrivial. (For a detailed analysis, see Appx. D.) Second, when using the $k$-NN approach, results will vary with the choice of $k$. However, we observe that autoencoding is a difficult task in general, and top deep learning models pose far greater computational burdens than RFAE. Moreover, having just a single hyperparameter to worry about is a rare luxury in this field, where leading algorithms often require bespoke architectures and finely tuned regularization penalties. Compared to the leading alternatives, RFAEs are relatively lightweight and simple.

Autoencoders are often motivated by appeals to the minimum description length principle [73, 50, 44]. Information theory provides a precise formalism for modeling the communication game that arises when one agent (say, Alice) wants to send a message to another (say, Bob) using a code that is maximally efficient with minimal information loss. This is another way to conceive of RFAE—as a sort of cryptographic protocol, in which Alice and Bob use a shared key (the RF itself) to encrypt and decrypt messages in the form of latent vectors $z$, which are presumably more compact (and not much less informative) than the original message $x$. Several authors have persuasively argued that learning expressive, efficient representations is central to the success of deep neural networks [8, 39]. Our work highlights that RFs do something very similar under the hood, albeit through entirely different mechanisms. This insight has implications for how we use tree-based ensembles and opens up new lines of research for this function class.
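The RF kernel underlying this discussion can be computed directly from any fitted forest's leaf assignments: samples that colocate in a leaf contribute the reciprocal of that leaf's sample size, averaged over trees. The sketch below uses an out-of-the-box scikit-learn forest purely to illustrate the kernel computation (it does not reproduce the paper's honest-tree training conditions), and numerically exhibits the row-stochastic property proved in Appx. A.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Leaf assignments: leaves[i, b] is the leaf that sample i reaches in tree b.
leaves = rf.apply(X)
n, B = leaves.shape

# Per-tree normalized indicator kernel, averaged over trees:
# k^(b)(x_i, x_j) = 1[same leaf] / (leaf sample size), so each row sums to 1.
K = np.zeros((n, n))
for b in range(B):
    same_leaf = (leaves[:, [b]] == leaves[:, b]).astype(float)  # colocation matrix
    K += same_leaf / same_leaf.sum(axis=1, keepdims=True)       # normalize by leaf size
K /= B

print(K.sum(axis=1)[:3])  # each row sums to 1 (stochastic)
```

Since colocating samples share the same leaf size, each per-tree matrix is symmetric, and the averaged kernel matrix is doubly stochastic.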
7 Conclusion

We have introduced novel methods for encoding and decoding data with RFs. The procedure is theoretically sound and practically useful, with a wide range of applications including compression, clustering, data visualization, and denoising. Future work will investigate extensions to generative modeling, as well as other tree-based algorithms, such as gradient boosting machines [31, 18]. Another promising direction is to use the insights from this study to perform model distillation [49, 33], compressing the RF into a more compact form with similar or even identical behavior. This is not possible with current methods, which still require the original RF to compute adjacencies and look up leaf bounds.

Acknowledgements

This research was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) [Grant reference number EP/Y035216/1] Centre for Doctoral Training in Data-Driven Health (DRIVE-Health) at King’s College London and by the German Research Foundation (DFG), Emmy Noether Grant 437611051.

References

[1] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337–404, 1950.
[2] Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. Deep reinforcement learning: A brief survey. IEEE Signal Processing Magazine, 34(6):26–38, 2017.
[3] Susan Athey, Julie Tibshirani, and Stefan Wager. Generalized random forests. Ann. Statist.
, 47(2):1148–1178, 2019.
[4] M. Balog, B. Lakshminarayanan, Z. Ghahramani, D. M. Roy, and Y. W. Teh. The Mondrian kernel. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence, pages 32–41, 2016.
[5] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[6] Mikhail Belkin and Partha Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 74(8):1289–1308, 2008.
[7] Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux, Jean-François Paiement, Pascal Vincent, and Marie Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197–2219, 2004.
[8] Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:1798–1828, 2012.
[9] Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer, New York, 2003.
[10] Gérard Biau. Analysis of a random forests model. J. Mach. Learn. Res., 13:1063–1095, 2012.
[11] Gérard Biau and Erwan Scornet. A random forest guided tour. TEST, 25(2):197–227, 2016.
[12] Gérard Biau, Luc Devroye, and Gábor Lugosi. Consistency of random forests and other averaging classifiers. J. Mach. Learn. Res., 9(66):2015–2033, 2008.
[13] Leo Breiman. Some infinity theory for predictor ensembles. Technical Report 579, Statistics Department, UC Berkeley, 2000.
[14] Leo Breiman. Random forests. Mach. Learn., 45(1):1–33, 2001.
[15] Leo Breiman, Jerome Friedman, C. J. Stone, and R. A. Olshen. Classification and Regression Trees. Taylor & Francis, Boca Raton, FL, 1984.
[16] Frederick Campbell and Genevera I. Allen. Within group variable selection through the exclusive lasso. Electron. J. Stat., 11(2):4220–4257, 2017.
[17] Matias D. Cattaneo, Jason M. Klusowski, and William G.
Underwood. Inference with Mondrian random forests. arXiv preprint, 2310.09702, 2023.
[18] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794, 2016.
[19] Xi Chen and Hemant Ishwaran. Random forests for genomic data analysis. Genom., 99(6):323–329, 2012.
[20] F. Chung. Spectral Graph Theory. Conference Board of the Mathematical Sciences, Washington, 1997.
[21] R. R. Coifman, S. Lafon, A. B. Lee, M. Maggioni, B. Nadler, F. Warner, and S. W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. Proc. Natl. Acad. Sci., 102(21):7426–7431, 2005.
[22] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
[23] Alvaro Correia, Robert Peharz, and Cassio P de Campos. Joints in random forests. In Advances in Neural Information Processing Systems, volume 33, pages 11404–11415, 2020.
[24] Alex Davies and Zoubin Ghahramani. The random forest kernel and other kernels for big data from random partitions. arXiv preprint, 1402.4293, 2014.
[25] Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, Marcus Hutter, and Joel Veness. Language modeling is compression. In The 12th International Conference on Learning Representations, 2024.
[26] Li Deng. The MNIST database of handwritten digit
images for machine learning research [best of the web]. IEEE Signal Processing Magazine, 29(6):141–142, 2012.
[27] Misha Denil, David Matheson, and Nando De Freitas. Narrowing the gap: Random forests in theory and in practice. In Proceedings of the 31st International Conference on Machine Learning, pages 665–673, 2014.
[28] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Springer-Verlag, New York, 1996.
[29] Dheeru Dua and Casey Graff. UCI machine learning repository, 2019. URL http://archive.ics.uci.edu/ml.
[30] Ji Feng and Zhi-Hua Zhou. Autoencoder by forest. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018.
[31] Jerome H. Friedman. Greedy function approximation: A gradient boosting machine. Ann. Stat., 29(5):1189–1232, 2001.
[32] Jerome H. Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw., 33(1):1–22, 2010.
[33] Nicholas Frosst and Geoffrey Hinton. Distilling a neural network into a soft decision tree. arXiv preprint, 1711.09784, 2017.
[34] Kenji Fukumizu, Arthur Gretton, Xiaohai Sun, and Bernhard Schölkopf. Kernel measures of conditional dependence. In Advances in Neural Information Processing Systems, volume 20, 2007.
[35] Robin Genuer. Variance reduction in purely random forests. J. Nonparametr. Stat., 24(3):543–562, 2012.
[36] Pierre Geurts, Damien Ernst, and Louis Wehenkel. Extremely randomized trees. Mach. Learn., 63(1):3–42, 2006.
[37] Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. In Advances in Neural Information Processing Systems, volume 32, 2019.
[38] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring.
Science, 286(5439):531–537, 1999.
[39] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
[40] Arthur Gretton, Karsten Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alex Smola. A kernel method for the two-sample-problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems, volume 19, 2006.
[41] Arthur Gretton, Kenji Fukumizu, Choon Teo, Le Song, Bernhard Schölkopf, and Alex Smola. A kernel statistical test of independence. In Advances in Neural Information Processing Systems, volume 20, 2007.
[42] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. J. Mach. Learn. Res., 13(25):723–773, 2012.
[43] Leo Grinsztajn, Edouard Oyallon, and Gael Varoquaux. Why do tree-based models still outperform deep learning on typical tabular data? In 36th Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022.
[44] Peter Grünwald. The Minimum Description Length Principle. The MIT Press, Cambridge, MA, 2007.
[45] László Györfi, Michael Kohler, Adam Krzyżak, and Harro Walk. A Distribution-Free Theory of Nonparametric Regression. Springer-Verlag, New York, 2002.
[46] Fredrik Hallgren. Kernel PCA with the Nyström method. arXiv preprint, 2109.05578, 2021.
[47] Jihun Ham, Daniel D. Lee, Sebastian Mika, and Bernhard Schölkopf. A kernel view
of the dimensionality reduction of manifolds. In Proceedings of the 21st International Conference on Machine Learning, 2004.
[48] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017.
[49] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint, 1503.02531, 2015.
[50] Geoffrey E Hinton and Richard Zemel. Autoencoders, minimum description length and Helmholtz free energy. In Advances in Neural Information Processing Systems, volume 6, 1993.
[51] Noah Hollmann, Samuel Müller, Lennart Purucker, Arjun Krishnakumar, Max Körfer, Shi Bin Hoo, Robin Tibor Schirrmeister, and Frank Hutter. Accurate predictions on small data with a tabular foundation model. Nature, 637(8045):319–326, 2025.
[52] Allison Marie Horst, Alison Presmanes Hill, and Kristen B Gorman. palmerpenguins: Palmer Archipelago (Antarctica) penguin data, 2020. URL https://allisonhorst.github.io/palmerpenguins/. R package version 0.1.0.
[53] M. Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer Berlin Heidelberg, Berlin, 2004.
[54] Christopher R John, David Watson, Michael R Barnes, Costantino Pitzalis, and Myles J Lewis. Spectrum: fast density-aware spectral clustering for single and multi-omic data. Bioinformatics, 36(4):1159–1166, 2019.
[55] I.T. Jolliffe. Principal Component Analysis. Springer, New York, second edition, 2002.
[56] Richard M. Karp. Reducibility among Combinatorial Problems, pages 85–103. Springer US, Boston, MA, 1972.
[57] Risi Imre Kondor and John D. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In Proceedings of the 19th International Conference on Machine Learning, pages 315–322, 2002.
[58] Pawel Ladosz, Lilian Weng, Minwoo Kim, and Hyondong Oh. Exploration in deep reinforcement learning: A survey. Information Fusion, 85:1–22, 2022.
[59] Balaji Lakshminarayanan, Daniel M Roy, and Yee Whye Teh. Mondrian forests: Efficient online random forests. In Advances in Neural Information Processing Systems, volume 27, 2014.
[60] Jeffrey T. Leek, Robert B. Scharpf, Héctor Corrada Bravo, David Simcha, Benjamin Langmead, W. Evan Johnson, Donald Geman, Keith Baggerly, and Rafael A. Irizarry. Tackling the widespread and critical impact of batch effects in high-throughput data. Nat. Rev. Genet., 11(10):733–739, 2010.
[61] Yi Lin and Yongho Jeon. Random forests and adaptive nearest neighbors. J. Am. Stat. Assoc., 101(474):578–590, 2006.
[62] King’s College London. King’s computational research, engineering and technology environment (CREATE). https://doi.org/10.18742/rnvf-m076, 2022. Retrieved May 10, 2025.
[63] Gábor Lugosi and Andrew Nobel. Consistency of data-driven histogram methods for density estimation and classification. Ann. Stat., 24(2):687–706, 1996.
[64] David J.C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, Cambridge, 2003.
[65] Nicolai Meinshausen. Quantile regression forests. J. Mach. Learn. Res., 7:983–999, 2006.
[66] Meixia Lin, Yancheng Yuan, Defeng Sun, and Kim-Chuan Toh. A highly efficient algorithm for solving exclusive lasso problems. Optim. Methods Softw., 39(3):489–518, 2024.
[67] Sebastian Mika, Bernhard Schölkopf, Alex Smola, Klaus-Robert Müller, Matthias Scholz, and Gunnar Rätsch. Kernel PCA and de-noising in feature spaces. In Advances in Neural Information Processing Systems, volume 11, 1998.
[68] Boaz Nadler, Stéphane Lafon, Ronald
R. Coifman, and Ioannis G. Kevrekidis. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Appl. Comput. Harmon. Anal., 21(1):113–127, 2006. Special Issue: Diffusion Maps and Wavelets.
[69] Andrew Ng, Michael Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, volume 14. The MIT Press, 2001.
[70] Sambit Panda, Cencheng Shen, and Joshua T. Vogelstein. Learning interpretable characteristic kernels via decision forests. arXiv preprint, 1812.00029, 2018.
[71] Konstantinos Pliakos and Celine Vens. Mining features for biomedical data using clustering tree ensembles. J. Biomed. Inform., 85:40–48, 2018.
[72] Carl Edward Rasmussen and Christopher K.I. Williams. Gaussian Processes for Machine Learning. The MIT Press, Cambridge, MA, 2006.
[73] Jorma Rissanen. Stochastic Complexity in Statistical Inquiry. World Scientific, Singapore, 1989.
[74] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, Cambridge, MA, 2001.
[75] Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998.
[76] Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Toward causal representation learning. Proc. IEEE, 109(5):612–634, 2021.
[77] Erwan Scornet. On the asymptotics of random forests. J. Multivar. Anal., 146:72–83, 2016.
[78] Erwan Scornet. Random forests and kernel methods. IEEE Trans. Inf. Theory, 62(3):1485–1500, 2016.
[79] Jianbo Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 22(8):888–905, 2000.
[80] Tao Shi and Steve Horvath. Unsupervised learning with random forest predictors. J. Comput. Graph. Stat., 15(1):118–138, 2006.
[81] Ravid Shwartz-Ziv and Amitai Armon. Tabular data: Deep learning is not all you need. Inf. Fusion, 81:84–90, 2022.
[82] Bharath K. Sriperumbudur, Kenji Fukumizu, and Gert R.G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. J. Mach. Learn. Res., 12(70):2389–2410, 2011.
[83] Ingo Steinwart. On the influence of the kernel on the consistency of support vector machines. J. Mach. Learn. Res., 2:67–93, March 2002.
[84] Charles J. Stone. Consistent nonparametric regression. Ann. Statist., 5(4):595–620, 1977.
[85] Cheng Tang, Damien Garreau, and Ulrike von Luxburg. When do random forests fail? In Advances in Neural Information Processing Systems, volume 31, 2018.
[86] Bosiljka Tasic, Vilas Menon, Thuc Nghi Nguyen, Tae Kyung Kim, Tim Jarsky, Zizhen Yao, Boaz Levi, Lucas T Gray, Staci A Sorensen, Tim Dolbeare, Darren Bertagnolli, Jeff Goldy, Nadiya Shapovalova, Sheana Parry, Changkyu Lee, Kimberly Smith, Amy Bernard, Linda Madisen, Susan M Sunkin, Michael Hawrylycz, Christof Koch, and Hongkui Zeng. Adult mouse cortical cell taxonomy revealed by single cell transcriptomics. Nat. Neurosci., 19(2):335–346, 2016.
[87] Jakub Tomczak. Deep Generative Modeling. Springer, New York, 2022.
[88] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. J. Mach. Learn. Res., 9(86):2579–2605, 2008.
[89] Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, and Luis Torgo. OpenML: networked science in machine learning. ACM SIGKDD Explorations Newsletter, 15(2):49–60, 2013.
[90] Antonio Vergari, Robert Peharz, Nicola Di
Mauro, Alejandro Molina, Kristian Kersting, and Floriana Esposito. Sum-product autoencoding: Encoding and decoding representations using sum-product networks. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018.
[91] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11(110):3371–3408, 2010.
[92] Ulrike von Luxburg. A tutorial on spectral clustering. Stat. Comput., 17(4):395–416, 2007.
[93] Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests. J. Am. Stat. Assoc., 113(523):1228–1242, 2018.
[94] David S. Watson, Kristin Blesch, Jan Kapar, and Marvin N. Wright. Adversarial random forests for density estimation and generative modeling. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, pages 5357–5375, 2023.
[95] Marvin N. Wright and Andreas Ziegler. ranger: A fast implementation of random forests for high dimensional data in C++ and R. J. Stat. Softw., 77(1), 2017.
[96] Qinghua Wu and Jin-Kao Hao. A review on algorithms for maximum clique problems. European Journal of Operational Research, 242(3):693–709, 2015.
[97] Lei Xu, Maria Skoularidou, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Modeling tabular data using conditional GAN. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32, 2019.
[98] Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On completeness-aware concept-based explanations in deep neural networks. In Advances in Neural Information Processing Systems, volume 33, pages 20554–20565, 2020.
[99] Amit Zeisel, Ana B.
Muñoz-Manchado, Simone Codeluppi, Peter Lönnerberg, Gioele La Manno, Anna Juréus, Sueli Marques, Hermany Munguba, Liqun He, Christer Betsholtz, Charlotte Rolny, Gonçalo Castelo-Branco, Jens Hjerling-Leffler, and Sten Linnarsson. Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq. Science, 347(6226):1138–1142, 2015.
[100] Hengrui Zhang, Jiani Zhang, Zhengyuan Shen, Balasubramaniam Srinivasan, Xiao Qin, Christos Faloutsos, Huzefa Rangwala, and George Karypis. Mixed-type tabular data synthesis with score-based diffusion in latent space. In The Twelfth International Conference on Learning Representations, 2024.
[101] Yang Zhou, Rong Jin, and Steven Chu-Hong Hoi. Exclusive lasso for multi-task feature selection. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pages 988–995, 2010.

A Proofs

Since several of our results rely on RF regularity conditions, we review these here for completeness. We say that a sequence of functions $\{f_n\}$ is universally consistent if it converges in probability on any target function. That is, for any $f^* \in C(\mathcal{X})$ and all $\epsilon > 0$, we have:
$$\lim_{n \to \infty} P(\|f^* - f_n\|_\infty > \epsilon) = 0.$$
Under certain assumptions, it can be shown that RFs are universally consistent in this sense [65, 27, 11, 77, 93]. Specifically, we assume:

(A1) Training data for each tree is split into two subsets: one to learn split parameters, the other to assign leaf labels.
(A2) Trees are grown on subsamples rather than bootstraps, with subsample size $n(b)$ satisfying $n(b) \to \infty$, $n(b)/n \to 0$ as $n \to \infty$.
(A3) At each internal node, the probability that a tree splits
https://arxiv.org/abs/2505.21441v1
on any given $X_j$ is bounded from below by some $\rho > 0$.
(A4) Every split puts at least a fraction $\gamma \in (0, 0.5]$ of the available observations into each child node.
(A5) For each tree $b \in [B]$, the total number of leaves $d_\Phi^{(b)}$ satisfies $d_\Phi^{(b)} \to \infty$, $d_\Phi^{(b)}/n \to 0$ as $n \to \infty$.

Under (A1)-(A5), decision trees satisfy the criteria of Stone's theorem [84] and are therefore universally consistent (see Devroye et al. [28, Thm. 6.1] and Györfi et al. [45, Thm. 4.2]). The consistency of the ensemble follows from the consistency of the basis functions [12]. There is some debate in the literature as to whether these assumptions are necessary for universal consistency; (A1) and (A2) in particular may be overly strong, but they are provably sufficient. See Biau [10, Rmk. 8], Wager and Athey [93, Appx. B], and Tang et al. [85] for a discussion.

A.1 Proof of Thm. 3.4 (RF kernel properties)

This theorem makes three separate claims: that RF kernels are (a) PSD and stochastic; (b) asymptotically universal; and (c) asymptotically characteristic.

(a) PSD

Take PSD first. It is well known that any convex combination of PSD kernels is PSD [74], so to secure part (a) it is sufficient to prove that the standard decision tree kernel is PSD. This is simply a normalized indicator kernel:
$$k^{\mathrm{DT}}(x, x') = \frac{k^{(b)}(x, x')}{\sum_{i=1}^n k^{(b)}(x, x_i)},$$
which either evaluates to zero (if the samples do not colocate) or the reciprocal of the leaf sample size (if they do). To show that $k^{\mathrm{DT}}$ is PSD, we take a constructive approach in which we explicitly define the canonical feature map $\phi: \mathcal{X} \mapsto \mathcal{H}$, which maps input vectors to an inner product space $\mathcal{H}$. This suffices to establish the PSD property, since for any finite dataset the resulting kernel matrix $K^{\mathrm{DT}} \in [0,1]^{n \times n}$ is a Gram matrix with entries $k^{\mathrm{DT}}_{ij} = \langle \phi(x_i), \phi(x_j) \rangle$. As described in Sect. 4, the DT feature map for tree $b$ is given by:
$$\phi^{(b)}(x) = \pi^{(b)}(x) \odot \sqrt{s^{(b)}},$$
where $\pi^{(b)}: \mathcal{X} \mapsto \{0,1\}^{d_\Phi^{(b)}}$ is a standard basis vector indicating which leaf $x$ routes to in tree $b$, and $s^{(b)} \in \{1/[n-1]\}^{d_\Phi^{(b)}}$ is a vector of corresponding inverse leaf sample sizes. Concatenating these maps over $B$ trees and taking the inner product for sample pairs, we get an explicit formula for the RF feature map, thereby establishing that $k^{\mathrm{RF}}$ is PSD.

Part (a) makes an additional claim, however: that $k^{\mathrm{RF}}_n$ is stochastic. This means that, for any $x \in \mathcal{X}$, the kernel $k^{\mathrm{RF}}_n(x, x_i)$ defines a probability mass function over the training data as $i$ ranges from 1 to $n$. It is easy to see that the kernel is nonnegative, as all entries are either zero (if samples do not colocate) or some positive fraction representing average inverse leaf sample size across the forest (if they do). All that remains then is to show that the values sum to unity. Consider the sum over all $n$ training points:
$$\sum_{i=1}^n k^{\mathrm{RF}}_n(x, x_i) = \sum_{i=1}^n \frac{1}{B} \sum_{b=1}^B \left( \frac{k^{(b)}(x, x_i)}{\sum_{j=1}^n k^{(b)}(x, x_j)} \right) = \frac{1}{B} \sum_{b=1}^B \sum_{i=1}^n \left( \frac{k^{(b)}(x, x_i)}{\sum_{j=1}^n k^{(b)}(x, x_j)} \right) = \frac{1}{B} \sum_{b=1}^B \frac{\sum_{i=1}^n k^{(b)}(x, x_i)}{\sum_{j=1}^n k^{(b)}(x, x_j)} = \frac{1}{B} \sum_{b=1}^B 1 = 1.$$
Since the kernel is symmetric, the training matrix $K \in [0,1]^{n \times n}$ of kernel entries is doubly stochastic (i.e., all
rows and columns sum to one).

(b) Universal

Recall that the Moore-Aronszajn theorem tells us that every PSD kernel defines a unique RKHS [1]. Given that $k^{\mathrm{RF}}_n$ is PSD, universality follows if we can show that the associated RKHS $\mathcal{H}$ is dense in $C(\mathcal{X})$. Of course, this is provably false at any fixed $n$, as $d_\Phi = o(n)$ by (A5), and a finite-dimensional $\mathcal{H}$ necessarily contains "gaps", i.e., some functions $f^* \in C(\mathcal{X})$ such that $\langle f^*, h \rangle = 0$ for all $h \in \mathcal{H}$. However, as $n$ and $d_\Phi$ grow, these gaps begin to vanish. Since these two parameters increase at different rates, and it is the latter that more directly controls function complexity, we interpret the subscript $\ell$ on $\mathcal{H}_\ell$ as indexing the leaf count. (As noted above, we focus on the single tree case, as ensemble consistency follows immediately from this.) The following lemma sets up our asymptotic universality result.

Lemma A.1 (RF subalgebra). Let $\mathcal{H}_\ell$ denote the set of continuous functions on the compact metric space $\mathcal{X}$ representable by a tree with $\ell$ leaves, trained under regularity conditions (A1)-(A5). Define:
$$\mathcal{A} := \bigcup_{\ell=1}^{\infty} \mathcal{H}_\ell.$$
Then $\mathcal{A}$ is a subalgebra of $C(\mathcal{X})$ that contains the constant function and separates points.

Proof. The lemma makes three claims, each of which we verify in turn.

(1) $\mathcal{A}$ is a subalgebra of $C(\mathcal{X})$. Each $\mathcal{H}_\ell$ consists of continuous, piecewise constant functions on $\mathcal{X}$, induced by recursive binary partitions of $\mathcal{X}$ into $\ell$ leaf regions. These spaces are closed under addition, scalar multiplication, and multiplication, as the sum or product of two piecewise constant functions is piecewise constant over the common refinement of their partitions. Since $\mathcal{A}$ is the union of these $\mathcal{H}_\ell$, it is closed under addition, multiplication, and scalar multiplication.

(2) $\mathcal{A}$ contains the constant functions. This follows immediately, as any tree (and hence any $\mathcal{H}_\ell$) can represent constant functions; for instance, a trivial tree with no splits assigns the same value to all points.

(3) $\mathcal{A}$ separates points. By regularity conditions (A3) and (A4), every coordinate has a nonzero probability $\rho > 0$ of being selected for splitting at any node, and every split allocates at least a fraction $\gamma \in (0, 0.5]$ of the points to each child node. Meinshausen [65, Lemma 2] has shown that, under these conditions, the diameter of each leaf goes to zero in probability. This amounts to an asymptotic injectivity guarantee. Since $\mathcal{X}$ is compact, for any pair of distinct points $x, x' \in \mathcal{X}$, there exists an index $\ell$ such that some $f_\ell \in \mathcal{H}_\ell$ assigns different values to $x$ and $x'$. In other words, some tree in $\mathcal{A}$ is guaranteed to route $x$ and $x'$ to different leaves.

We now invoke the Stone-Weierstrass theorem to conclude the density of $\mathcal{A}$.

Theorem A.2 (Stone–Weierstrass). Let $\mathcal{X}$ be a compact Hausdorff space. If a subalgebra $\mathcal{A}$ of $C(\mathcal{X})$ contains the constant functions and separates points, then $\mathcal{A}$ is dense in $C(\mathcal{X})$ with respect to the uniform norm.

Combining Lemma A.1 with this classical result, we conclude that the sequence $\{\mathcal{H}_\ell\}$ converges on a universal RKHS.

(c) Characteristic

Item (c) follows from (b) under our definition of universality. This was first shown by Gretton et al. [40, Thm. 3] in the non-asymptotic regime (although Sriperumbudur et al. [82] prove that the properties can come apart under slightly different notions of universality). We adapt the result simply by substituting a sufficiently close approximation in $\mathcal{H}$, where proximity is defined
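The stochasticity argument in part (a) can be checked numerically on a toy forest. The sketch below is a minimal illustration, not code from the paper: it assumes per-tree leaf assignments are given directly rather than learned from data, and the helper name `rf_kernel` is ours. It builds the normalized kernel matrix $K$ on the training points and verifies that it is symmetric, PSD, and doubly stochastic.

```python
import numpy as np

def rf_kernel(leaf_ids):
    """Normalized RF kernel matrix over the training points.

    leaf_ids: (B, n) integer array; leaf_ids[b, i] is the leaf that
    training point i falls into in tree b (an illustrative stand-in
    for leaves produced by an actual tree-growing procedure).
    """
    B, n = leaf_ids.shape
    K = np.zeros((n, n))
    for b in range(B):
        # Indicator kernel k^(b): 1 iff two points colocate in tree b.
        same_leaf = (leaf_ids[b][:, None] == leaf_ids[b][None, :]).astype(float)
        # Row-normalize: each row sum equals the leaf sample size,
        # so entries become zero or the reciprocal leaf sample size.
        K += same_leaf / same_leaf.sum(axis=1, keepdims=True)
    return K / B  # average over the B trees

# Toy forest: B = 2 trees over n = 4 training points.
leaf_ids = np.array([[0, 0, 1, 1],
                     [0, 1, 1, 1]])
K = rf_kernel(leaf_ids)

# Doubly stochastic: every row and every column sums to one.
assert np.allclose(K.sum(axis=1), 1.0)
assert np.allclose(K.sum(axis=0), 1.0)
# Symmetric and PSD: no eigenvalue below zero (up to rounding).
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) >= -1e-12)
```

Each per-tree matrix is block diagonal with blocks of the form $(1/m)J_m$ for a leaf of size $m$, which is PSD, and the averaged matrix inherits both the PSD and doubly stochastic properties, matching the derivation above.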