Gowalla — 19747 294983 5.85×10−4
Yelp2018 31668 38048 1561406 1.30×10−3
Movielens-1M 6940 3706 1000209 3.89×10−2

4.1.2 Prediction Metrics. Following the standard evaluation protocols commonly used for CF tasks [28, 36], we employ a full-rank evaluation approach in which, for each user, all uninteracted items are ranked alongside the positive items from the test set. We utilize two widely used metrics, Recall@N and NDCG@N [27, 29], to assess the prediction performance of the trained model.

4.1.3 Unlearning Metrics. Following the common validation methods of other unlearning works [30, 31], we conduct adversarial attacks and utilize Membership Inference (MI) [21] as the metric to evaluate unlearning effectiveness. MI indicates whether the unlearned model leaks information about the unlearned edges: specifically, whether the existence of unlearned edges can be inferred from the unlearned model's embeddings, or remains completely concealed. We use MI-BF to denote the ratio of the average probability of presence of the edges in E_Δ before unlearning to that after unlearning, and MI-NG to measure the ratio of the average recommended probability of negative-sampled edges to that of the unlearned edges in the unlearned model M_u. Both MI-BF and MI-NG should exceed 1, and higher is better.

4.1.4 Adversarial Attacks. Following other unlearning works [30, 31], we conduct adversarial attacks to validate our proposed paradigm. Specifically, we select adversarial edges by randomly sampling among the least probable edges, with their probabilities given by a GCN trained on the original dataset. We then train the GNN models on the attacked graph that incorporates these adversarial edges. Subsequently, we employ all baseline unlearning methods to unlearn the adversarial edges, and evaluate the unlearning performance of each baseline using the aforementioned metrics.

4.1.5 Backbone Models.
We employ the following GNN models as our backbone models and as the targets for unlearning, to evaluate the effectiveness of our paradigm.
• LightGCN [13] utilizes fundamental GCN architectures to enhance performance in recommendation tasks ("GCN" for short).
• SGL [29] presents multiple techniques for enhancing graph contrastive learning through graph and feature augmentations.
• SimGCL [35] is a contrastive learning model with simple feature-level augmentation techniques utilizing random perturbation.

4.1.6 Baseline Methods. We compare our UnlearnRec paradigm with state-of-the-art baselines from various perspectives.
• GIF [30] is a state-of-the-art method for graph unlearning that employs the influence function (IF) to estimate the shifts in parameters related to the graph structures to be unlearned.
• CEU [31] is a state-of-the-art IF-based method designed to expedite the unlearning process while providing a theoretical guarantee.
• GraphEraser [7] is a partition-based approach to unlearning, in which the graph is divided into multiple shards based on nodes, using embedding clustering and community detection techniques.

SIGIR '25, July 13–18, 2025, Padua, Italy. Guoxuan Chen, Lianghao Xia, and Chao Huang.

These shards are subsequently combined; upon unlearning, the method selectively retrains only the relevant shards.
• RecEraser [4] is a recent partition-based method for recommendation unlearning, which differs from GraphEraser by partitioning edges rather than nodes into shards, to avoid losing edge information. Additionally,
https://arxiv.org/abs/2505.22649v1
it incorporates a more sophisticated attention aggregation method with more parameters.
• Retrain. When unlearning requests are received, a full retraining of the entire graph model is carried out, with model parameters re-initialized. This can be regarded as the ground truth.

4.1.7 Implementation Settings. We develop our UnlearnRec using PyTorch and employ the Adam optimizer with its default parameters. H_0 and W_η in the influence encoder IE(·) are initialized around 0 before pre-training and are fixed during fine-tuning. W_l and b_l in MLP(·) (Eq. 18) are initialized as the identity matrix and 0, respectively, and are non-trainable until the fine-tuning phase. The batch size for pre-training is selected from {512, 1024} and the embedding size is 128 for all models. We use 3 layers for all GNN models, 3 layers for Eqs. 15-16, and 2 layers for MLP(·). ρ ∈ {5, 15, 20}; λ_u is chosen from {1, 0.5, 0.1}; λ_p is tuned from {1, 0.1, 0.01, 0.005, 0.001}; and λ_c is selected from {1e-2, 1e-3, 1e-4}. All temperatures τ are chosen from {0.1, 1, 10}. We use the released code for the baselines with grid search, and we conduct all tests on the same device with an NVIDIA GeForce RTX 3090 GPU.

4.2 Model Utility Analysis (RQ1)
We commence the evaluation of the unlearning utility of the proposed UnlearnRec paradigm in comparison to the baseline methods. The results are tabulated in Table 2, and the following observations can be made based on the Recall@20 and NDCG@20 results.
• Superior performance retention: After unlearning the well-trained model, our method demonstrates excellent performance retention across multiple datasets compared to retraining. Notably, on dense datasets such as Movielens, all baselines maintain relatively good predictive performance. The performance retention of IF-based unlearning methods is superior to that of partition-based methods.
This is because partitioning the graph inevitably disrupts its structure, which negatively impacts predictive performance.
• Drawbacks of partition-based methods: Traditional partition-based methods partition the graph and, upon receiving an unlearning request, retrain only the affected shards. This approach, which disrupts the graph structure, inevitably impacts the predictive performance of the GNN and requires time for unlearning. Within the partition-based category, however, RecEraser slightly outperforms GraphEraser in terms of performance retention. This is because RecEraser partitions graphs based on edges, which is more reasonable for recommendation systems that prioritize interaction edges. Compared to IF-based methods, partition-based methods may exhibit lower predictive performance but offer more controllable unlearning effects; they require careful retraining of the affected shards.
• Limits of IF-based methods: IF-based methods calculate the unlearning impact through mathematical computation, exhibiting excellent performance retention since the unlearning effects can be minimal, with little change to the embedding distribution. Their advantage is computational speed through direct IF calculations. However, this end-to-end estimation may not be accurate for SSL-based GNNs with stochastic structures, leading to influence underestimation and mediocre unlearning effects in practice.

4.3 Model Efficacy Analysis (RQ2)
We proceed to evaluate and analyze the unlearning effects, with the experimental results presented in Table 2. The following discussion primarily revolves around the MI-BF and MI-NG metrics.
• Outstanding unlearning efficacy: Based on the MI-BF and MI-NG metrics,
UnlearnRec shows strong unlearning results. Across datasets and backbones, UnlearnRec achieves MI-BF and MI-NG values above 1. MI-BF > 1 indicates a reduced probability of recommending the edges marked for unlearning compared to before. Similarly, MI-NG > 1 means unlearned edges are less likely to be recommended than negative samples, protecting user privacy after unlearning. This shows that UnlearnRec effectively simulates edge removal through pre-training and identifies these effects via fine-tuning. UnlearnRec works with various SSL-based GNNs and matches each model's embedding distribution through unlearning (detailed later). This comes from using L_M to capture the loss functions, training methods, random processes, and network structure of each GNN, yielding similar embedding distributions.
• Limitations of partition-based methods: As mentioned earlier, partitioning the graph disrupts its overall integrity, negatively impacting predictive performance after unlearning. Additionally, many SSL-based methods are designed to address the sparsity issue in recommendation systems, where the interaction graph is sparse. Consequently, in SSL-based GNNs, when an unlearning request is made, almost all edges and nodes are affected to some degree, since they are involved in the formation and maintenance of self-supervised labels. Therefore, retraining only a subset of shards is unlikely to yield satisfactory unlearning results; in particular, it is challenging to achieve MI-NG > 1.5.
• Ineffectiveness of IFs for SSL-based GNNs: IF-based methods require complex mathematical designs, but the training of SSL-based GNNs is more random and unpredictable. SSL-based GNNs use stochastic network structures and signals, making gradient estimation hard. This makes IF methods less effective for SSL-based GNNs, leading to poor unlearning. Table 2 shows that with the SimGCL and SGL backbones, CEU and GIF perform poorly on MI-NG, sometimes below 1.
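For concreteness, the two membership-inference ratios can be sketched as below. The sigmoid-of-inner-product scoring and the toy probability arrays are illustrative assumptions, not the paper's exact implementation.

```python
import math

def edge_prob(u_emb, i_emb):
    # Assumed scoring: sigmoid of the user-item inner product.
    s = sum(a * b for a, b in zip(u_emb, i_emb))
    return 1.0 / (1.0 + math.exp(-s))

def mean(xs):
    return sum(xs) / len(xs)

def mi_bf(probs_before, probs_after):
    # Avg. probability of the unlearned edges before vs. after unlearning;
    # > 1 means those edges became less likely to be recommended.
    return mean(probs_before) / mean(probs_after)

def mi_ng(probs_unlearned, probs_negative):
    # Negative-sample probability vs. unlearned-edge probability under M_u;
    # > 1 means unlearned edges are now *less* likely than negatives.
    return mean(probs_negative) / mean(probs_unlearned)

# Toy arrays standing in for probabilities read off the model's embeddings.
before, after, neg = [0.9, 0.8, 0.85], [0.2, 0.1, 0.15], [0.3, 0.25, 0.35]
print(mi_bf(before, after))  # > 1: probabilities dropped after unlearning
print(mi_ng(after, neg))     # > 1: unlearned edges fall below negatives
```

In practice the probabilities would be averaged over all edges in E_Δ (and an equally sized negative sample) rather than three toy values.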
This means that while unlearning reduces how often the unlearned edges are recommended, their recommendation rates remain higher than those of negative samples. This suggests that the unlearned edges can still be identified as former positive samples, risking user data leaks.

4.4 Ablation Study (RQ3)
We conducted ablation experiments on the Movielens and Gowalla datasets using the SimGCL model as the backbone, and made the following findings. The results are shown in Table 3.

Pre-training for Recommendation Unlearning. SIGIR '25, July 13–18, 2025, Padua, Italy.

Table 2: Overall unlearning utility and efficacy comparison on Gowalla, Movielens-1M, and Yelp2018 datasets. Each method reports three backbones in the order GCN / SimGCL / SGL.

ML-1M  | Retrain               | RecEraser             | GraphEraser           | CEU                   | GIF                   | Ours
Recall | 0.2287 0.2336 0.2339  | 0.2033 0.2106 0.2099  | 0.2008 0.1995 0.1989  | 0.2219 0.2239 0.2234  | 0.2111 0.2117 0.2177  | 0.2282 0.2336 0.2330
NDCG   | 0.3209 0.3325 0.3328  | 0.2807 0.2978 0.2887  | 0.2709 0.2752 0.2797  | 0.3152 0.3217 0.3163  | 0.2912 0.3067 0.3031  | 0.3206 0.3323 0.3296
MI-BF  | 9.7865 6.2314 20.984  | 1.8103 1.8863 1.9968  | 1.7327 1.9054 2.1676  | 1.0601 1.8883 1.1281  | 1.0426 1.5980 1.1063  | 6.5071 3.1981 14.0285
MI-NG  | 6.2327 3.9845 11.344  | 1.6734 1.2572 1.6589  | 1.3160 1.2202 1.8691  | 0.7462 1.8018 0.8396  | 0.9351 1.6629 0.8123  | 4.2766 2.3575 9.3661

Gowalla | Retrain               | RecEraser             | GraphEraser           | CEU                   | GIF                   | Ours
Recall  | 0.2407 0.2605 0.2502  | 0.2211 0.2367 0.2279  | 0.2191 0.2238 0.2213  | 0.2390 0.2516 0.2421  | 0.2391 0.2486 0.2394  | 0.2398 0.2540 0.2477
NDCG    | 0.1584 0.1703 0.1618  | 0.1432 0.1453 0.1486  | 0.1399 0.1453 0.1429  | 0.1505 0.1618 0.1558  | 0.1508 0.1599 0.1524  | 0.1545 0.1670 0.1611
MI-BF   | 7.6745 15.4290 8.7485 | 1.4975 1.2150 1.7941  | 1.5865 1.0238 1.8937  | 1.1470 1.0322 1.1283  | 1.0030 1.0432 1.0076  | 5.3729 4.3520 2.6285
MI-NG   | 3.9265 7.7154 4.4323  | 0.9473 0.8916 1.0357  | 0.8182 0.6511 0.9995  | 0.5345 0.5644 0.5317  | 0.5059 0.5883 0.5108  | 2.8091 1.8075 1.3369

Yelp   | Retrain               | RecEraser             | GraphEraser           | CEU                   | GIF                   | Ours
Recall | 0.0625 0.0648 0.0635  | 0.0587 0.0614 0.0599  | 0.0574 0.0610 0.0578  | 0.0597 0.0633 0.0621  | 0.0580 0.0630 0.0624  | 0.0617 0.0640 0.0633
NDCG   | 0.0511 0.0531 0.0512  | 0.0475 0.0501 0.0482  | 0.0466 0.0478 0.0460  | 0.0493 0.0516 0.0497  | 0.0461 0.0513 0.0501  | 0.0501 0.0519 0.0513
MI-BF  | 6.2634 11.1054 9.9890 | 1.5473 1.2490 1.4003  | 1.4832 1.2837 1.3896  | 1.1415 1.4724 1.0751  | 1.0113 1.5555 1.0503  | 5.6431 6.0197 4.0866
MI-NG  | 3.3154 5.8891 4.4316  | 0.9043 0.6777 0.6195  | 0.8968 0.6873 0.6536  | 0.6055 0.7977 0.5801  | 0.5188 0.8539 0.5689  | 2.9978 2.4956 2.0492

Table 3: Ablation study of UnlearnRec based on SimGCL.

Variant    | Movielens-1M                  | Gowalla
           | Recall NDCG   MI-BF  MI-NG   | Recall  NDCG   MI-BF  MI-NG
-L_u       | 0.2338 0.3331 1.5347 0.5948  | 0.2543  0.1677 1.3452 0.5639
-L_p       | 0.1863 0.2682 4.8245 3.7239  | 0.17107 0.1117 6.8021 3.3367
-L_c       | 0.2300 0.3262 3.0013 2.1789  | 0.2507  0.1633 4.1979 1.6625
-FineTune  | 0.2211 0.3188 2.2226 1.6998  | 0.2398  0.1563 3.3232 1.3047
Ours       | 0.2336 0.3323 3.1981 2.3575  | 0.2540  0.1670 4.3520 1.8075

• -L_u: When we remove the loss function L_u, we observe that the model's predictive performance remains relatively unchanged in terms of Recall and NDCG. However, there is a significant drop in unlearning efficacy, specifically in MI-BF and MI-NG. This is because L_u plays a role in reducing the probability of recommending edges that need to be unlearned. Nevertheless, we found that even without L_u, MI-BF still exceeds 1, indicating a decrease in the probability of recommending the unlearned edges. This is due to the presence of the BPR loss in L_M, which contributes to reducing the similarity between the two nodes of the interaction pairs that need to be unlearned.
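The BPR loss [24] referenced here takes the standard pairwise form; the sketch below uses illustrative scores and is not the paper's training code.

```python
import math

def bpr_loss(pos_score, neg_score):
    # BPR [24]: -log sigmoid(s_pos - s_neg); near zero when the positive
    # item is ranked above the negative, large when the order is inverted.
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

print(bpr_loss(5.0, -5.0))  # near 0: correct ranking, little gradient
print(bpr_loss(-5.0, 5.0))  # large: inverted ranking is heavily penalized
```

Minimizing this loss pushes the score of whichever pair sits on the negative side downward, which is the mechanism by which the unlearned pairs lose similarity.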
However, solely relying on this BPR loss is insufficient to eliminate the effects of the unlearned edges. Therefore, MI-NG remains below 1, indicating that the probability of recommending unlearned edges is still higher than that of negative samples.
• -L_p: The purpose of L_p is to preserve, to some extent, useful information from the originally trained model. Its goal is to prevent the model from deviating too far from the original embedding distribution during pre-training, where the edges requiring unlearning are randomly simulated. After all, in practice, the edges that need to be unlearned typically constitute a small portion of all edges. Removing L_p significantly compromises predictive performance during unlearning, even though the unlearning effects, as measured by MI-BF and MI-NG, may increase. This trade-off is not worthwhile.
• -L_c: The influence of L_c on the overall outcome, whether utility or unlearning efficacy, is relatively small. L_c can be considered a regularization term, primarily used to smooth the randomly simulated unlearning requests during pre-training. Therefore, the weight of L_c, denoted λ_c, does not need to be large. To some extent, the smoothing effect of L_c on unlearning requests may slightly decrease the probability of recommending interactions similar to the unlearning requests. Generally speaking, however, these interactions are still regarded as positive pairs by the recommendation system and are still likely to
be recommended like other normal positive instances.
• -FineTune: In our adversarial attack experiments, we found that even without fine-tuning the pre-trained influence encoder (IE), we could still achieve good results (both in utility and efficacy) when unlearning requests were received and IE was applied directly to unlearn. This indicates that our pre-trained IE is capable of effectively identifying and mitigating the impact of adversarial edges on the embeddings. In fact, we can regard ΔĒ_0 in IE as learnable noise: when unlearning requests arrive, the embeddings of affected nodes are first masked by the noise ΔĒ_0 in IE, and then fine-tuned to adapt to the specific contextual changes brought by the unlearning requests. Put simply, the pre-training stage is analogous to the noise-injection process of a diffusion model, while the fine-tuning stage is similar to diffusion's reconstruction and generation process [9, 14, 25].

4.5 Embedding Distribution Shift Study (RQ4)
In this section, we discuss the changes in the embedding distributions of various recommendation models before and after unlearning, as well as their comparison with the results obtained from retraining. The predicted score distributions before unlearning are shown in Fig. 3, while the distributions after unlearning are shown in Fig. 2. The dashed lines in the figures represent the mean predicted scores for the corresponding edge types. Based on these results, we make the following observations.
• Adversarial Edges: Before unlearning, the predicted scores for adversarial edges in all models are close to those of positive instances, which is reasonable since they were labeled as positive during training. These adversarial edges are sampled from the set of items that a trained GCN predicts as the least likely to be interacted with by each user.
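The adversarial-edge construction described in Section 4.1.4 can be sketched as follows; the score dictionary, function name, and parameters are assumptions for illustration, with the scores standing in for a trained GCN's predictions.

```python
import random

def sample_adversarial_edges(scores, k_lowest=50, per_user=2, seed=0):
    # scores: {user: {item: predicted score}} from a trained GCN (assumed given).
    # For each user, randomly pick items among the k lowest-scored
    # (least probable) candidates and turn them into fake positive edges.
    rng = random.Random(seed)
    edges = []
    for user, item_scores in scores.items():
        lowest = sorted(item_scores, key=item_scores.get)[:k_lowest]
        chosen = rng.sample(lowest, min(per_user, len(lowest)))
        edges.extend((user, item) for item in chosen)
    return edges

toy_scores = {0: {10: 0.9, 11: -1.2, 12: 0.1, 13: -0.8}}
print(sample_adversarial_edges(toy_scores, k_lowest=2, per_user=1))
```

The returned edges are injected into the training graph before the backbone GNNs are trained, and later handed to each unlearning method as the forget set.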
Therefore, after removing the adversarial edges and retraining, we found that their mean predicted scores are lower than those of regular negative instances.

Figure 2: Visualization of the predictions for positive, negative, and adversarial edges on the Movielens-1M and Yelp2018 datasets. Four methods with three backbone models are compared, including the IF-based method GIF, the partition-based method GraphEraser, retraining-based exact unlearning, and our proposed UnlearnRec, on the GCN, SGL, and SimGCL backbones. (a) Prediction distribution of GCN after unlearning on Movielens-1M; (b) SGL after unlearning on Movielens-1M; (c) SGL after unlearning on Yelp2018; (d) SimGCL after unlearning on Yelp2018.

• P-based Methods vs. IF-based Methods: In general, P-based (partition-based) methods demonstrate better controllability and unlearning efficacy than IF-based methods. From Fig. 2, it can be observed that the mean predicted scores for adversarial edges given by GraphEraser are closer to the mean of the negative instances, while with many backbones GIF still exhibits significantly higher predicted scores for adversarial edges than for negative edges (e.g., GIF for SGL on both datasets). Although GraphEraser outperforms GIF in terms of unlearning efficacy, it still falls short of meeting the requirements, as evident in the panel for SGL on the Yelp dataset, where the scores for adversarial edges remain higher than those of negative instances. This implies that even after unlearning, it is still possible to infer
the existence of these edges based on the embeddings.

Figure 3: Visualization of the prediction distributions for the Movielens-1M and Yelp2018 datasets before unlearning. (a) Distributions on ML-1M (GCN and SGL); (b) distributions on Yelp (SGL and SimGCL).

• Our UnlearnRec Paradigm: Our method demonstrates good unlearning efficacy on both datasets and across all three backbones. In all experiments, after unlearning, the mean predicted scores for adversarial edges are significantly lower than those of negative instances. Furthermore, the score distribution of the adversarial edges is enveloped by the score distribution of the negative instances, ensuring minimal information leakage about these edges after unlearning. In other words, it becomes impossible to infer the existence of these edges from the embeddings. Visually, the embedding distribution of our model closely resembles that of retraining.

4.6 Unlearning Efficiency Analysis (RQ5)
To assess the efficiency of our UnlearnRec paradigm, we conduct a comparative analysis of the various methods in terms of GPU memory and processing time when unlearning requests arrive. Fig. 4 shows the GPU memory cost and Tab. 4 shows the processing time, in which Ours(0) means no fine-tuning and Ours(1) denotes fine-tuning for 1 epoch.

Figure 4: Comparison of GPU memory usage of different methods on various backbones, based on two datasets: (a) Movielens-1M; (b) Gowalla.
The following discussion summarizes our findings.

Table 4: Time for processing unlearning requests of different methods, based on the SimGCL model and three datasets.

Dataset  | Retrain  | GraphEraser | GIF     | CEU    | Ours(0) | Ours(1) | Ours(2) | Ours(3)
ML-1M    | 1208.354 | 993.064     | 8.278   | 6.943  | 0.312   | 7.034   | 14.221  | 21.517
Yelp2018 | 3029.606 | 1834.413    | 146.384 | 52.216 | 0.468   | 19.71   | 39.201  | 56.023
Gowalla  | 839.07   | 795.995     | 131.753 | 39.089 | 0.432   | 3.37    | 6.734   | 10.026

• High memory cost of IF-based methods: IF methods typically require second-order gradients (the Hessian matrix) to estimate unlearning effects. This requires storing computation graphs and using the first-order graph to compute the Hessian, consuming significant GPU memory. With sufficient GPU memory for the full dataset's second-order graph, unlearning is fast (as with GIF/CEU on Movielens); without it, batching becomes necessary, increasing time and reducing IF accuracy. CEU utilizes HVP [22] and CG [26] to estimate the inverse Hessian, saving some computational resources. However, it still requires storing computation graphs, which are complex due to the stochastic structures of SSL-based GNNs; this increases computational complexity and reduces the estimation accuracy of IF-based unlearning methods.
• High time cost of P-based methods: P-based methods require retraining the shards affected by unlearning requests and then aggregating all shards. This takes time, especially when unlearning requests involve a large number of shards. P-based methods are relatively efficient in GPU memory: their consumption is comparable to retraining, or sometimes even lower, as they only retrain a portion of the graph. P-based methods are relatively controllable, and with patient training they can achieve good unlearning efficacy. However, due to the disruption of the graph's topology, they may suffer from poor utility performance; moreover, the prolonged retraining of multiple shards reduces their practicality.
• UnlearnRec's advantages: Our method strikes a good balance between memory and time.
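The HVP trick [22] mentioned above avoids materializing the Hessian explicitly; CEU applies Pearlmutter's exact autodiff form, while the finite-difference stand-in below only sketches the idea on a toy quadratic.

```python
def hvp(grad_fn, w, v, eps=1e-5):
    # Approximate H·v as (grad(w + eps·v) - grad(w)) / eps,
    # without ever forming the Hessian matrix itself.
    w_shift = [wi + eps * vi for wi, vi in zip(w, v)]
    g1, g0 = grad_fn(w_shift), grad_fn(w)
    return [(a - b) / eps for a, b in zip(g1, g0)]

# Toy quadratic f(w) = 0.5 * (3*w0^2 + w1^2): its Hessian is diag(3, 1),
# so H·v with v = (1, 1) should come out close to (3, 1).
grad = lambda w: [3.0 * w[0], 1.0 * w[1]]
print(hvp(grad, [1.0, 2.0], [1.0, 1.0]))  # approximately [3.0, 1.0]
```

Because only gradients are evaluated, memory grows with the parameter count rather than its square, which is the saving CEU exploits.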
The GPU memory required by our paradigm is comparable to that of P-based methods and retraining, and significantly smaller than that of IF-based methods. The processing time is also much shorter than that of P-based methods and retraining, while being comparable to IF methods on relatively large datasets. In our adversarial attack experiments, we achieve a satisfactory pre-trained IE in around 10-15 pre-training epochs, and when specific unlearning requests arise, fine-tuning can be completed in 3-5 epochs. These results can be found in Fig. 5.

Figure 5: Unlearning efficacy and performance vs. the pre-training and fine-tuning processes of UnlearnRec. (a) Unlearning efficacy and performance changes during pre-training (GCN and SimGCL); (b) changes during fine-tuning (GCN and SimGCL).

Figure 6: Case study on the unlearning effects for user-item interactions of similar categories, based on the Yelp2018 dataset.

4.7 Unlearning Case Study (RQ6)
Here, we provide some examples of
unlearning cases in Fig. 6. It can be observed that when a particular interaction is unlearned, its prediction score is significantly reduced (comparable to that of a negative interaction edge), and the prediction scores of similar items also decrease slightly. However, the scores for these similar items remain high enough for them to be considered positive instances by the recommendation system and potentially recommended. When a user actively unlearns a significant number of similar items, the probability of recommending items from that category may also drop noticeably. This phenomenon aligns with common sense: when a user repeatedly removes interactions with the same type of items, it usually implies that they no longer wish to be recommended similar items. This demonstrates our method's strong generalization capability in unlearning tasks, along with its sophisticated modeling of the complex dependencies and mutual influences between multiple unlearning requests.

4.8 Unlearning Ratio Study
Table 5 summarizes the unlearning results of removing varying proportions of edges from the original dataset, evaluated through both model utility preservation and unlearning effectiveness metrics. In reality, unlearning requests typically account for only a small fraction of the dataset.

Table 5: Utility and efficacy w.r.t. different unlearning ratios.

Unlearn Ratio | 0.10%  | 0.30%  | 0.50%  | 0.75%  | 1.00%  | 1.50%  | 2.00%
Recall        | 0.2564 | 0.2543 | 0.2504 | 0.2462 | 0.2411 | 0.2321 | 0.2302
NDCG          | 0.1682 | 0.1653 | 0.1641 | 0.1634 | 0.1615 | 0.1598 | 0.1557
MI-BF         | 4.8497 | 5.8521 | 6.1612 | 7.0485 | 7.8845 | 8.4836 | 9.7556
MI-NG         | 1.1943 | 1.1670 | 1.0715 | 1.0401 | 1.0283 | 1.0306 | 1.0288

From Tab. 5, it can be observed that as the unlearning proportion increases, the predictive performance of the recommendation system slightly decreases in order to maintain a high level of unlearning efficacy.
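The utility metrics reported in Table 5 are the standard top-N measures from Section 4.1.2; a minimal per-user sketch (item ids and ranking are illustrative):

```python
import math

def recall_at_n(ranked, relevant, n):
    # Fraction of the user's held-out (test) items recovered in the top-n list.
    hits = sum(1 for item in ranked[:n] if item in relevant)
    return hits / len(relevant)

def ndcg_at_n(ranked, relevant, n):
    # Binary-relevance NDCG@n with log2 position discounting.
    dcg = sum(1.0 / math.log2(i + 2) for i, item in enumerate(ranked[:n]) if item in relevant)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), n)))
    return dcg / idcg

ranked = [3, 7, 1, 9]   # full ranking over uninteracted + test items
relevant = {7, 9}       # this user's test-set positives
print(recall_at_n(ranked, relevant, 4))  # 1.0: both positives in the top-4
print(ndcg_at_n(ranked, relevant, 4))    # < 1: positives are not at the top
```

The reported numbers average these per-user values over all test users under the full-rank protocol.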
Moreover, our method maintains consistently high unlearning efficacy (MI-NG > 1 in all tests) when processing requests to remove varying proportions of user-item interactions, demonstrating the robustness of our approach.

5 Related Work
Partition-Based Approaches to Machine Unlearning. Machine unlearning [2, 3, 8] has garnered significant attention in recent years, with the goal of removing the influence of specific training data from model parameters while avoiding full retraining. Several approaches address these challenges across different domains. The SISA concept [7] partitions models into multiple shards and performs unlearning by retraining only the relevant shards. Similarly, [17] enhances collaborative filtering recommendation through optimized sequential training and hypergraph-based user clustering, enabling efficient recommendation unlearning.
Influence Function Methods for Unlearning. Other efforts improve time efficiency by eliminating the retraining process entirely. Influence function approaches represent a prominent direction, using mathematical methods to quickly estimate the unlearning impact and directly modify model parameters with minimal computational overhead. For instance, [12] introduces certified data removal using influence functions, while [15] proposes approximate unlearning based on influence analysis for linear and logistic models. [38] leverages influence functions for recommendation unlearning, achieving substantial speedup compared to retraining. A recent survey [18] systematizes design principles and taxonomies while highlighting challenges such as dynamic unlearning and interpretability, underscoring the unique complexities of recommendation unlearning compared to traditional unlearning paradigms.

6 Conclusion
In this work, we
introduce UnlearnRec, a novel pre-training paradigm for recommendation unlearning that is both model-agnostic and learnable. Our approach addresses the complex challenges of unlearning in modern recommender systems, particularly excelling with advanced SSL-based architectures where traditional methods fall short. Through a strategic two-phase process of comprehensive pre-training followed by targeted fine-tuning with carefully designed optimization objectives, we demonstrate through extensive empirical evaluation that UnlearnRec achieves a good balance: our experiments across multiple benchmark datasets confirm that the method delivers strong unlearning performance while maintaining computational efficiency and modest memory utilization.

References
[1] Privacy Act. 2000. Personal Information Protection and Electronic Documents Act. Department of Justice, Canada. Full text available at http://laws.justice.gc.ca/en/P-8.6/text.html (2000), 4356–4364.
[2] Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2021. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 141–159.
[3] Yinzhi Cao and Junfeng Yang. 2015. Towards making systems forget with machine unlearning. In IEEE Symposium on Security and Privacy (S&P). IEEE, 463–480.
[4] Chong Chen, Fei Sun, Min Zhang, and Bolin Ding. 2022. Recommendation unlearning. In Proceedings of the ACM Web Conference 2022. 2768–2777.
[5] Guoxuan Chen, Lianghao Xia, and Chao Huang. 2025. LightGNN: Simple Graph Neural Network for Recommendation. In International Conference on Web Search and Data Mining (WSDM). 549–558.
[6] Lei Chen, Le Wu, Richang Hong, Kun Zhang, and Meng Wang. 2020. Revisiting graph based collaborative filtering: A linear residual graph convolutional network approach.
In AAAI Conference on Artificial Intelligence (AAAI), Vol. 34. 27–34.
[7] Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, and Yang Zhang. 2022. Graph unlearning. In ACM SIGSAC Conference on Computer and Communications Security (CCS). 499–513.
[8] Weilin Cong and Mehrdad Mahdavi. 2022. GraphEditor: An Efficient Graph Representation Learning and Unlearning Approach. (2022).
[9] Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. 2023. Diffusion models in vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (2023).
[10] Yonatan Dukler, Benjamin Bowman, Alessandro Achille, Aditya Golatkar, Ashwin Swaminathan, and Stefano Soatto. 2023. SAFE: Machine unlearning with shard graphs. In IEEE/CVF International Conference on Computer Vision (ICCV). 17108–17118.
[11] Chen Gao, Yu Zheng, Nian Li, Yinfeng Li, Yingrong Qin, Jinghua Piao, Yuhan Quan, Jianxin Chang, Depeng Jin, Xiangnan He, et al. 2023. A survey of graph neural networks for recommender systems: Challenges, methods, and directions. ACM Transactions on Recommender Systems (TRS) 1, 1 (2023), 1–51.
[12] Chuan Guo, Tom Goldstein, Awni Hannun, and Laurens Van Der Maaten. 2019. Certified data removal from machine learning models. arXiv preprint arXiv:1911.03030 (2019).
[13] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR). 639–648.
[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems (NeurIPS) 33 (2020), 6840–6851.
https://arxiv.org/abs/2505.22649v1
[15] Zachary Izzo, Mary Anne Smart, Kamalika Chaudhuri, and James Zou. 2021. Approximate data deletion from machine learning models. In International Con- ference on Artificial Intelligence and Statistics (AISTATS) . PMLR, 2008–2016. [16] Chanhee Kwak, Junyeong Lee, Kyuhong Park, and Heeseok Lee. 2017. Let machines unlearn–machine unlearning and the right to be forgotten. (2017). [17] Yuyuan Li, Chaochao Chen, Xiaolin Zheng, Junlin Liu, and Jun Wang. 2024. Making recommender systems forget: Learning and unlearning for erasable recommendation. Knowledge-Based Systems 283 (2024), 111124. [18] Yuyuan Li, Xiaohua Feng, Chaochao Chen, and Qiang Yang. 2024. A Survey on Recommendation Unlearning: Fundamentals, Taxonomy, Evaluation, and Open Questions. arXiv preprint arXiv:2412.12836 (2024). [19] Zongwei Li, Lianghao Xia, and Chao Huang. 2024. Recdiff: diffusion model for social recommendation. In International Conference on Information and Knowledge Management (CIKM) . 1346–1355.[20] Zihan Lin, Changxin Tian, Yupeng Hou, and Wayne Xin Zhao. 2022. Improving graph collaborative filtering with neighborhood-enriched contrastive learning. InACM Web Conference (WWW) . 2320–2329. [21] Iyiola E Olatunji, Wolfgang Nejdl, and Megha Khosla. 2021. Membership inference attack on graph neural networks. In 2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA) . IEEE, 11–20. [22] Barak A Pearlmutter. 1994. Fast exact multiplication by the Hessian. Neural computation 6, 1 (1994), 147–160. [23] Formerly Data Protection. 2018. General data protection regulation (GDPR). Intersoft Consulting, Accessed in October 24, 1 (2018). [24] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2012. BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618 (2012). [25] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. 
Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning (ICML) . PMLR, 2256–2265. [26] Trond Steihaug. 1983. The conjugate gradient method and trust regions in large scale optimization. SIAM J. Numer. Anal. 20, 3 (1983), 626–637. [27] Wenjie Wang, Yiyan Xu, Fuli Feng, Xinyu Lin, Xiangnan He, and Tat-Seng Chua. 2023. Diffusion Recommender Model. International ACM SIGIR conference on research and development in Information Retrieval (SIGIR) (2023). [28] Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural graph collaborative filtering. In ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) . 165–174. [29] Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, and Xing Xie. 2021. Self-supervised graph learning for recommendation. In International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) . 726–735. [30] Jiancan Wu, Yi Yang, Yuchun Qian, Yongduo Sui, Xiang Wang, and Xiangnan He. 2023. GIF: A General Graph Unlearning Strategy via Influence Function. In ACM Web Conference (WWW) . 651–661. [31] Kun Wu, Jie Shen, Yue Ning, Ting Wang, and Wendy Hui Wang. 2023. Certi- fied edge unlearning for graph neural networks. In International Conference on Knowledge Discovery and Data Mining (KDD) . 2606–2617. [32] Kun Wu, Jie Shen, Yue Ning, and Wendy Hui Wang. 2022. Fast Yet Effective Graph Unlearning through Influence Analysis. (2022). [33] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. 2020. A Comprehensive Survey on Graph Neural Networks. IEEE
https://arxiv.org/abs/2505.22649v1
Transactions on Neural Networks and Learning Systems (TNNLS) 32, 1 (2020), 4–24. [34] Lianghao Xia, Chao Huang, Chunzhen Huang, Kangyi Lin, Tao Yu, and Ben Kao. 2023. Automated Self-Supervised Learning for Recommendation. In The ACM Web Conference (WWW) . 992–1002. [35] Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Lizhen Cui, and Quoc Viet Hung Nguyen. 2022. Are graph augmentations necessary? simple graph contrastive learning for recommendation. In International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) . 1294–1303. [36] An Zhang, Wenchang Ma, Xiang Wang, and Tat-Seng Chua. 2022. Incorporating bias-aware margins into contrastive loss for collaborative filtering. Advances in Neural Information Processing Systems (NeurIPS) 35 (2022), 7866–7878. [37] Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. 2019. Deep learning based recom- mender system: A survey and new perspectives. ACM computing surveys (CSUR) 52, 1 (2019), 1–38. [38] Yang Zhang, Zhiyu Hu, Yimeng Bai, Jiancan Wu, Qifan Wang, and Fuli Feng. 2024. Recommendation unlearning via influence function. ACM Transactions on Recommender Systems 3, 2 (2024), 1–23. [39] Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2020. Graph neural networks: A review of methods and applications. AI open 1 (2020), 57–81.
https://arxiv.org/abs/2505.22649v1
arXiv:2505.22655v1 [cs.LG] 28 May 2025

Position: Uncertainty Quantification Needs Reassessment for Large-language Model Agents

Michael Kirchhof (University of Tübingen, now at Apple), Gjergji Kasneci (Technical University of Munich), Enkelejda Kasneci (Technical University of Munich). Correspondence to: Michael Kirchhof <mail address see website>.

Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

Abstract

Large-language models (LLMs) and chatbot agents are known to provide wrong outputs at times, and it was recently found that this can never be fully prevented. Hence, uncertainty quantification plays a crucial role, aiming to quantify the level of ambiguity in either one overall number or two numbers for aleatoric and epistemic uncertainty. This position paper argues that this traditional dichotomy of uncertainties is too limited for the open and interactive setup that LLM agents operate in when communicating with a user, and that we need to research avenues that enrich uncertainties in this novel scenario. We review the literature and find that popular definitions of aleatoric and epistemic uncertainties directly contradict each other and lose their meaning in interactive LLM agent settings. Hence, we propose three novel research directions that focus on uncertainties in such human-computer interactions: underspecification uncertainties, for when users do not provide all information or define the exact task at the first go; interactive learning, to ask follow-up questions and reduce the uncertainty about the current context; and output uncertainties, to utilize the rich language and speech space to express uncertainties as more than mere numbers. We expect that these new ways of dealing with and communicating uncertainties will lead to LLM agent interactions that are more transparent, trustworthy, and intuitive.

1. Introduction

Large language model (LLM) agents such as chatbots are notorious for hallucinating at times (Bang et al., 2023; Guerreiro et al., 2023), that is, to make up a response that is incorrect.

Figure 1. The traditional view on uncertainties suggests a clear black-and-white dichotomy between aleatoric and epistemic uncertainty: epistemic uncertainty is uncertainty due to the model not knowing something, which can be reduced; aleatoric uncertainty is uncertainty due to the data being fundamentally unclear, which cannot be reduced. We argue that recent developments show this dichotomy is not that simple, and not helpful for developing LLM agents.

Recent research has shown that this behaviour is rooted in their very generative nature, such that we cannot expect LLM agents to stop hallucinating in the future (Banerjee et al., 2024; Kalai & Vempala, 2024; Xu et al., 2024b). Instead, there are numerous approaches to quantify the uncertainty that an LLM agent has in each of its statements, in order to bring transparency to which responses can be trusted and which require further investigation (Kadavath et al., 2022; Kapoor et al., 2024). Such uncertainty quantification methods either output one total uncertainty or, more recently, attempt to output individual values for aleatoric and epistemic uncertainty (Wimmer et al., 2023; Hüllermeier & Waegeman, 2021). Epistemic uncertainty is reducible uncertainty, such as when an agent could be trained
with more data from new regions of the input manifold to produce more definite outputs. Aleatoric uncertainty is irreducible uncertainty, when the data itself is too noisy or lacks features to make predictions that come without a risk of error, regardless of how good the model is. While these uncertainty quantification approaches that assign numerical aleatoric and epistemic uncertainty scalars to each output have been useful in more structured tasks such as classification (Murphy, 2012), we argue that they fail to capture the nuanced, multi-turn, and interactive nature of LLM-agent uncertainty in real-world applications.

Table 1. Overview of prominent schools of thought on aleatoric and epistemic uncertainties and their (conflicting) main principles.
•Epistemic uncertainty as number of possible models (Wimmer et al., 2023): epistemic uncertainty is how many models a learner believes to be fitting for the data.
•Epistemic uncertainty via disagreement (Houlsby et al., 2011; Gal et al., 2017; Kirsch, 2024): epistemic uncertainty is how much the possible models disagree about the outputs.
•Epistemic uncertainty via density (Mukhoti et al., 2023; Charpentier et al., 2022; Heiss et al., 2022; Liu et al., 2020; Van Amersfoort et al., 2020): epistemic uncertainty is high if we are far from seen examples and low within the train dataset.
•Epistemic uncertainty as leftover uncertainty (Kotelevskii & Panov, 2024; Kotelevskii et al., 2022; Lahlou et al., 2021; Depeweg et al., 2018): epistemic uncertainty is the (estimated) overall uncertainty minus the (estimated) aleatoric uncertainty.
•Aleatoric uncertainty as Bayes-optimal model (Schweighofer et al., 2024; Apostolakis, 1990; Helton, 1997; Bengs et al., 2022): aleatoric uncertainty is the risk that the best model inside a model class still has, assuming infinite data.
•Aleatoric uncertainty as pointwise ground-truth variance (Lahlou et al., 2021): aleatoric uncertainty is the variance that the output variable has on each input point; errors because the model class is too simple are not part of it.
•Aleatoric and epistemic as labels of the practitioner (Der Kiureghian & Ditlevsen, 2009; Faber, 2005): aleatoric and epistemic are just terms with which practitioners communicate which uncertainties they intend to reduce and which not.

LLM agents can and must handle uncertainties in a more advanced way. This is because they leave traditional well-structured setups with fixed-length inputs (one image, one question, or one vector of features) and fixed output formats (one segmentation map (Baumgartner et al., 2019), one answer string (Kwiatkowski et al., 2019b), a single vector of class probabilities (Murphy, 2012)), and are instead applied in much more open environments. In a chat interaction with a user where the user's question is underspecified and ambiguous, an LLM agent should not only output a numerical uncertainty score, but interact and ask clarification questions. If it detects that it lacks knowledge, it can use retrieval to gather additional information (Lewis et al., 2020), and if there are still remaining uncertainties, it can communicate its uncertainty not only as a number but explain why it is uncertain, which options there are, and what further information can help resolve the uncertainty. This is better suited to the dynamic multi-turn nature of chat interactions where, as we show below, what initially appears as epistemic uncertainty (e.g., lack of knowledge) can become aleatoric uncertainty if additional information fails to reduce the ambiguity, and aleatoric uncertainty (e.g., an underspecified question) can become a reducible epistemic uncertainty once the agent is able to ask clarifying questions. This motivates our position that dichotomic views on aleatoric and epistemic uncertainty are inapplicable to modern LLM agent interactions; instead, we need to research how uncertainties in user interactions are detected, handled, and communicated. We support this position by contributing three perspectives that aim to inspire creative rethinking of uncertainty quantification in the era of LLM agents:

1. Section 2 provides an in-depth review of the recent developments in aleatoric and epistemic uncertainty disentanglement and finds that they are fundamentally conflicting, already in toy examples. Even if future research could find non-conflicting definitions and decorrelated estimates, we argue that they are not applicable to the LLM agent setting, because in a multi-turn exchange between a user and an LLM agent it becomes blurry and ultimately subjective which uncertainties are reducible and which stay irreducible.
2. Section 3 proposes three novel research directions specifically for LLM agent interactions: (1) LLM agent interactions experience strong underspecification uncertainties, because not only is much information missing at first, but even the task itself might be unclear at the start of a conversation; (2) interactive learning can help reduce these underspecification uncertainties by interacting with the user; and (3) when it finally comes to communicating the uncertainty, we argue that LLM agents can utilize more advanced output uncertainties than mere answer probabilities.

3. Section 4 takes the counter-position and delineates in which areas traditional epistemic and aleatoric uncertainties and numerical uncertainties remain useful.

We believe that this position, and its counter-position, can help summarize the recent trends in uncertainty quantification and spark a discussion in the larger community.

Figure 2. In a binary prediction, the learner may have a belief that the Bernoulli probability is either high or low. Some schools of thought see this as a case of maximum epistemic uncertainty whereas others see it as nearly minimal epistemic uncertainty.

2. Where Traditional Uncertainties Fail

This section gives an introduction to the traditional research on uncertainties. We critically review the popular dichotomy of aleatoric and epistemic uncertainty and its various realizations, summarized in Table 1. We show that this strict dichotomy has inherent definition conflicts even in simplistic examples (Sections 2.1 to 2.3) and that it breaks apart in modern interactive LLM agent setups (Section 2.4). These arguments support the first part of our position, namely that the traditional terms and methods in uncertainty quantification are unsuitable for LLM agents.

2.1. Epistemic Uncertainty: Maximal or Minimal?

Understanding how to quantify epistemic uncertainty is crucial for making
reliable predictions, but definitions vary dramatically. This definition conflict can be seen directly in a simple example. Suppose a learner is parametrized by θ and models a binary classification problem. In this section, we focus on only one input sample x ∈ X, so the learner is simply tasked to estimate the probability p ∈ [0,1] of a Bernoulli distribution y|x ∼ Ber(p) with the parameter θ ∈ [0,1]. We train the learner with some data {y_n}_{n=1}^N, y_n ∈ Y = {0,1}, so that it forms a second-order distribution Q(θ) that tells which parameters it finds plausible for the data. In Bayesian terms, the parameter θ is a random variable Θ itself. Suppose that after training, the learner concludes that there are only two possible models left that could fit the data, either θ = 0 or θ = 1, i.e., Q is a mixture of two Diracs, as in Figure 2. Does this reflect a state of maximal or minimal epistemic uncertainty?

There are multiple, equally grounded answers to this question. On the one hand, one can define epistemic uncertainty as a form of disagreement. For example, epistemic uncertainty is often defined from a mutual information perspective as I_{P(y,θ|x)}(y; θ) (Houlsby et al., 2011; Gal et al., 2017; Kirsch, 2024). The mutual information tells how much the variance in Y can be reduced by reducing the variance in Θ. In other words, this epistemic uncertainty formula models how much the possible parameters θ ∼ Θ disagree in their prediction about Y. It follows that the two beliefs θ = 0 and θ = 1 of the learner maximally disagree, and the epistemic uncertainty is maximal.

On the other hand, epistemic uncertainty can be defined based on the number of plausible models that could explain the data. For instance, Wimmer et al. (2023) propose axiomatic definitions of epistemic uncertainty, where the uncertainty decreases as the set of possible models shrinks.
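The disagreement view can be checked numerically. The following is a minimal sketch of our own (the function names and the grid discretization of Q are illustrative assumptions, not from any cited work): for the two-Dirac belief, the mutual information is maximal at one bit, even though a uniform Q, which the model-counting view would regard as more epistemically uncertain, yields a smaller value.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a Bernoulli parameter p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def disagreement_epistemic(thetas, weights):
    """Epistemic uncertainty as mutual information I(y; theta):
    entropy of the mean prediction minus mean entropy of each belief."""
    thetas = np.asarray(thetas, dtype=float)
    weights = np.asarray(weights, dtype=float)
    predictive = entropy(np.sum(weights * thetas))  # H of averaged prediction
    expected = np.sum(weights * entropy(thetas))    # average H per model
    return predictive - expected

# Belief split between theta = 0 and theta = 1 (mixture of two Diracs)
two_diracs = disagreement_epistemic([0.0, 1.0], [0.5, 0.5])

# Belief spread uniformly over a fine grid of theta in [0, 1]
grid = np.linspace(0.0, 1.0, 1001)
uniform = disagreement_epistemic(grid, np.full_like(grid, 1.0 / grid.size))

print(two_diracs)  # maximal (about 1 bit) under the disagreement view
print(uniform)     # smaller, although far more models remain plausible
```

This makes the conflict concrete: the disagreement score ranks the two-Dirac belief as more epistemically uncertain than the uniform belief, while a model-counting view ranks them the other way around.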
Regardless of which specific epistemic uncertainty formula one derives from them, the axiomatic requirements imply that the epistemic uncertainty must be (close to) zero in our example, because the number of possible models has already been reduced to only two Diracs. In their axiom system, the epistemic uncertainty would be maximal if Q was a uniform distribution. The authors discuss this example in their paper, and, interestingly, there is also a public discussion between the disagreement and the axiomatic parties (Kirsch et al., 2024), which we encourage the curious reader to explore. We also note that being split between θ = 0 and θ = 1 is an extreme example for demonstration purposes, but the argument holds for any split belief between two points versus a belief over their convex hull.

Besides these two conflicting schools of thought, there is a third one that relates epistemic uncertainty to how well the training data supports the model's predictions. Under this perspective, epistemic uncertainty does not hinge simply on disagreement or the number of plausible models, but rather on how far we are from well-established data regions. Mukhoti et al. (2023), Charpentier et al. (2022), Heiss et al. (2022), Liu et al. (2020), and Van Amersfoort et al. (2020) define epistemic uncertainty as the (latent) density of the training data. This definition
has neither a maximal nor minimal uncertainty, since the density values depend on the normalization and prior over the whole space X (or, analogously, X × Y). Hence, in the above example, latent density estimators would answer neither with maximum nor minimum uncertainty but rather 'it depends', namely on how much training data was observed on or close to x in relative comparison to the remaining areas in X, and on the prior that defines how fast and to which value the epistemic uncertainty grows with the distance to the train data.

This shows that epistemic uncertainty is not a universally agreed-upon concept. Different equally well-grounded theoretical foundations lead to contrasting conclusions, even in the above simplistic example, which is both entirely theoretical (leaving estimation errors of the epistemic estimators aside) and inside one fixed context (the input and output spaces X, Y are fixed, and the model class covers all possible data-generating processes). Understanding this diversity of views is essential for navigating real-world scenarios. We will see next that these conflicts do not only occur with epistemic uncertainty.

2.2. Aleatoric Uncertainty: Reducible Irreducibility

Building on the definitional conflicts observed in epistemic uncertainty, we now turn to aleatoric uncertainty. Let us expand the above example. We now regard different inputs x ∈ [−1, 3], and use a linear model that estimates f(x, θ) = p(Y = 1 | X = x, θ). Recall that aleatoric uncertainty is often vaguely mentioned as the irreducible uncertainty that, even with infinite data, one cannot remove. But what does irreducible mean?
There are two major schools of thought: (1) Bayes-optimality proponents, who see aleatoric uncertainty as all residual uncertainty within a chosen model class, and (2) data-uncertainty proponents, who argue that changing the model class (using more complex functions) can reduce some uncertainties previously deemed aleatoric. This debate is not just philosophical. If a practitioner labels certain uncertainties as aleatoric (and thus, not worth investing in reducing), they may miss opportunities to improve predictions by considering richer model classes or additional data sources.

More precisely, Bayes-optimality proponents formalize aleatoric uncertainty as the uncertainty that even the Bayes-optimal model has (Hüllermeier & Waegeman, 2021). However, a Bayes-optimal model is always only optimal within its model class. To quote Schweighofer et al. (2024): "[t]his [definition of aleatoric uncertainty] assumes that the chosen model class can accurately represent the true predictive distribution". In our example, this would be the class of linear models. If the data-generating process was non-linear, like in Figure 3, this would create leftover risk, called model bias.¹ This is a simple mathematical fact that all theoreticians can agree on, but the question is: Is this irreducible? Bayes-optimality proponents would answer yes; even with infinite data the model bias cannot be reduced further, and as irreducible uncertainty, it should be counted towards aleatoric uncertainty. They define aleatoric uncertainty inside the given model class as "the uncertainty that arises due to predicting with the selected probabilistic model" (Schweighofer et
al., 2024; and similarly Apostolakis, 1990; Helton, 1997). This is also a corollary of axiomatic views that dictate that "in the limit, i.e., if the sample size goes to infinity, all epistemic uncertainty should disappear" (Bengs et al., 2022), so that model bias could not be part of the epistemic uncertainty and needs to be counted towards aleatoric uncertainty.

¹ Despite its name, model bias is an uncertainty. It is sometimes referred to as structural uncertainty.

Figure 3. Using a too simple model class, like a linear model to fit quadratic data, leads to wide uncertainty estimates. The question is whether this is irreducible, and thus aleatoric uncertainty. Bayes-optimality schools of thought would argue that yes, it is irreducible within the model class and thus aleatoric, whereas data-uncertainty schools of thought would argue that it is reducible when choosing a better-suited model class, hence it is not aleatoric.

However, as Hüllermeier & Waegeman (2021) point out, the choice of a stronger model class may also be considered a means to reduce uncertainty. Hence, the model bias would be a part of the epistemic uncertainty, and aleatoric uncertainty would only be that which no possible model could reduce because the data X lacks the features to make predictions about Y. In short, aleatoric uncertainty would be defined as data-uncertainty (the pointwise Bayes-risk, like in Lahlou et al., 2021), which is not the same as irreducible uncertainty (Bayes-optimal within its model class) (Hüllermeier et al., 2022).
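The disagreement between the two schools can also be seen numerically. Below is a minimal sketch of our own (the synthetic dataset, noise level, and helper name are illustrative assumptions): fitting the best linear model to quadratic data on x ∈ [−1, 3] leaves a sizable residual error that is "irreducible" within the linear class (the Bayes-optimality view), yet almost all of it disappears once the model class includes quadratics (the data-uncertainty view).

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic data-generating process with small observation noise,
# mirroring the linear-model-on-[-1, 3] example from the text.
x = rng.uniform(-1.0, 3.0, size=2000)
y = x**2 + rng.normal(scale=0.1, size=x.size)

def fit_mse(x, y, degree):
    """Least-squares polynomial fit; return mean squared residual."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return float(np.mean(residuals**2))

mse_linear = fit_mse(x, y, degree=1)     # best model inside the linear class
mse_quadratic = fit_mse(x, y, degree=2)  # richer class removes the model bias

print(mse_linear, mse_quadratic)
```

The residual error of the best linear fit is dominated by model bias, while the quadratic fit's residual error is close to the observation-noise floor; whether the difference between the two counts as aleatoric or epistemic is exactly what the two schools dispute.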
Because many frameworks define epistemic uncertainty as whatever remains after accounting for aleatoric uncertainty (Kotelevskii & Panov, 2024; Kotelevskii et al., 2022; Lahlou et al., 2021; Depeweg et al., 2018), the boundary we draw for aleatoric uncertainty (and predictive uncertainty), as well as its estimation, directly shapes our understanding of epistemic uncertainty. Drawing a border on the fuzzy cloud of aleatoric uncertainty directly determines what is considered epistemic uncertainty. This is a consequence of adopting a dichotomous view of uncertainty, where epistemic uncertainty encompasses everything that aleatoric uncertainty does not, without additional categories for factors such as model bias. In short, whether model bias and other forms of reducible uncertainty are classified as aleatoric or epistemic depends on the chosen definitions and model classes, blurring the once-clear line between 'irreducible' and 'reducible' sources of uncertainty.

2.3. Aleatoric and Epistemic Uncertainty Interplay

Having explored the definitional conflicts in epistemic and aleatoric uncertainties, we now examine what happens when these uncertainties are treated as separate entities. We find that they interplay, which challenges the notion that they can be disentangled. If aleatoric and epistemic uncertainty were distinct, orthogonal categories (and there were no further categories), one might hope to divide total predictive uncertainty into distinct parts: one reflecting inherent randomness in the data (aleatoric) and another reflecting gaps in our model's knowledge (epistemic). This is proposed by information-theoretical decompositions (Depeweg
et al., 2018; Mukhoti et al., 2023; Wimmer et al., 2023), Bregman decompositions (Pfau, 2013; Gupta et al., 2022; Gruber et al., 2023), or logit decompositions (Kendall & Gal, 2017). But does this clean division hold in practice? For example, Depeweg et al. (2018) define

H_{P(y|x)}(y) [predictive] = E_{Q(θ|x)}[ H_{P(y|θ,x)}(y) ] [aleatoric] + I_{P(y,θ|x)}(y; θ) [epistemic].  (1)

At first glance, the two summands resemble aleatoric uncertainty (average entropy of the prediction) and epistemic uncertainty (disagreement between ensemble members). However, recent empirical studies challenge this clear separation. For example, Mucsányi et al. (2024) demonstrate that, across a wide range of methods from deep ensembles over Gaussian processes to evidential deep learning, aleatoric and epistemic estimators produce values that are almost perfectly correlated, with rank correlations between 0.8 and 0.999, see Figure 4. This lack of independence means that what we label as 'aleatoric' uncertainty may still serve as a reliable signal for tasks previously thought to be purely 'epistemic', and vice versa. In practical terms, uncertainty measures intended for one purpose can end up performing well in another domain, blurring the boundaries of their intended roles. Consequently, they observe that the aleatoric uncertainty estimators are about as predictive for out-of-distribution detection (classically considered an epistemic task) as epistemic estimators, and the epistemic uncertainty estimators are as predictive of human annotator noise (an aleatoric task) as aleatoric estimators. Similar observations are made by de Jong et al. (2024) and Bouvier et al. (2022).

One may argue that these experimental observations are due to confounded approximation errors and that additive disentanglement is still possible in theory. However, Gruber et al.
(2023) assess the formula of a prediction interval of a linear model and denote that "even in this very simple model one cannot additively decompose the total [predictive] uncertainty into aleatoric and estimation uncertainty", as the aleatoric (here: observation noise) and epistemic uncertainty (here: approximation error) terms interact non-linearly.

Figure 4. When estimating aleatoric and epistemic uncertainties, they can often not be disentangled. This plot is reproduced with permission from Mucsányi et al. (2024), where Equation (1) was used to split aleatoric and epistemic uncertainty of a deep ensemble trained on ImageNet-1k. The estimates end up being nearly perfectly correlated, thus capturing the same uncertainty in practice.

The entanglement of the approximation error and the observation noise estimators goes further. As Hüllermeier et al. (2022) point out, "if an agent is epistemically uncertain, it is also uncertain about the (ground-truth) aleatoric uncertainty". This is observed in practice by Valdenegro-Toro & Mori (2022), who report that "aleatoric uncertainty estimation is unreliable in out-of-distribution settings, particularly for regression, with constant aleatoric variances being output by a model. [...] [A]leatoric and epistemic uncertainties interact with each other, which is unexpected and partially violates the definitions of each kind of uncertainty."²

These practical and theoretical observations lead to the same conclusion, namely, that aleatoric and epistemic uncertainty cannot be split exactly. Most evidence on this is on additive splits, but
the latter arguments on epistemic approximation uncertainty about the aleatoric uncertainty estimator (Hüllermeier et al., 2022; Valdenegro-Toro & Mori, 2022) also hold in more generality. To account for these dependencies between aleatoric and epistemic uncertainty estimators, recent methods (Mukhoti et al., 2023) propose to combine multiple estimators. They first gauge if an input point is too far from the training data. They then compute the uncertainty of the softmax classifier. Each uncertainty has the right to veto and abstain from prediction. This goes to show that often, the actual goal is not to have aleatoric and epistemic uncertainties. Rather, there is a practical task at hand, like abstention, and thinking from this task first and then using different uncertainty estimators, as tools, can solve this task without necessarily labeling one estimator aleatoric and another epistemic.

² Note that this is not in conflict with Mucsányi et al.'s (2024) findings: Mucsányi et al. find that the aleatoric estimators work well for OOD detection, because on OOD data the aleatoric estimator outputs more flat and thus constantly lower class probabilities, which is similar to what Valdenegro-Toro & Mori (2022) observe in regression.

2.4. From Epistemic to Aleatoric and Back: Uncertainties and Chatbots

The concepts of aleatoric and epistemic uncertainty become even more blurred when we go towards agents that interact with the real world. A chatbot is able to ask follow-up questions, which changes the features x responsible for the answer y. Let us denote a conversation up to a certain time point t ∈ ℕ as some (concatenated) string x_t, and let us assume, for simplicity, that the question of the conversation remains the same, so that the true answer distribution P(Y) does not change with t.
Now that the information that the chatbot gathered in a conversation x_t is dynamic in t, is the uncertainty about Y aleatoric or epistemic? One can argue to only look at fixed time points t in the conversation, where the information x_t collected up to this point poses an irreducible uncertainty for predicting y; hence the agent experiences aleatoric uncertainty. Its reduction via follow-up questions would just be a paradoxical illusion, as the point x_t in the input space X for which we calculate the (possibly lower) aleatoric uncertainty changes. However, one can equally argue that, even when still only looking at one fixed point x_t, it is possible to gain more information in future time steps by further questions or retrieval augmentation (Lewis et al., 2020), so this uncertainty is reducible and epistemic. An argument made by Der Kiureghian & Ditlevsen (2009) (following Faber (2005)), not for chatbots but for sequential modeling in engineering³, is that the uncertainty may be considered reducible and epistemic until a certain point t when the agent decides to stop asking follow-up questions, which is when it becomes irreducible and aleatoric. That is of course only until the agent finds a new follow-up question to ask and "the character of the aleatory uncertainty 'transforms' into epistemic uncertainty" (Der Kiureghian & Ditlevsen, 2009).

³ We change the example of Der Kiureghian & Ditlevsen (2009) from tabular data to chatbots, because in tabular data adding features changes the input space, so one could argue that it is no surprise that aleatoric and epistemic uncertainty change (Hüllermeier & Waegeman, 2021). In chatbots, the input space is the space of all strings of some finite length and remains the same, and only the input point changes with the timestep.

Der Kiureghian & Ditlevsen (2009) conclude that calling an uncertainty aleatoric or epistemic is ultimately a subjective choice made by the modeler that just serves to communicate which uncertainties they attempt to reduce and which not, rather than there being a true aleatoric and epistemic distinction. Similar uncertainties arising from unobserved variables have recently been further studied in the broad sense by Gruber et al. (2023). In the particular sense of natural language processing, these unobserved information paradoxes have led researchers to propose more general uncertainty frameworks that are "more informative and faithful than the popular aleatoric/epistemic dichotomy" because "[t]he boundary between the two is not always clear cut" (Baan et al., 2023).

3. New research avenues

The previous section shows that estimators for epistemic and aleatoric uncertainty, even when allowing for their ambiguous meanings, cannot handle problems that modern LLM agents or chatbots face. In this section, we detail the second part of our position and argue that research is required in three new directions to handle uncertainties that arise in these novel interaction environments.
We group them into three phases of the interaction, namely underspecification uncertainties, which arise because the input data and the demanded task are not entirely defined by the user; interactive learning, which allows the chatbot to reduce the underspecification and thus its uncertainties by asking follow-up questions; and lastly communicating output uncertainties in its answers that go beyond traditional probability values and utilize the rich expressions that language and speech offer.

3.1. Underspecification Uncertainties

In the previous decades of machine learning research, models were defined to solve a specific task, such as classifying a tabular attribute vector with a certain number of features into a finite number of pre-defined categories, or outputting a segmentation map of an image. In any case, the task was fixed and known. Large-language models mark the first multi-purpose tools. They are meant to be generalists, capable of responding to various tasks, which we denote as a finite or infinite set T. The challenge is that it is unknown which task a user has in mind, especially at the start of a conversation, which introduces a first form of task-underspecification uncertainty. Mathematically, the distribution over the possible tasks t influences the overall uncertainty over the next token y given the current context x via

P(y | x) = ∫_{t ∈ T} P(y | t) P(t | x) dt .   (2)

This shows that the unknown task is a new source of uncertainty, which is, similar to the example in Section 2.4, neither strictly aleatoric nor epistemic. Still, we need to distinguish the task-underspecification uncertainty from other uncertainties in the next-token distribution P(y | x), for example those that arise from lack of knowledge or from semantic equivalences in the token distribution, which all require a different treatment.

A second form of underspecification uncertainty is that due to missing input information (Rajpurkar et al., 2018; Zhang et al., 2024b). For example, a user may request "When did the first Harry Potter movie come out?", but without knowing key information such as the country, the answer distribution is highly uncertain. Min et al. (2020) show that ambiguities like this appear in 56% of the test questions in Natural Questions (Kwiatkowski et al., 2019a), a dataset of Google queries. This context-underspecification uncertainty is a new variant of the missing-variables problem, with the additional edge that there is not just a finite number of columns that could be missing like in a tabular problem, but an infinite number of possible additional context information that could be relevant to the problem. It is up to the LLM agent to select which underspecification uncertainty to tackle, which we describe in the next section. Notably, these context-underspecification uncertainties can already arise at a sentence level, when relations between words are vague or sentences can have multiple possible meanings or cultural expectations (Berry & Kamsties, 2004; Kolagar & Zarcone, 2024; Ulmer, 2024).

Current systems are incapable of dealing with these uncertainties, with Zhang et al. (2024c) finding recently that even the best benchmarked model, GPT-3.5-Turbo-16k, can only detect ambiguous questions with 57% accuracy, where 50% is random performance, and human annotators rate only 53% of the follow-up questions as helpful in resolving the ambiguity.
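A toy numerical instance of Eq. (2) with a discrete task set (all probabilities below are illustrative assumptions) shows how the unknown task inflates the answer uncertainty even when each task alone has a confident answer:

```python
import math

# Two hypothetical tasks the user might mean, and a toy
# vocabulary of three next tokens y.
p_task_given_x = {"translate": 0.5, "summarize": 0.5}   # P(t | x)
p_y_given_task = {                                       # P(y | t)
    "translate": [0.90, 0.05, 0.05],
    "summarize": [0.05, 0.90, 0.05],
}

# Eq. (2) with the integral replaced by a sum over the discrete task set:
# P(y | x) = sum_t P(y | t) * P(t | x)
p_y_given_x = [
    sum(p_task_given_x[t] * p_y_given_task[t][y] for t in p_task_given_x)
    for y in range(3)
]

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Each task-conditional answer distribution is confident, but the
# task-marginal one is not: the unknown task adds uncertainty.
assert entropy(p_y_given_x) > entropy(p_y_given_task["translate"])
```

Here the marginal distribution comes out as [0.475, 0.475, 0.05]: the model is torn between the top answers of the two tasks, even though it would be confident under either task alone.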
This shows that there is a large research gap in how to treat both task- and context-underspecification uncertainties for future interactive LLM agents. Importantly, this problem cannot be "trained away" by relying on a large-enough knowledge base that includes the answer to any question. These uncertainties arise at inference time, and are due to the user providing insufficient information. This means that even if a future LLM agent was trained on large-enough data to answer any task correctly, it will inevitably face these uncertainties, and research is required on how to detect, quantify, and handle them. We go into one research avenue to attempt to handle underspecification uncertainties in the next section.

3.2. Interactive Learning

A key characteristic that distinguishes LLM agents from traditional machine learning problems is that LLM agents can interact with the users to learn more about a problem. This could either be to learn information that the LLM has not been trained on (for example, events that happened after its knowledge cutoff, or information that is private to the user), or to gather further information that lets it choose among options it already knows, because there are underspecification uncertainties. In resemblance to active learning (Settles, 2009), we call this avenue interactive learning, where an agent chooses follow-up questions to be able to provide a better answer to the current user interaction.

There are two key characteristics that distinguish interactive learning from active learning: First, the learning is only in order to better solve the current problem x, rather than learning about other inputs to improve the overall model θ as in active learning. Second, in interactive learning, a user is queried for the information rather than a database, which opens research questions in user modeling and human-computer interaction.

First, the agent needs to take into account which information the user can provide. For example, most task-underspecification uncertainties can be resolved by asking the user to clarify their intentions (Zhang et al., 2024a), but missing information that causes context-underspecification uncertainty may also be unknown to the user. Second, we require human-computer interaction research to judge which follow-up questions to ask and how long to keep asking them (Zhang & Choi, 2023; Pang et al., 2024). As extreme examples, the LLM may 1) ask too many questions, so that the user loses interest, 2) ask no questions and output a vague or very long answer that covers all possible uncertainties, or 3) impute missing information from the agent's priors and provide an answer that implicitly depends on these unknown assumptions. Clearly, none of these strategies is optimal. The future question for human-computer interaction thus will be to find the ideal middle ground that reduces the output uncertainty via follow-up questions without derailing the user interaction. Even if the LLM asked these questions not to a user but to a retrieval system, the ideal trade-off in terms of computational efficiency and latency would remain up for debate.
One side-challenge is that a default LLM agent may ask unnecessary questions, because it has learned from its training data to ask certain questions in certain contexts, although internally it already has the required information. As in active learning, interactive learning approaches may thus incorporate (estimated) mutual-information reductions to choose the best questions to ask, and to decide which answers can already be predicted from the given context with high-enough certainty.

Interactive learning clearly helps to reduce uncertainties and thus improve the accuracy and personalization (Andukuri et al., 2024) of the overall system. But besides this obvious advantage, a second important objective hides here, which is accessibility. While a versatile user may provide a clear description of a task with all mandatory side-information to solve a problem, such as a computer scientist providing a clear description of a specified function to add to a given codebase, a less versatile user, like an elderly person or a young student that uses an LLM agent for education (Kasneci et al., 2023), may require more guidance by the LLM agent. We need not only more research on the optimal strategy to interact with these users, but also more datasets of these user groups, which are under-represented in the current benchmarks that focus mostly on clearly defined one-answer interactions.

3.3. Output Uncertainties

Once the LLM agent has determined its underspecification uncertainties, and possibly reduced some of them via interactive learning, it is tasked to communicate all leftover uncertainty in its answer. In its most popular traditional form, this would be a probability shown next to the answer (Lin et al., 2022; Band et al., 2024), or any transformation of it, like a binary flag when the LLM is uncertain or a verbalized output that puts the number into pre-defined words (Yona et al., 2024). These questions of calibration and numerical uncertainty scores have attracted much attention in early LLM research (Huang et al., 2024; Band et al., 2024) because of their well-founded and well-researched roots in fields like classification. However, we argue that LLM agents have the potential to communicate uncertainties in a much richer way, because they can utilize the whole space of strings rather than the confined space of a single scalar. We use the term output uncertainties for the research that searches for the best ways to communicate leftover uncertainties to a user.

A major new opportunity is to communicate not only the overall level of uncertainty, but also which competing possibilities there are, why the LLM is uncertain between them, and what could reduce the uncertainty. This can be thought of as an extension of conformal sets and credible intervals (Lee, 1989; Angelopoulos et al., 2023; Kirchhof et al., 2023), with the added challenge of not only outputting a set of answers, but one coherent answer that comprises the possible answers, and with the added opportunity to integrate explainability on why the LLM agent is uncertain between the possibilities (Xu et al., 2024a).
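For reference, the classical split-conformal set construction that such an extension would generalize can be sketched as follows. This is a textbook recipe under illustrative calibration numbers, not code from the works cited above:

```python
import math

def conformal_quantile(cal_scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank with the +1 correction
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(probs, qhat):
    """All candidate answers whose nonconformity score 1 - p is below qhat."""
    return {y for y, p in enumerate(probs) if 1 - p <= qhat}

# Calibration data: nonconformity score = 1 - model probability of the
# true answer on held-out examples (the numbers here are made up).
cal_scores = [0.10, 0.20, 0.05, 0.30, 0.15, 0.25, 0.10, 0.20, 0.35, 0.05]
qhat = conformal_quantile(cal_scores, alpha=0.1)

# At test time, the returned set contains the true answer with
# probability at least 1 - alpha (under exchangeability).
print(prediction_set([0.70, 0.25, 0.05], qhat))  # {0}
```

The open challenge described above is to go from such a bare set of candidates to one coherent natural-language answer that covers the same possibilities.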
Similar to the issue raised in the previous section, it has to be made sure that these possibilities faithfully reflect the actual internal belief state of the LLM, and are not just common lists of possibilities that the LLM encountered in similar examples in its training data. To ensure this, we encourage the development of metrics similar to those in conformal prediction that measure whether the output answer reflects a set of possibilities that is as small as possible but at the same time large enough to cover all likely possibilities. Since the submission of the first draft of this paper, metrics like SelfReflect (Kirchhof et al., 2025) have been proposed for this research avenue, along with new methods to generate such summarization strings (Zhang & Zhang, 2025; Yang et al., 2025; Yoon et al., 2025).

[Figure 5. ArXiv preprints in computer science, statistics, and math that include the terms "aleatoric" or "epistemic" in their title or abstract (x-axis: year, 1996-2024; y-axis: number of papers, 0-300). The usage is at an all-time high, with roughly one paper being published each day in 2024.]

An LLM agent must also communicate which possibility it finds most likely and which less likely. One avenue to communicate such characteristics is to use verbalized uncertainties such as "most likely" or "perhaps" (Chaudhry et al., 2024; Wang et al., 2024).
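A minimal probability-to-phrase mapping could look like the following sketch. The bands and wordings are illustrative assumptions, not an empirically validated scale, and would need the alignment with human interpretation discussed next:

```python
# Illustrative mapping from a numerical confidence to a hedging phrase.
# The thresholds are assumptions for this sketch, not a validated scale.
VERBAL_SCALE = [
    (0.95, "almost certainly"),
    (0.75, "most likely"),
    (0.50, "probably"),
    (0.25, "perhaps"),
]

def verbalize(answer: str, confidence: float) -> str:
    """Render a scalar confidence as a hedged natural-language answer."""
    for threshold, phrase in VERBAL_SCALE:
        if confidence >= threshold:
            return f"The answer is {phrase} {answer}."
    return f"The answer might be {answer}, but this is very uncertain."

print(verbalize("2001", 0.80))  # The answer is most likely 2001.
```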
https://arxiv.org/abs/2505.22655v1
This is mostly a question of fine-tuning the LLM, but also a question of aligning the meaning of those words with what humans interpret them as, necessitating human-computer interaction research (Van Der Bles et al., 2019; Belém et al., 2024; Steyvers et al., 2024). When the LLM agent does not communicate with the user via text but via speech, it has even further ways to explicate its uncertainty. The communication of uncertainties via phonetic features (Ulmer, 2024) gives a promising path to utter uncertainties subtly and intuitively to the user. Overall, LLM agents can benefit substantially from the medium of their outputs, either strings or speech, to better communicate uncertainties. As research on this topic is limited, we encourage the field to find benchmarks and metrics that go beyond scalar uncertainties, so that once these benchmarks are established, the development of uncertainty output methods can be pursued in a quantitative way.

4. Alternative Views

Finally, we discuss counter-positions to our arguments and recommendations. We first discuss how aleatoric and epistemic uncertainty are still valuable research avenues (Section 4.1), even for the (pre-)training of LLM agents, how interactive learning could be seen as a normal next-token prediction problem as opposed to a new research avenue (Section 4.2), and in which cases it is beneficial to stick to probabilities to quantify uncertainties, rather than outputting strings that explain the possible answers the agent is uncertain about (Section 4.3).

4.1. Aleatoric and Epistemic Uncertainty are Still Valid

One can argue that despite their conflicting definitions, 'aleatoric uncertainty' and 'epistemic uncertainty' still have value as terms, and that we should not abandon research on them or their use as labels.
We indeed agree with this position: Aleatoric and epistemic uncertainty are terms that are widespread, with Figure 5 showing that roughly one preprint is published on arXiv each day that mentions them in the title or abstract. Further, while the quantification of aleatoric and epistemic uncertainty may be less applicable in interactive chat examples (Section 2.4), they still have an importance in the training phase, inside and outside of LLMs, and in particular in choosing which points to query in active learning. We agree that the two terms do allow one to easily communicate the rough idea or intention behind an uncertainty quantification approach. Still, when using them, we encourage authors to follow up by defining what exactly one intends to use an aleatoric or epistemic uncertainty estimator for and how exactly one defines aleatoric and epistemic uncertainty, to circumvent the naming conflicts in Table 1.

4.2. Interactive Learning Can Be Solved By Training on Interaction Data

A counter-position to conducting research on how to learn interactively from a user (Section 3.2) is that interaction behavior could also be learned as a classical next-token prediction task, on interaction datasets (Aliannejadi et al., 2021). For example, one can imagine customer service interactions where the agent demands certain data from the user to fill in a form. Since these interactions are standardized, there should be plenty of training data similar to or even equal to the current context, so that the LLM agent knows which follow-up questions to ask just by reciting past interactions. There would be no need for researching interactive learning.

In this specific scenario, where we have large amounts of interaction training data, we agree that the problem can be mostly addressed by next-token prediction. However, it is still mandatory to prevent the agent from following a trained pattern blindly. For example, a question the LLM is about to ask the user just because it is often asked in this context may have already been answered by the user. The paramount task here remains that the agent's questions need to honestly reflect its internal knowledge. To date, we lack metrics to capture this, so research on whether the problem is solvable by next-token prediction alone is still required. Further, the human-computer interaction research questions remain: Even if we have datasets of past user interactions, we must ensure that these interactions represent optimal human-computer interaction behavior.

4.3. Uncertainties Should be Output as Numbers

In Section 3.3, we make the case that LLM agents need to learn to outline their uncertainties in text rather than in numbers. However, there are also situations in which well-calibrated numbers are preferable, namely when the agent is not interacting with a human user but with another automated system. For tasks such as abstained prediction, it is simpler to define a threshold value on a numerical predicted uncertainty than to re-interpret what an uttered uncertainty explanation string may indicate. We believe that the two systems, uncertainty as a number and uncertainty as a string, can coexist, since they are intended for different environments.
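For illustration, the machine-to-machine case reduces to a simple threshold rule; the names and the threshold value here are hypothetical:

```python
from typing import Optional

def automated_decision(answer: str, confidence: float,
                       threshold: float = 0.9) -> Optional[str]:
    """Abstained prediction for a downstream automated system:
    pass the answer through only if the predicted confidence
    clears the threshold, otherwise abstain."""
    if confidence >= threshold:
        return answer
    return None  # abstain; caller routes to a fallback or human review

# A verbalized phrase like "most likely" would first need re-interpretation;
# a calibrated number can be thresholded directly.
print(automated_decision("route A", 0.97))  # route A
print(automated_decision("route A", 0.55))  # None
```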
In a human-computer interaction setup, we expect that an uncertainty that is outlined in text, along with its different possibilities and the reasoning behind them, will provide a better information base to a human decision maker than a mere number, where users may blindly trust outputs when the certainty is high enough. A "blind trust" behavior like this is reported in user testimonies in Kapoor et al. (2024, App. G.3).

5. Conclusion

This position paper critically assesses the recent literature on aleatoric and epistemic uncertainty decompositions. Through our examples and references to quantitative and theoretical findings in the literature, we have shown that binarizing uncertainties into either aleatoric or epistemic can create conflicts, and in particular is not supportive of many future applications related to large language model agents. Instead, we expect that research on underspecification uncertainties, interactive learning, and output uncertainties will lead to more transparent, trustworthy, and accessible LLM agents. We encourage the field to take first steps in these directions to build LLM agents that are honest and predictable in their outputs, even when facing complicated contexts with missing data, as they are common when interacting with users and the outside world.

Acknowledgements

The authors would like to thank Kajetan Schweighofer and
Bálint Mucsányi. The exchanges on what the true nature of aleatoric and epistemic uncertainty is, if there is any at all, have motivated and shaped this work.

Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. We have highlighted potential benefits in transparency and accessibility if our research recommendations are followed in Section 3.2. There are many more potential societal consequences of our work, none of which we feel must be specifically highlighted.

References

Mohammad Aliannejadi, Julia Kiseleva, A. Chuklin, Jeffrey Dalton, and M. Burtsev. Building and evaluating open-domain dialogue corpora with clarifying questions, 2021.

Chinmaya Andukuri, Jan-Philipp Fränken, Tobias Gerstenberg, and Noah D. Goodman. Star-gate: Teaching language models to ask clarifying questions, 2024.

Anastasios N Angelopoulos, Stephen Bates, et al. Conformal prediction: A gentle introduction. Foundations and Trends® in Machine Learning, 16(4):494–591, 2023.

George Apostolakis. The concept of probability in safety assessments of technological systems. Science, 250(4986):1359–1364, 1990.

Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, and Wilker Aziz. Uncertainty in natural language generation: From theory to applications. arXiv preprint arXiv:2307.15703, 2023.

Neil Band, Xuechen Li, Tengyu Ma, and Tatsunori Hashimoto. Linguistic calibration of long-form generations, 2024.

Sourav Banerjee, Ayushi Agarwal, and Saloni Singla. LLMs will always hallucinate, and we need to live with this. arXiv preprint arXiv:2409.05746, 2024.

Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023, 2023.

Christian F Baumgartner, Kerem C Tezcan, Krishna Chaitanya, Andreas M Hötker, Urs J Muehlematter, Khoschy Schawkat, Anton S Becker, Olivio Donati, and Ender Konukoglu. PHiSeg: Capturing uncertainty in medical image segmentation. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part II, pp. 119–127. Springer, 2019.

Catarina G. Belém, Markelle Kelly, Mark Steyvers, Sameer Singh, and Padhraic Smyth. Perceptions of linguistic uncertainty by language models and humans. arXiv preprint arXiv:2407.15814, 2024.

Viktor Bengs, Eyke Hüllermeier, and Willem Waegeman. Pitfalls of epistemic uncertainty quantification through loss minimisation. Advances in Neural Information Processing Systems, 35:29205–29216, 2022.

Daniel M. Berry and Erik Kamsties. Ambiguity in Requirements Specification, pp. 7–44. Springer US, 2004.

Victor Bouvier, Simona Maggio, Alexandre Abraham, and Léo Dreyfus-Schmidt. Towards clear expectations for uncertainty estimation. arXiv preprint arXiv:2207.13341, 2022.

Bertrand Charpentier, Oliver Borchert, Daniel Zügner, Simon Geisler, and Stephan Günnemann. Natural Posterior Network: Deep Bayesian Predictive Uncertainty for Exponential Family Distributions. In International Conference on Learning Representations (ICLR), 2022.

Arslan Chaudhry, Sridhar Thiagarajan, and Dilan Görür. Finetuning language
models to emit linguistic expressions of uncertainty, 2024.

Ivo Pascal de Jong, Andreea Ioana Sburlea, and Matias Valdenegro-Toro. How disentangled are your classification uncertainties? arXiv preprint arXiv:2408.12175, 2024.

Stefan Depeweg, Jose-Miguel Hernandez-Lobato, Finale Doshi-Velez, and Steffen Udluft. Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning. In International Conference on Machine Learning, pp. 1184–1193. PMLR, 2018.

Armen Der Kiureghian and Ove Ditlevsen. Aleatory or epistemic? Does it matter? Structural Safety, 31(2):105–112, 2009.

Michael Havbro Faber. On the treatment of uncertainties and probabilities in engineering decision analysis. Journal of Offshore Mechanics and Arctic Engineering, 127(3):243–248, 2005.

Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep Bayesian active learning with image data. In International Conference on Machine Learning, pp. 1183–1192. PMLR, 2017.

Cornelia Gruber, Patrick Oliver Schenk, Malte Schierholz, Frauke Kreuter, and Göran Kauermann. Sources of uncertainty in machine learning – a statisticians' view. arXiv preprint arXiv:2305.16703, 2023.

Nuno M Guerreiro, Duarte M Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, Pierre Colombo, and André FT Martins. Hallucinations in large multilingual translation models. Transactions of the Association for Computational Linguistics, 11:1500–1517, 2023.

Neha Gupta, Jamie Smith, Ben Adlam, and Zelda Mariet. Ensembling over classifiers: a bias-variance perspective. arXiv preprint arXiv:2206.10566, 2022.

Jakob M Heiss, Jakob Weissteiner, Hanna S Wutte, Sven Seuken, and Josef Teichmann. NOMU: Neural optimization-based model uncertainty. In International Conference on Machine Learning (ICML), 2022.

Jon C Helton. Uncertainty and sensitivity analysis in the presence of stochastic and subjective uncertainty. Journal of Statistical Computation and Simulation, 57(1-4):3–76, 1997.

Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745, 2011.

Yukun Huang, Yixin Liu, Raghuveer Thirukovalluru, Arman Cohan, and Bhuwan Dhingra. Calibrating long-form generations from large language models, 2024.

Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning, 110:457–506, 2021.

Eyke Hüllermeier, Sébastien Destercke, and Mohammad Hossein Shaker. Quantification of credal uncertainty in machine learning: A critical analysis and empirical comparison. In Uncertainty in Artificial Intelligence, pp. 548–557. PMLR, 2022.

Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. Language models (mostly) know what they know. arXiv, 2022.

Adam Tauman Kalai and Santosh S Vempala. Calibrated language models must hallucinate. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, pp. 160–171, 2024.

Sanyam Kapoor, Nate Gruver, Manley Roberts, Katherine Collins, Arka Pal, Umang Bhatt, Adrian Weller, Samuel Dooley, Micah Goldblum, and Andrew
Gordon Wilson. Large language models must be taught to know what they don't know. arXiv preprint arXiv:2406.08391, 2024.

Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274, 2023.

Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? Advances in Neural Information Processing Systems (NeurIPS), 30, 2017.

Michael Kirchhof, Enkelejda Kasneci, and Seong Joon Oh. Probabilistic contrastive learning recovers the correct aleatoric uncertainty of ambiguous inputs. International Conference on Machine Learning (ICML), 2023.

Michael Kirchhof, Luca Füger, Adam Goliński, Eeshan Gunesh Dhekane, Arno Blaas, and Sinead Williamson. Self-reflective uncertainties: Do LLMs know their internal answer distribution?, 2025. URL https://arxiv.org/abs/2505.20295.

Andreas Kirsch. (Implicit) ensembles of ensembles: Epistemic uncertainty collapse in large models. arXiv preprint arXiv:2409.02628, 2024.

Andreas Kirsch, Lisa Wimmer, and David Holzmüller. Twitter discussion on epistemic and aleatoric uncertainties, 2024. URL https://twitter.com/BlackHC/status/1817556167687569605. Accessed on 04.08.2024.

Zahra Kolagar and Alessandra Zarcone. Aligning uncertainty: Leveraging LLMs to analyze uncertainty transfer in text summarization. In Proceedings of the 1st Workshop on Uncertainty-Aware NLP (UncertaiNLP 2024), pp. 41–61, 2024.

Nikita Kotelevskii and Maxim Panov. Predictive uncertainty quantification via risk decompositions for strictly proper scoring rules. arXiv preprint arXiv:2402.10727, 2024.

Nikita Kotelevskii, Aleksandr Artemenkov, Kirill Fedyanin, Fedor Noskov, Alexander Fishkov, Artem Shelmanov, Artem Vazhentsev, Aleksandr Petiushko, and Maxim Panov. Nonparametric uncertainty quantification for single deterministic neural network. Advances in Neural Information Processing Systems, 35:36308–36323, 2022.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7, 2019a.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019b.

Salem Lahlou, Moksh Jain, Hadi Nekoei, Victor Ion Butoi, Paul Bertin, Jarrid Rector-Brooks, Maksym Korablyov, and Yoshua Bengio. DEUP: Direct epistemic uncertainty prediction. arXiv preprint arXiv:2102.08501, 2021.

Peter M Lee. Bayesian Statistics. Oxford University Press London, 1989.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.

Stephanie Lin, Jacob Hilton, and Owain Evans. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334, 2022.

Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax Weiss, and Balaji Lakshminarayanan. Simple and principled uncertainty estimation with deterministic deep learning via distance
awareness. Advances in Neural Information Processing Systems (NeurIPS), 2020.

Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. AmbigQA: Answering ambiguous open-domain questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.

Bálint Mucsányi, Michael Kirchhof, and Seong Joon Oh. Benchmarking uncertainty disentanglement: Specialized uncertainties for specialized tasks. Advances in Neural Information Processing Systems (NeurIPS), 2024.

Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip HS Torr, and Yarin Gal. Deep deterministic uncertainty: A new simple baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24384–24394, 2023.

Kevin P Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.

Jing-Cheng Pang, Heng-Bo Fan, Pengyuan Wang, Jia-Hao Xiao, Nan Tang, Si-Hang Yang, Chengxing Jia, Sheng-Jun Huang, and Yang Yu. Empowering language models with active inquiry for deeper understanding, 2024.

David Pfau. A generalized bias-variance decomposition for Bregman divergences. Unpublished manuscript, 2013.

Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822, 2018.

Kajetan Schweighofer, Lukas Aichberger, Mykyta Ielanskyi, and Sepp Hochreiter. On information-theoretic measures of predictive uncertainty. arXiv preprint arXiv:2410.10786, 2024.

Burr Settles. Active learning literature survey. 2009.

M. Steyvers, Heliodoro Tejeda Lemus, Aakriti Kumar, Catarina Belém, Sheer Karny, Xinyue Hu, Lukas Mayer, and P. Smyth. The calibration gap between model and human confidence in large language models, 2024.

Dennis Ulmer. On uncertainty in natural language processing. arXiv preprint arXiv:2410.03446, 2024.

Matias Valdenegro-Toro and Daniel Saromo Mori. A deeper look into aleatoric and epistemic uncertainty disentanglement. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2022.

Joost Van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. Uncertainty estimation using a single deep deterministic neural network. In International Conference on Machine Learning (ICML). PMLR, 2020.

Anne Marthe Van Der Bles, Sander Van Der Linden, Alexandra LJ Freeman, James Mitchell, Ana B Galvao, Lisa Zaval, and David J Spiegelhalter. Communicating uncertainty about facts, numbers and science. Royal Society Open Science, 6(5):181870, 2019.

Peiqi Wang, Barbara D. Lam, Yingcheng Liu, Ameneh Asgari-Targhi, Rameswar Panda, William M. Wells, Tina Kapur, and Polina Golland. Calibrating expressions of certainty, 2024.

Lisa Wimmer, Yusuf Sale, Paul Hofman, Bernd Bischl, and Eyke Hüllermeier. Quantifying aleatoric and epistemic uncertainty in machine learning: Are conditional entropy and mutual information appropriate measures? In Uncertainty in Artificial Intelligence (UAI), 2023.

Tianyang Xu, Shujin Wu, Shizhe Diao, Xiaoze Liu, Xingyao Wang, Yangyi Chen, and Jing Gao. SaySelf: Teaching LLMs to express confidence with self-reflective rationales, 2024a. URL https://arxiv.org/abs/2405.20974.

Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. Hallucination is inevitable: An innate limitation of large language models. arXiv preprint arXiv:2401.11817, 2024b.

Ruihan Yang, Caiqi Zhang, Zhisong Zhang, Xinting Huang, Dong Yu, Nigel Collier, and Deqing Yang. UNCLE: Uncertainty expressions in long-form generation, 2025. URL https://arxiv.org/abs/2505.16922.

Gal Yona, Roee Aharoni, and Mor Geva. Can large language models
https://arxiv.org/abs/2505.22655v1
faithfully express their intrinsic uncertainty in words? In Yaser Al-Onaizan, Mohit Bansal, and Yun- Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing , 2024. Dongkeun Yoon, Seungone Kim, Sohee Yang, Sunkyoung Kim, Soyeon Kim, Yongil Kim, Eunbi Choi, Yireun Kim, and Minjoon Seo. Reasoning models better express their confidence, 2025. URL https://arxiv.org/abs/ 2505.14489 . Boxuan Zhang and Ruqi Zhang. CoT-UQ: Improving response-wise uncertainty quantification in llms with chain-of-thought, 2025. URL https://arxiv.org/ abs/2502.17214 . Michael J.Q. Zhang and Eunsol Choi. Clarify when neces- sary: Resolving ambiguity through interaction with lms, 2023. Michael J.Q. Zhang, W. B. Knox, and Eunsol Choi. Model- ing future conversation turns to teach llms to ask clarify- ing questions, 2024a. Tong Zhang, Peixin Qin, Yang Deng, Chen Huang, Wen- qiang Lei, Junhong Liu, Dingnan Jin, Hongru Liang, and Tat-Seng Chua. Clamber: A benchmark of identifying and clarifying ambiguous information needs in large lan- guage models, 2024b. Tong Zhang, Peixin Qin, Yang Deng, Chen Huang, Wen- qiang Lei, Junhong Liu, Dingnan Jin, Hongru Liang, and Tat-Seng Chua. CLAMBER: A benchmark of identify- ing and clarifying ambiguous information needs in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , 2024c. 13
https://arxiv.org/abs/2505.22655v1
3DLLM-Mem: Long-Term Spatial-Temporal Memory for Embodied 3D Large Language Model

Wenbo Hu1, Yining Hong1, Yanjun Wang1, Leison Gao1, Zibu Wei1, Xingcheng Yao1, Nanyun Peng1, Yonatan Bitton2, Idan Szpektor2, Kai-Wei Chang1
1University of California, Los Angeles  2Google Research
https://3dllm-mem.github.io

Figure 1: We propose 3DLLM-MEM, a memory-enhanced 3D embodied agent that explores and incorporates feedback from the environment, interacts with objects, and incrementally builds and maintains a task-relevant long-term memory throughout its trajectory. For illustration purposes, agents from multiple time steps are shown simultaneously.

Abstract

Humans excel at performing complex tasks by leveraging long-term memory across temporal and spatial experiences. In contrast, current Large Language Models (LLMs) struggle to effectively plan and act in dynamic, multi-room 3D environments. We posit that part of this limitation is due to the lack of proper 3D spatial-temporal memory modeling in LLMs. To address this, we first introduce 3DMEM-BENCH, a comprehensive benchmark comprising over 26,000 trajectories and 2,892 embodied, question-answering, and captioning tasks, designed to evaluate an agent's ability to reason over long-term memory in 3D environments.
Second, we propose 3DLLM-MEM, a novel dynamic memory management and fusion model for embodied spatial-temporal reasoning and actions in LLMs. Our model uses working memory tokens, which represent current observations, as queries to selectively attend to and fuse the most useful spatial and temporal features from episodic memory, which stores past observations and interactions. Our approach allows the agent to focus on task-relevant information while maintaining memory efficiency in complex, long-horizon environments. Experimental results demonstrate that 3DLLM-MEM achieves state-of-the-art performance across various tasks, outperforming the strongest baselines by 16.5% in success rate on 3DMEM-BENCH's most challenging in-the-wild embodied tasks.

Contact: whu@cs.ucla.edu. Preprint. arXiv:2505.22657v1 [cs.CV], 28 May 2025.

1 Introduction

Picture yourself traversing an unfamiliar home, as illustrated in Figure 1, on a mission to explore multiple rooms and evaluate various gift boxes to find the most suitable one for wrapping a teddy bear. As you navigate from room to room, your brain instinctively creates a 3D cognitive map of the environment, maintains a working memory of objects you've encountered, forms episodic memories that link observations across space and time, and plans efficient actions. This seamless integration of 3D spatial understanding, long-term memory encoding and retrieval, fluid switching between working
and episodic memory, and purposeful action planning — cognitive processes that humans take for granted — remains a formidable challenge for embodied AI systems today. Recent extensions of Large Language Models (LLMs) to 3D environments have birthed 3D-LLMs (Hong et al., 2023b; Guo et al., 2023; Gu et al., 2024; Huang et al., 2024b; Xu et al., 2025a) that can perceive and reason about 3D spaces, while 3D Vision-Language-Action models (Zhen et al., 2024; Zhao et al., 2025; Intelligence et al., 2025) further incorporate the ability to plan and act within these environments. Despite these advances, several critical limitations persist that prevent models from performing the kinds of tasks described above. First, current models struggle to maintain long-term memory chains when performing complex tasks that unfold across multiple visual scenarios, such as several rooms in a house, and extended time frames. Real-world 3D physical scenes are remarkably vast and information-dense, where every detail can matter for long-horizon embodied tasks — for instance, in Figure 1, finding the most suitable gift box requires remembering all the gift boxes encountered along the way, their characteristics, and their interactions with the teddy bear. Dense 3D representations are particularly valuable as they capture comprehensive spatial information, preserving intricate geometric relationships and environmental details that sparse or object-centric approaches might miss. However, how to accurately and efficiently store dense 3D memory remains a fundamental challenge: retrieving the entire history would overwhelm the model's context limits, while selective retrieval (Xie et al., 2024; Wang et al., 2024; Yang et al., 2025b) risks omitting critical information needed for accurate reasoning and decision-making.
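To make the trade-off concrete, here is a minimal sketch of selective top-k retrieval over pooled observation features (our own illustration, not code from the paper; the cosine metric and the value of k are arbitrary choices): anything outside the top k, however task-critical, is silently dropped.

```python
import numpy as np

def retrieve_top_k(query, memory_bank, k=2):
    """Return indices and features of the k stored observations most similar
    to the current query, by cosine similarity.

    query: (d,) pooled feature of the current observation;
    memory_bank: (T, d) one pooled feature per past observation.
    """
    q = query / np.linalg.norm(query)
    m = memory_bank / np.linalg.norm(memory_bank, axis=1, keepdims=True)
    sims = m @ q                       # cosine similarity to each past entry
    top = np.argsort(-sims)[:k]        # keep only the k most similar entries
    return top, memory_bank[top]

rng = np.random.default_rng(2)
bank = rng.normal(size=(10, 16))                # 10 past observations
query = bank[3] + 0.01 * rng.normal(size=16)    # query nearly identical to entry 3
idx, retrieved = retrieve_top_k(query, bank, k=2)
```

Entry 3 is retrieved because it is nearly identical to the query; an observation that is dissimilar in feature space but still needed for the task would never make it into the top k.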
The second challenge resides in the entanglement of spatial and temporal memory — agents must track not only where objects are, but how they change over time through exploration and interaction. As environments evolve, maintaining coherent representations of previously seen spaces while incorporating new information continues to exceed the capabilities of current embodied AI models. Our efforts at solving this challenge are two-fold. First, we introduce a novel benchmark for reasoning, planning and acting with long-term spatial-temporal memory in embodied environments. Our benchmark, 3DMEM-BENCH, encompasses multi-room 3D scenes from the Habitat environment, augmented with interactive objects to enable manipulation tasks across extended spatial-temporal horizons. Notably, we define fine-grained embodied tasks across varying levels of difficulty—from simple to hard—enabling deeper insight into model performance, which we believe is not addressed in prior benchmarks, as shown in Table 1. Our task set spans a wide range of complexities, from straightforward object collection to challenging comparative reasoning tasks that require integrating observations across multiple rooms and time steps. Additionally, we include in-the-wild challenge tasks to evaluate the model's generalization capabilities beyond seen environments. The benchmark includes three evaluation categories: (1) embodied tasks requiring extended action sequences across multiple rooms, (2) spatial-temporal embodied question answering (EQA) that evaluates understanding of spatial relationships over time, and (3) long-term scene captioning that tests memorization of previously observed environments. Our dataset includes 26,000+ trajectory examples spanning 182+ unique scenes with an average of 18 rooms per scene. Second, we
introduce 3DLLM-MEM, a 3D embodied LLM with dynamic memory management capabilities designed specifically for embodied spatial-temporal reasoning, planning and acting. To our knowledge, we are among the first to explore dense 3D representations as memory for embodied 3D LLMs — addressing a significant gap in current research as noted in recent 3D memory studies (Yang et al., 2025b). Unlike standard approaches that rely solely on context windows (Hong et al., 2023b; Huang et al., 2024b; Zhu et al., 2024), 3DLLM-MEM implements a dual-memory system: a limited-capacity working memory for current observations and an expandable episodic memory that stores past spatial-temporal information as dense 3D representations. The key innovation is our memory fusion module that actively integrates information from both memory systems based on task relevance and spatial-temporal relationships. This allows the model to leverage the benefits of dense 3D representations while mitigating their computational demands, maintaining coherent spatial-temporal understanding across extended task horizons. The fusion process preserves critical spatial relationships while accounting for their evolution through agent interactions over time.

Benchmark                            | #Test Tasks | #Train Trajectories | Long-term Memory | Fine-grained complexity | EQA | Captioning
ALFWorld (Shridhar et al., 2021)     | 274         | 3,553               | ×                | ×                       | NA  | NA
Behavior-1K (Li et al., 2024a)       | 1,000       | NA                  | ×                | ×                       | NA  | NA
VisualAgentBench (Liu et al., 2024)  | 746         | 4,482               | ×                | ×                       | NA  | NA
EmbodiedBench (Yang et al., 2025a)   | 1,128       | NA                  | ×                | ×                       | NA  | NA
3DMEM-BENCH (ours)                   | 1,860       | 26,276              | ✓                | ✓                       | 865 | 167

Table 1: Comparison with related benchmarks. 3DMEM-BENCH focuses on spatial-temporal memory through fine-grained embodied tasks and EQA that span multiple "pieces" of long-term memory, distinguishing it from prior benchmarks that typically target single-step or short-horizon reasoning. Fine-grained complexity indicates that our embodied tasks span from simple to medium to hard.
We evaluate popular 3D-LLMs and memory mechanisms on 3DMEM-BENCH. Experimental results demonstrate that 3DLLM-MEM significantly outperforms all existing approaches in both in-domain and in-the-wild embodied tasks. Notably, while the performance of other methods drops sharply in the challenging in-the-wild setting, our method remains robust, achieving an average success rate of 32.1%, demonstrating strong generalization capabilities. As task complexity increases from simple to hard, all existing approaches degrade significantly, achieving only ∼5% success rate on hard in-the-wild tasks. In contrast, 3DLLM-MEM maintains a strong performance of 27.8%, demonstrating its scalability and effectiveness in managing longer-term memory representations. Our contributions can be summarized as below:

•We propose a novel task that requires agents to execute action chains while maintaining and utilizing long-term spatial-temporal memory.
•We construct 3DMEM-BENCH, a comprehensive benchmark comprising over 26,000 trajectories and 1,860 fine-grained long-term memory embodied tasks—ranging from simple to hard—along with question-answering tasks that target memory changes across time and space, and captioning tasks in complex 3D environments.
•We propose 3DLLM-MEM, an embodied 3D LLM with a novel memory fusion module for spatial-temporal reasoning, planning, and acting, which utilizes working memory tokens as queries to selectively fuse relevant features from episodic memory for efficient, task-aware decision-making.
•Experimental results on embodied tasks, question-answering, and captioning demonstrate that 3DLLM-MEM outperforms baselines by a large margin.

2 The Embodied 3D Long-Term Spatial-Temporal
Memory Benchmark

2.1 Overview of 3DMEM-BENCH

Design principles. Long-term memory (Camina and Güell, 2017; Friedman et al., 2018; Zlotnik and Vansintjan, 2019) can be categorized into explicit memory and implicit memory. Explicit memory includes semantic memory, which stores general knowledge and facts about the world, and episodic memory, which consists of personal experiences that are time-stamped and context-specific. In contrast, implicit memory primarily involves procedural memory, such as learned skills and habits. To comprehensively evaluate 3D long-term memory for real-world applications, we design 3DMEM-BENCH following three core task categories: embodied tasks, long-term memory EQA, and captioning. As illustrated in Figure 2, embodied tasks require an embodied agent to solve realistic indoor environment challenges by leveraging both implicit and explicit long-term memory. Long-term memory EQA tests the agent's ability to answer complex embodied questions using spatial-temporal memory. This task includes five subcategories: spatial reasoning questions, long-term object navigation, comparative reasoning, multi-room layout understanding, and semantic object counting. Captioning tasks involve summarizing the agent's episodic memory to highlight shared and distinctive features across experiences, enabling more informed decision-making under the current task context.
Figure 2: Overview of 3DMEM-BENCH. For long-term memory embodied tasks, we further incorporate in-the-wild challenges to test the 3D agent's generalization abilities. Text inside < > indicates high-level action tokens. For complete embodied task trajectories, please refer to Appendix C.

2.2 Data Collection

Base environment construction. We build our scenes on top of the Habitat-Matterport 3D (HM3D) semantics dataset (Ramakrishnan et al., 2021), which contains 1,000 3D spaces and 10,600 rooms within those spaces. After pre-processing the axis-aligned bounding boxes and keeping only spaces with valid semantic label annotations, we filter down to 182 3D spaces and 2,602 rooms. However, existing objects in HM3D scenes are not interactive in Habitat-sim (Szot et al., 2021). To expand task diversity and enable embodied tasks, we add interactive objects from Objaverse (Deitke et al., 2023), which consists of 800K 3D objects spanning rich categories. More environment construction details are given in Appendix B.

Generating task trajectories. Following Hong et al. (2023b, 2024), we adopt box-demonstration-instruction-based prompting, which utilizes the axis-aligned bounding boxes (AABB) of both rooms and objects within the 3D scenes to prompt Gemini (Team et al., 2023) to generate diverse tasks. We further prompt Gemini to incorporate interactive objects based on task requirements and their appropriateness within indoor environments. Detailed prompt instructions and few-shot demonstration examples are provided in Appendix E.
To ensure the validity of the generated trajectories, we develop a trajectory simulation pipeline that verifies each trajectory step by step. At every step, the simulator checks: (1) the correctness of the agent's location, (2) the existence and validity of referenced objects, and (3) the correctness of pick-up and put-down actions. Finally, we ensure that high-level actions can be executed in the simulator, following (Szot et al., 2024; Yang et al., 2025a). Details of this implementation are in Appendix F.1. On average, our filtering process yields a validation rate of approximately 24%, ensuring the correctness and feasibility of the generated trajectories.

Embodied data collection. In our task settings, an embodied agent first performs random exploration within the environment to collect RGB-D observations and corresponding camera poses.
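The three per-step validity checks can be sketched as a simple state machine (a hypothetical illustration: the step schema, field names, and exact rules here are our own, not the authors' pipeline):

```python
def validate_trajectory(steps, rooms, scene_objects, room_of):
    """Return False at the first step that violates one of the three checks.

    steps: list of dicts with "action", "room", and optionally "object";
    rooms: set of valid room names; scene_objects: set of object ids;
    room_of: current room of each object (mutated as objects move).
    """
    holding = None
    for step in steps:
        # (1) the agent's claimed location must be a valid room in the scene
        if step["room"] not in rooms:
            return False
        # (2) any referenced object must exist in the scene
        obj = step.get("object")
        if obj is not None and obj not in scene_objects:
            return False
        # (3) pick-up needs co-location and a free hand; put-down needs the held object
        if step["action"] == "pick_up":
            if holding is not None or room_of.get(obj) != step["room"]:
                return False
            holding = obj
        elif step["action"] == "put_down":
            if holding != obj:
                return False
            room_of[obj] = step["room"]   # the object moves with the agent
            holding = None
    return True
```

A real pipeline would additionally check that each high-level action is executable in Habitat-sim, as the paragraph above notes.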
Figure 3: (a) We propose 3DLLM-MEM, a memory-enhanced 3D embodied agent that gradually forms its long-term memory while executing tasks. Multiple timesteps are shown together but in different colors, with each timestep's memory including the prior one. The task is "prepare a simple breakfast" as shown in Figure 2. (b) Overview of our memory fusion mechanism.

Then the agent follows the task trajectory, incrementally exploring new environments, executing interaction actions, and receiving feedback with new RGB-D observation data. All interaction results are recorded, and the reconstructed point cloud data is precomputed and stored locally to enable faster loading during both training and inference.

2.3 Data Curation

As mentioned previously, we collect embodied data by prompting Gemini. To enable a fine-grained analysis of long-term memory capacity, we divide the tasks into three subcategories: simple, medium, and hard, comprising 3, 5, and 10 multi-room scene settings, respectively. In total, we collect 51K trajectories, with 31K in the simple setting, 10K in the medium, and 10K in the hard. To construct in-domain evaluation sets, we first remove training tasks and filter for instances that never appear in the agent's working memory. For the in-the-wild evaluation set, we apply additional filtering to assess the agent's generalization capabilities. Specifically, we select instances involving unseen objects and entirely unseen memory contexts, and we introduce novel in-the-wild challenges that differ from those encountered during training, as illustrated in Figure 2. For EQA data curation, we extract complete trajectories explored by agents and then prompt Gemini to generate question-answer pairs. The questions are categorized into spatial reasoning, long-term object navigation, comparative reasoning, multi-room layout understanding, and semantic object counting.
As shown in Figure 2, these questions evaluate models on spatial-temporal changes in memory during embodied task execution. For long-term memory captioning, which primarily targets semantic episodic memory, we collect data across multiple rooms before and after the execution of each trajectory, enabling comparison and summarization of memory-relevant experiences.

Quality control. After constructing the entire benchmark, we implement two quality control procedures: automatic validation using trajectory simulation rules and a manual review of each benchmark instance. The automatic check involves re-running the trajectory simulation validation pipeline described in §2.2, particularly for the in-the-wild tasks. For human validation, four student experts in the field manually inspect each benchmark example. We render multi-view images of the entire scene using the simulator and verify whether the benchmark annotations accurately correspond to the simulated environment. More details are in Appendix F.2.

3 3D Long-Term Spatial-Temporal Memory Model (3DLLM-MEM)

3.1 Preliminary

Recent work on 3D Large Language Models (3D-LLMs) has showcased robust capabilities. We choose LLaVA-3D (Zhu et al., 2024) as the base model on which to build our long-term memory 3D-LLM. LLaVA-3D builds directly on a 2D-LLM, taking multi-view images as input and utilizing 3D position embeddings to place the 2D patches in a 3D spatial context, thereby constructing 3D patches. For each frame, a CLIP encoder splits the image X ∈ R^{3×W×H} into patches of size P. For each 3D scene, V multi-view image patch features are encoded and then projected
into LLM space as X_p ∈ R^{V×d×w×h}, where h = H/P, w = W/P, and d denotes the LLM's hidden dimension. The 3D positions in the 3D world are obtained from the known depth images and camera intrinsic and extrinsic parameters, and are further encoded into 3D position embeddings P ∈ R^{V×d×w×h}. These are directly added to the 2D patch visual tokens X_p, resulting in pixel-aligned 3D patches X_3D ∈ R^{V×d×w×h}. To reduce redundancy in the 3D patches, we adopt the Farthest Point Sampling (FPS) strategy to downsample the 3D features to a fixed number of tokens, resulting in X_3D^Feat ∈ R^{N×d}.

3.2 3DLLM-MEM Memory Module

A 3D embodied agent gradually explores the environment by collecting observations and interacting with its surroundings. In humans, current observations are held in working memory, while longer-term observations and experiences are stored in episodic memory. Inspired by this cognitive structure, 3DLLM-MEM is designed with a similar paradigm, as illustrated in Figure 3. The current observation at time step t = i, denoted X[t=i] ∈ R^{N×d}, remains within the context window and serves as the agent's working memory. As the agent accumulates more experiences, past observations from time steps 1 to T, represented as X[t=1:T] ∈ R^{T×N×d}, are stored as part of its episodic memory, where T denotes the total number of timesteps.

Episodic memory. To manage episodic memory, we propose the use of a memory feature bank. For each observation at time step j, where 1 ≤ j ≤ T, we first apply a multi-layer perceptron (MLP) layer to project the observation into a memory-specific feature space, which is then stored in the memory bank for future retrieval. To further enhance the temporal understanding of the agent's exploration, we encode each time step t = j with sinusoidal positional embeddings, which are then directly added to the corresponding memory feature representations.
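The memory-bank write described in the paragraph above can be sketched as follows (our own illustration: the MLP is reduced to a single ReLU layer, and the shapes and the exact sinusoidal formulation are assumptions the paper does not pin down):

```python
import numpy as np

def time_embedding(t, dim):
    """Standard sinusoidal positional embedding for a scalar time step t."""
    i = np.arange(dim // 2)
    freq = 1.0 / (10000 ** (2 * i / dim))
    emb = np.empty(dim)
    emb[0::2] = np.sin(t * freq)
    emb[1::2] = np.cos(t * freq)
    return emb

def write_to_memory_bank(bank, obs_feat, t, w, b):
    """Project an observation into the memory space (single-layer stand-in
    for the MLP), add the time embedding for step t, and store the result."""
    mem = np.maximum(obs_feat @ w + b, 0.0)        # illustrative MLP projection
    mem = mem + time_embedding(t, mem.shape[-1])   # broadcast over the N tokens
    bank.append(mem)
    return bank

rng = np.random.default_rng(3)
n_tokens, d, m = 16, 32, 24                        # N tokens, input dim, memory dim
w, b = rng.normal(size=(d, m)) * 0.1, np.zeros(m)
bank = []
for t in range(1, 4):                              # timesteps t = 1..T
    write_to_memory_bank(bank, rng.normal(size=(n_tokens, d)), t, w, b)
```

Each bank entry has shape (N, M), so the full bank corresponds to the T×N×M key/value tensors used by the fusion step that follows.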
Memory fusion. Our motivation is that an agent should leverage its current observations to recall the most relevant information from its episodic memory in order to complete the current task. To achieve this, we propose a mechanism called 3D memory fusion. Specifically, we encode the 3D features from the working memory into a shared memory space and use this representation as the query feature, denoted f^Q_t ∈ R^{N×M}, where M is the dimensionality of the memory feature space. The episodic memory bank stores the corresponding key and value features from past observations, f^K ∈ R^{T×N×M} and f^V ∈ R^{T×N×M}, respectively. Here, T is the number of past timesteps and N is the number of memory tokens per timestep. This structure allows the agent to retrieve task-relevant information through memory-query attention. The fused memory feature is then concatenated with the working memory feature to produce the final memory-enhanced representation f^M for the agent:

f^Q_fuse = Softmax( f^Q_t (f^K)^⊤ / √C ) f^V,    f^M = Concat[ f^Q_fuse ; f^Q_t ]    (1)

Memory update. The working memory is dynamic and updated online. As the agent interacts with the environment, changes in the environment are immediately reflected in the working memory through updated 3D representations. When the agent moves to a new environment, the previous working memory is transferred to the episodic memory bank. If the corresponding environment already exists in the memory bank and has been modified by the agent, the memory entry
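Eq. (1) can be sketched in NumPy as follows (our own illustration; flattening the T timesteps into a single attention pool and setting the scaling constant C to the feature dimension M are assumptions the text leaves open):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_fusion(f_q, f_k, f_v):
    """Working-memory tokens attend over the episodic memory bank (Eq. 1).

    f_q: (N, M) working-memory queries; f_k, f_v: (T, N, M) keys/values for
    T past timesteps, flattened here into one pool of T*N tokens.
    """
    T, N, M = f_k.shape
    keys = f_k.reshape(T * N, M)
    values = f_v.reshape(T * N, M)
    attn = softmax(f_q @ keys.T / np.sqrt(M))       # (N, T*N) attention weights
    f_fuse = attn @ values                          # fused episodic features, (N, M)
    return np.concatenate([f_fuse, f_q], axis=-1)   # f_M = [f_fuse ; f_q], (N, 2M)

rng = np.random.default_rng(1)
f_m = memory_fusion(rng.normal(size=(8, 32)),
                    rng.normal(size=(5, 8, 32)),
                    rng.normal(size=(5, 8, 32)))
```

The concatenation keeps the raw working-memory tokens alongside the recalled episodic features, so current observations are never lost in the fusion.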
is updated accordingly. Thus, the memory bank remains dynamic and reflects the latest state of the explored environments. As described in §2.2, environment changes and corresponding observations are pre-collected and stored locally to facilitate efficient data loading during both training and inference.

4 Experiments

In this section, we first introduce the experimental setup and existing memory management baselines in §4.1. Then, we benchmark existing approaches on 3DMEM-BENCH and present comprehensive results on embodied tasks, EQA, and captioning tasks to demonstrate the effectiveness of our 3DLLM-MEM in §4.2, along with qualitative results. Finally, in §4.3, we conduct an ablation study of key design choices in 3DLLM-MEM, demonstrating the effectiveness of our proposed memory fusion mechanism.

Model                      | Simple (ID) | Simple (ITW) | Medium (ID) | Medium (ITW) | Hard (ID)   | Hard (ITW)  | Avg (ID)    | Avg (ITW)
3D-LLM (Finetuned)         | 10.4 / 20.3 | 9.1 / 18.5   | -           | -            | -           | -           | -           | -
Everything in Context      | 35.5 / 63.9 | 32.4 / 45.2  | -           | -            | -           | -           | -           | -
Most Recent Memory         | 32.8 / 62.3 | 23.4 / 38.6  | 20.1 / 34.8 | 12.4 / 25.3  | 10.4 / 20.7 | 5.4 / 12.1  | 21.1 / 39.3 | 13.7 / 25.3
Retrieval-Augmented Memory | 34.2 / 63.0 | 28.3 / 46.2  | 21.8 / 40.2 | 13.7 / 28.0  | 10.8 / 21.6 | 4.8 / 10.6  | 22.3 / 41.6 | 15.6 / 28.3
3DLLM-MEM (Ours)           | 45.5 / 73.4 | 37.0 / 65.4  | 36.8 / 67.8 | 31.6 / 57.4  | 30.5 / 46.2 | 27.8 / 42.1 | 37.6 / 62.5 | 32.1 / 55.0

(a) Results on 3DMEM-BENCH embodied tasks. ID = in-domain, ITW = in-the-wild; each cell reports SR / Sub-SR, where SR stands for success rate and Sub-SR for sub-success rate. Our model outperforms existing approaches by a large margin.

Model                      | Embodied (ID) | Embodied (ITW) | Spatial | Nav. | Comparative | Layout | Count | BLEU1 | BLEU4 | METEOR
3D-LLM (Finetuned)         | -             | -              | 2.9     | 5.8  | 0.0         | 7.7    | 0.0   | 42.3  | 12.0  | 30.6
3D-Mem (GPT4-o)            | -             | -              | 39.9    | 11.0 | 25.8        | 19.1   | 7.8   | 41.7  | 4.7   | 31.8
3D-Mem (Gemini-2.5-Flash)  | -             | -              | 41.6    | 18.2 | 37.6        | 30.2   | 12.7  | 42.8  | 4.8   | 29.6
3D-Mem (Gemini-2.5-Pro)    | -             | -              | 39.7    | 27.7 | 36.0        | 35.2   | 16.4  | 41.5  | 3.0   | 28.6
Most Recent Memory         | 21.1          | 13.7           | 27.5    | 30.2 | 24.3        | 20.1   | 10.5  | 32.4  | 10.1  | 25.6
Retrieval-Augmented Memory | 22.3          | 15.6           | 38.0    | 33.4 | 31.8        | 29.7   | 15.6  | 40.8  | 11.5  | 29.3
3DLLM-MEM (Ours)           | 37.6          | 32.1           | 62.8    | 40.6 | 41.4        | 39.9   | 26.3  | 58.2  | 18.8  | 37.3

(b) Results on all tasks in 3DMEM-BENCH. Average success rate is reported for embodied tasks. Nav. stands for long-term object navigation. We report accuracy scores for open-ended EQA evaluation and follow the standard LLM-as-judge evaluation protocol by prompting Gemini. Evaluation details are provided in Appendix E.

Table 2: Comparison with 3D memory models and standard memory management approaches. Our model, 3DLLM-MEM, achieves the best performance across embodied, EQA, and captioning tasks.

4.1 Experimental Setup

Implementation details. We implement our model based on LLaVA-3D (Zhu et al., 2024), modifying it to be compatible with Google TPUs via the PyTorch/XLA frameworks (Paszke et al., 2019; team, 2017–2025). We first expand the model's context window to 8192 tokens to accommodate long-term memory inputs. We then fine-tune our proposed memory module along with the LLM decoder on our training split, initializing from LLaVA-3D's pretrained weights. Training
is conducted on 8 Google Cloud TPU v5p cores with a batch size of 256. Our model is trained using supervised fine-tuning (SFT) with a standard language modeling loss. More details are provided in Appendix D.

Baselines. We compare 3DLLM-MEM against a broad range of memory management approaches:

•Everything in Context. For a small subset of scenes, it is feasible to fit all observations directly into the model's context window.
•Most Recent Memory. Since retaining all observations in context is infeasible, we keep only the most recent observations, assuming they are most relevant to the current task.
•Retrieval-Augmented Memory. Inspired by retrieval-based techniques, we adopt a memory bank that stores past observations. During inference, the most relevant memory entries are retrieved and prepended to the working memory to augment reasoning.
•3D-LLM (Hong et al., 2023b). A popular 3D LLM recognized by the community. We finetune it on our training data and report its performance using the "everything in context" strategy with the longest context window supported. Further details are provided in Appendix G.
•3D-Mem (Yang et al., 2025b). A framework designed for 3D scene memory in embodied exploration and reasoning. However, this method does not support embodied interaction or action execution.

4.2 Experimental Results

Results on embodied tasks. As shown in Table 2a, 3DLLM-MEM significantly outperforms all existing approaches on both in-domain and in-the-wild embodied tasks. Notably, while the performance of other methods drops sharply in the in-the-wild setting, our method demonstrates strong generalization capabilities with an average success rate of 32.1%. 3D-LLM showcases the lowest performance even under simple task settings, highlighting the necessity of incorporating an explicit memory module. Both the Most Recent Memory and Retrieval-Augmented Memory (RAG) baselines perform poorly in this setting, with RAG showing only a slight improvement, highlighting the challenges of retrieving relevant episodic memory. Interestingly, the Everything in Context baseline performs better than both recent memory and RAG approaches, suggesting that when all information can fit within the context window, the model can effectively utilize it. However, 3DLLM-MEM still outperforms Everything in Context, indicating the benefits of selectively fusing task-relevant memory features to better guide embodied reasoning and execution. As task complexity increases from simple to hard, all existing approaches degrade significantly, achieving only ∼5% success rate on hard in-the-wild tasks. In contrast, 3DLLM-MEM maintains a strong performance of 27.8%, demonstrating its scalability and effectiveness in managing longer-term memory representations.

Figure 4: Qualitative example of 3DLLM-MEM on the task "Prepare a cozy reading nook in the living room with two books and a teacup", which maintains and utilizes a long-term memory to complete the task. The detailed task execution trajectory can be found in Figure 6.

Results on long-term EQA and captioning. As shown in Table 2b, 3DLLM-MEM consistently outperforms all existing approaches across all tasks in our benchmark. Notably, 3D-LLM achieves the second-best performance on the captioning task, highlighting its strong ability to summarize object-centric semantic memory. However, due to limited context length, it performs poorly on the EQA tasks, which
require long-term spatial-temporal reasoning. 3D-Mem demonstrates improved performance in EQA over other baseline approaches. However, it falls short on spatial relation, navigation and object counting tasks, indicating the limitation of relying solely on aggregated image-centric memories. 3DLLM-MEM significantly outperforms both Most Recent Memory and RAG Memory, which further demonstrates the effectiveness of our memory fusion technique.
Qualitative results We provide qualitative examples in Figure 4 and a more detailed version with explanations in Figure 6 (Appendix H), demonstrating that 3DLLM-MEM is capable of maintaining long-term memory and executing complex tasks in embodied environments.
4.3 Ablation Study
Our approach initializes the fused memory using working memory features, aiming to fuse the most relevant memories for the current task. We ablate several design choices for initializing the fusion query, as shown in Table 3. When using either the most recent episodic memory or learnable zero parameters, performance degrades compared to our proposed method. Interestingly, using the most recent memory outperforms zero initialization in the simple setting but underperforms in the hard setting. One possible explanation is that recent memory initialization encourages fusion with nearby observations, which may be sufficient for simple tasks and leads to faster convergence. In contrast, zero initialization is guided solely by training supervision to learn which memories are most useful. In summary, the ablation results demonstrate that initializing fusion queries with working memory tokens provides the most effective and robust design choice for long-term memory fusion.
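For a concrete picture of the Retrieval-Augmented Memory baseline from §4.1, a minimal sketch is shown below. The class name, feature shapes, and top-k cosine scoring are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class RetrievalMemory:
    """Toy retrieval-augmented memory: store past observation features,
    retrieve the top-k most similar entries for the current query."""

    def __init__(self, top_k=4):
        self.bank = []          # list of (unit-normalized feature, payload)
        self.top_k = top_k

    def add(self, feature, payload):
        # Normalize once at insertion so retrieval reduces to a dot product.
        f = np.asarray(feature, dtype=np.float64)
        self.bank.append((f / np.linalg.norm(f), payload))

    def retrieve(self, query):
        q = np.asarray(query, dtype=np.float64)
        q = q / np.linalg.norm(q)
        scores = [float(f @ q) for f, _ in self.bank]
        order = np.argsort(scores)[::-1][: self.top_k]
        # The retrieved entries would be prepended to the working memory.
        return [self.bank[i][1] for i in order]
```

A caller would append the retrieved entries before the working-memory tokens prior to decoding, which is the "appended before the working memory" step described in the baselines list.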
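The query-initialization choices ablated in Table 3 can be illustrated with a toy single-head cross-attention over episodic-memory tokens. Everything here (plain numpy, the shapes, the unparameterized softmax attention) is a simplified stand-in for the actual fusion module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_memory(queries, episodic):
    """Cross-attention sketch: fusion queries attend over episodic tokens.
    queries:  (n_q, d) initial fusion queries
    episodic: (n_m, d) long-term episodic memory tokens
    Returns   (n_q, d) fused memory features."""
    d = queries.shape[-1]
    attn = softmax(queries @ episodic.T / np.sqrt(d), axis=-1)
    return attn @ episodic

rng = np.random.default_rng(0)
episodic = rng.normal(size=(16, 8))   # toy long-term memory tokens
working = rng.normal(size=(4, 8))     # current working-memory tokens

# The three initialization schemes compared in Table 3:
fused_working = fuse_memory(working, episodic)           # ours: working-memory init
fused_recent = fuse_memory(episodic[-4:], episodic)      # most recent episodic init
fused_zero = fuse_memory(np.zeros((4, 8)), episodic)     # learnable-zero init (pre-training)
```

With zero-initialized queries the attention weights are uniform before any training, so every fused token equals the episodic mean; training supervision alone must shape these queries, which matches the explanation above for why zero initialization converges more slowly on simple tasks.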
5 Related Works
3D Large Language Models 3D Large Language Models (3D-LLMs) have demonstrated promising results across a wide variety of tasks, including 3D scene understanding, object detection, and segmentation (Hong et al., 2023b; Zhou et al., 2024; Huang et al., 2024a; Chen et al., 2024b; Xu et al., 2025a). In parallel, 3D embodied agents have expanded these capabilities to planning and action in interactive environments (Brohan et al., 2023; Huang et al., 2024b; Chen et al., 2024a; Black et al., 2024). Yet, existing models face significant challenges when performing long-horizon embodied tasks in densely populated 3D environments that require reasoning over long-term spatial-temporal memory. To address this, we propose an explicit memory module inspired by the structure of human implicit and explicit memory. Our model employs a memory fusion mechanism that efficiently retrieves and learns task-relevant information, resulting in enhanced performance on complex embodied tasks.

Table 3: Ablation study of query initialization designs in our memory fusion module. Each cell reports SR / Sub-SR; ID = In-domain, ITW = In-the-wild.
Model | Simple ID | Simple ITW | Medium ID | Medium ITW | Hard ID | Hard ITW | Average ID | Average ITW
3DLLM-MEM | 45.5 / 73.4 | 37.0 / 65.4 | 36.8 / 67.8 | 31.6 / 57.4 | 30.5 / 46.2 | 27.8 / 42.1 | 37.6 / 62.5 | 32.1 / 55.0
Init with Most Recent Episodic Memory | 42.3 / 69.4 | 28.6 / 50.7 | 32.4 / 58.6 | 23.7 / 45.1 | 22.6 / 37.8 | 15.3 / 31.4 | 32.4 / 55.3 | 22.5 / 42.4
Init with Learnable Zero Parameters | 41.4 / 67.2 | 27.9 / 50.0 | 33.0 / 59.2 | 23.4 / 45.8 | 24.2 / 40.4 | 18.6 / 35.6 | 32.9 / 55.6 | 23.3 / 43.8

Long-term Embodied Trajectories Embodied AI simulators (Chang et al., 2017; Kolve et al., 2017; Szot et al., 2021; Shen et al., 2021) have fostered
the development of embodied AI agents. Grounded in these environments, some existing benchmarks focus on high-level planning tasks, typically involving short trajectories that can often be completed within single-room settings, thereby requiring minimal spatial-temporal memory (Shridhar et al., 2020, 2021; Li et al., 2024a; Szot et al., 2024; Li et al., 2024b; Yang et al., 2025a). Other benchmarks emphasize long-term scene exploration with extended trajectories, but are primarily centered around navigation tasks and often lack embodied interaction support (Deitke et al., 2020; Ramakrishnan et al., 2021; Krantz et al., 2022; Khanna et al., 2024). To bridge this gap, we introduce 3DMEM-BENCH, a benchmark specifically designed to evaluate long-horizon task execution that requires rich spatial-temporal memory and full embodied task support, as summarized in Table 1.
Embodied Question Answering Benchmark Embodied Question Answering (EQA) benchmarks (Das et al., 2018; Wijmans et al., 2019; Yu et al., 2019) have been developed to advance goal-driven agents that can perceive their environment. Some EQA benchmarks also include embodied memory QA evaluation, such as OpenEQA (Majumdar et al., 2024), which includes an episodic memory QA split, and Yang et al. (2024), which focuses on spatial memory QA. In contrast, our benchmark, 3DMEM-BENCH, jointly targets both spatial and episodic memory, especially their changes over time, while also supporting embodied action tasks, EQA and captioning. For specific comparison on EQA, our long-term memory EQA tasks are designed to require reasoning over multiple "pieces" of memory and their changes across time and space. Additionally, we consider the agent's location in the scene at the moment of answering each question during evaluation.
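As a concrete illustration of these design goals, a long-term memory EQA item might be represented as follows; the schema and field names are hypothetical, not 3DMEM-BENCH's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class LongTermEQAItem:
    """Hypothetical schema for a long-term memory EQA item, reflecting the
    design goals stated above: multiple memory 'pieces' spread across time
    and space, plus the agent's location at answering time."""
    question: str
    answer: str
    # (room_id, timestep) pairs the answer depends on.
    memory_pieces: list = field(default_factory=list)
    # Agent's room when the question is posed (used during evaluation).
    agent_room_at_answer: int = -1

    def requires_cross_time_reasoning(self) -> bool:
        # True when the referenced memories span more than one timestep.
        return len({t for _, t in self.memory_pieces}) > 1
```

An item asking "Where was the vase before it was moved?" would reference observations at two timesteps, so `requires_cross_time_reasoning()` returns True.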
Memory System Memory is a fundamental component of AI systems, with early work on LLM agents that utilize memory for decision-making in web-based and sandbox environments (Shinn et al., 2023; Zhang et al., 2023; Packer et al., 2023; Zhang et al., 2024). Most existing approaches construct an experience pool or memory bank and focus on improving the retrieval of useful past information (Zhao et al., 2024; Gao et al., 2024; Xu et al., 2025b). In the computer vision domain, temporal memory has been studied extensively in video understanding and generation tasks (Wang et al., 2021; Diao et al., 2025), while spatial memory has been applied to scene-level visual understanding and 3D reconstruction (Wang and Agapito, 2024; Zou et al., 2025). Recent work such as 3D-Mem (Yang et al., 2025b) has investigated 3D scene memory for exploration and reasoning by prompting vision-language models. In contrast, our work focuses on dense 3D memory representations that are critical for real-world embodied scenarios, where task execution depends heavily on maintaining and reasoning over long-term spatial-temporal memory.
6 Conclusion
In this work, we introduce 3DMEM-BENCH, a comprehensive benchmark containing fine-grained long-term memory embodied tasks, ranging from simple to hard, along with question-answering tasks that target memory changes across time and space, and a captioning task in complex 3D environments. We propose 3DLLM-MEM, an embodied 3D-LLM with a novel memory fusion approach for spatial-temporal reasoning, planning, and acting. One limitation of our model is that currently
3DLLM-MEM does not involve a low-level navigation and control policy, but utilizes high-level predefined policies in the simulator for carrying out the actions. We think that such aspects are orthogonal to our study, and could be explored and seamlessly integrated into our framework in the future.
References
Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Lucy Xiaoyang Shi, James Tanner, Quan Vuong, Anna Walling, Haohuan Wang, and Ury Zhilinsky. 2024. π0: A vision-language-action flow model for general robot control.
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alex Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich. 2023. RT-2: Vision-language-action models transfer web knowledge to robotic control. ArXiv preprint, abs/2307.15818.
Eduardo Camina and Francisco Güell. 2017. The neuroanatomical, neurophysiological and psychological basis of memory: Current models and their origins. Frontiers in Pharmacology, 8:438.
Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. 2017.
Matterport3D: Learning from RGB-D data in indoor environments. International Conference on 3D Vision (3DV).
Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Dorsa Sadigh, Leonidas J. Guibas, and Fei Xia. 2024a. SpatialVLM: Endowing vision-language models with spatial reasoning capabilities. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 14455–14465. IEEE.
Yilun Chen, Shuai Yang, Haifeng Huang, Tai Wang, Ruiyuan Lyu, Runsen Xu, Dahua Lin, and Jiangmiao Pang. 2024b. Grounded 3D-LLM with referent tokens. ArXiv preprint, abs/2405.10370.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. ArXiv preprint, abs/2210.11416.
Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. 2018. Embodied question answering. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 1–10. IEEE Computer Society.
Matt Deitke, Winson Han, Alvaro Herrasti, Aniruddha Kembhavi, Eric Kolve, Roozbeh Mottaghi, Jordi Salvador, Dustin Schwenk, Eli VanderBilt, Matthew Wallingford, Luca Weihs, Mark Yatskar, and Ali Farhadi. 2020. RoboTHOR: An open simulation-to-real embodied AI platform. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 3161–3171. IEEE.
Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca
Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. 2023. Objaverse: A universe of annotated 3D objects. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 13142–13153. IEEE.
Xingjian Diao, Chunhui Zhang, Weiyi Wu, Zhongyu Ouyang, Peijun Qing, Ming Cheng, Soroush Vosoughi, and Jiang Gui. 2025. Temporal working memory: Query-guided segment refinement for enhanced multimodal understanding.
Gary N Friedman, Luke Johnson, and Zachary M Williams. 2018. Long-term visual memory and its role in learning suppression. Frontiers in Psychology, 9:1896.
Jinglong Gao, Xiao Ding, Yiming Cui, Jianbai Zhao, Hepeng Wang, Ting Liu, and Bing Qin. 2024. Self-evolving GPT: A lifelong autonomous experiential learner.
Qiao Gu, Ali Kuwajerwala, Sacha Morin, Krishna Murthy Jatavallabhula, Bipasha Sen, Aditya Agarwal, Corban Rivera, William Paul, Kirsty Ellis, Rama Chellappa, et al. 2024. ConceptGraphs: Open-vocabulary 3D scene graphs for perception and planning. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 5021–5028. IEEE.
Ziyu Guo, Renrui Zhang, Xiangyang Zhu, Yiwen Tang, Xianzheng Ma, Jiaming Han, Kexin Chen, Peng Gao, Xianzhi Li, Hongsheng Li, and Pheng-Ann Heng. 2023. Point-Bind & Point-LLM: Aligning point cloud with multi-modality for 3D understanding, generation, and instruction following.
Yining Hong, Chunru Lin, Yilun Du, Zhenfang Chen, Joshua B. Tenenbaum, and Chuang Gan. 2023a. 3D concept learning and reasoning from multi-view images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 9202–9212. IEEE.
Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan. 2023b. 3D-LLM: Injecting the 3D world into large language models.
In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.
Yining Hong, Zishuo Zheng, Peihao Chen, Yian Wang, Junyan Li, and Chuang Gan. 2024. MultiPLY: A multisensory object-centric embodied large language model in 3D world. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 26396–26406. IEEE.
Haifeng Huang, Yilun Chen, Zehan Wang, Rongjie Huang, Runsen Xu, Tai Wang, Luping Liu, Xize Cheng, Yang Zhao, Jiangmiao Pang, and Zhou Zhao. 2024a. Chat-Scene: Bridging 3D scene and large language models with object identifiers. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024.
Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, and Siyuan Huang. 2024b. An embodied generalist agent in 3D world. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net.
Physical Intelligence, Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Manuel Y. Galliker, Dibya Ghosh, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Devin
LeBlanc, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Allen Z. Ren, Lucy Xiaoyang Shi, Laura Smith, Jost Tobias Springenberg, Kyle Stachowicz, James Tanner, Quan Vuong, Homer Walke, Anna Walling, Haohuan Wang, Lili Yu, and Ury Zhilinsky. 2025. π0.5: A vision-language-action model with open-world generalization.
Mukul Khanna, Ram Ramrakhya, Gunjan Chhablani, Sriram Yenamandra, Théophile Gervet, Matthew Chang, Zsolt Kira, Devendra Singh Chaplot, Dhruv Batra, and Roozbeh Mottaghi. 2024. GOAT-Bench: A benchmark for multi-modal lifelong navigation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 16373–16383. IEEE.
Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. 2017. AI2-THOR: An interactive 3D environment for visual AI. arXiv.
Jacob Krantz, Stefan Lee, Jitendra Malik, Dhruv Batra, and Devendra Singh Chaplot. 2022. Instance-specific image goal navigation: Training embodied agents to find object instances.
Chengshu Li, Ruohan Zhang, Josiah Wong, Cem Gokmen, Sanjana Srivastava, Roberto Martín-Martín, Chen Wang, Gabrael Levine, Wensi Ai, Benjamin Martinez, Hang Yin, Michael Lingelbach, Minjune Hwang, Ayano Hiranaka, Sujay Garlanka, Arman Aydin, Sharon Lee, Jiankai Sun, Mona Anvari, Manasi Sharma, Dhruva Bansal, Samuel Hunter, Kyu-Young Kim, Alan Lou, Caleb R Matthews, Ivan Villa-Renteria, Jerry Huayang Tang, Claire Tang, Fei Xia, Yunzhu Li, Silvio Savarese, Hyowon Gweon, C. Karen Liu, Jiajun Wu, and Li Fei-Fei. 2024a. BEHAVIOR-1K: A human-centered, embodied AI benchmark with 1,000 everyday activities and realistic simulation.
Manling Li, Shiyu Zhao, Qineng Wang, Kangrui Wang, Yu Zhou, Sanjana Srivastava, Cem Gokmen, Tony Lee, Li Erran Li, Ruohan Zhang, Weiyu Liu, Percy Liang, Li Fei-Fei, Jiayuan Mao, and Jiajun Wu. 2024b.
Embodied agent interface: Benchmarking LLMs for embodied decision making. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024.
Xiao Liu, Tianjie Zhang, Yu Gu, Iat Long Iong, Yifan Xu, Xixuan Song, Shudan Zhang, Hanyu Lai, Xinyi Liu, Hanlin Zhao, et al. 2024. VisualAgentBench: Towards large multimodal models as visual foundation agents. ArXiv preprint, abs/2408.06327.
Arjun Majumdar, Anurag Ajay, Xiaohan Zhang, Pranav Putta, Sriram Yenamandra, Mikael Henaff, Sneha Silwal, Paul McVay, Oleksandr Maksymets, Sergio Arnaud, Karmesh Yadav, Qiyang Li, Ben Newman, Mohit Sharma, Vincent-Pierre Berges, Shiqi Zhang, Pulkit Agrawal, Yonatan Bisk, Dhruv Batra, Mrinal Kalakrishnan, Franziska Meier, Chris Paxton, Alexander Sax, and Aravind Rajeswaran. 2024. OpenEQA: Embodied question answering in the era of foundation models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 16488–16498. IEEE.
Charles Packer, Sarah Wooders, Kevin Lin, Vivian Fang, Shishir G. Patil, Ion Stoica, and Joseph E. Gonzalez. 2023. MemGPT: Towards LLMs as operating systems.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Florian Gribonval, Rafal Jozefowicz, et al. 2019. PyTorch. https://pytorch.org/.
Santhosh Kumar Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alexander Clegg, John M Turner, Eric Undersander, Wojciech
Galuba, Andrew Westbury, Angel X Chang, Manolis Savva, Yili Zhao, and Dhruv Batra. 2021. Habitat-Matterport 3D dataset (HM3D): 1000 large-scale 3D environments for embodied AI. ArXiv preprint, abs/2109.08238.
Bokui Shen, Fei Xia, Chengshu Li, Roberto Martín-Martín, Linxi Fan, Guanzhi Wang, Claudia Pérez-D'Arpino, Shyamal Buch, Sanjana Srivastava, Lyne P. Tchapmi, Micael E. Tchapmi, Kent Vainio, Josiah Wong, Li Fei-Fei, and Silvio Savarese. 2021. iGibson 1.0: A simulation environment for interactive tasks in large realistic scenes. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE.
Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 10737–10746. IEEE.
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew J. Hausknecht. 2021. ALFWorld: Aligning text and embodied environments for interactive learning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Andrew Szot, Alexander Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, Devendra Singh Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimir Vondrus, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel X. Chang, Zsolt Kira, Vladlen Koltun, Jitendra Malik, Manolis Savva, and Dhruv Batra.
2021. Habitat 2.0: Training home assistants to rearrange their habitat. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 251–266.
Andrew Szot, Max Schwarzer, Harsh Agrawal, Bogdan Mazoure, Rin Metcalf, Walter Talbott, Natalie Mackraz, R. Devon Hjelm, and Alexander T. Toshev. 2024. Large language models as generalizable policies for embodied tasks. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.
Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: A family of highly capable multimodal models. ArXiv preprint, abs/2312.11805.
XLA team. 2017–2025. XLA: Optimizing compiler for machine learning. https://www.tensorflow.org/xla.
Hao Wang, Weining Wang, and Jing Liu. 2021. Temporal memory attention for video semantic segmentation. In 2021 IEEE International Conference on Image Processing (ICIP), pages 2254–2258. IEEE.
Hengyi Wang and Lourdes Agapito. 2024. 3D reconstruction with spatial memory. ArXiv preprint, abs/2408.16061.
Zixuan Wang, Bo Yu, Junzhe Zhao, Wenhao Sun, Sai Hou, Shuai Liang, Xing Hu, Yinhe Han, and Yiming Gan. 2024. KARMA: Augmenting embodied AI agents with long-and-short term memory systems.
Erik Wijmans, Samyak Datta, Oleksandr Maksymets, Abhishek Das, Georgia Gkioxari, Stefan Lee, Irfan Essa, Devi Parikh, and Dhruv
Batra. 2019. Embodied question answering in photorealistic environments with point cloud perception. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6659–6668. Computer Vision Foundation / IEEE.
Quanting Xie, So Yeon Min, Pengliang Ji, Yue Yang, Tianyi Zhang, Aarav Bajaj, Ruslan Salakhutdinov, Matthew Johnson-Roberson, and Yonatan Bisk. 2024. Embodied-RAG: General non-parametric embodied memory for retrieval and generation.
Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. 2025a. PointLLM: Empowering large language models to understand point clouds. In Computer Vision – ECCV 2024, pages 131–147. Springer Nature Switzerland.
Wujiang Xu, Zujie Liang, Kai Mei, Hang Gao, Juntao Tan, and Yongfeng Zhang. 2025b. A-MEM: Agentic memory for LLM agents. ArXiv preprint, abs/2502.12110.
Jihan Yang, Shusheng Yang, Anjali W. Gupta, Rilyn Han, Li Fei-Fei, and Saining Xie. 2024. Thinking in Space: How multimodal large language models see, remember and recall spaces. ArXiv preprint, abs/2412.14171.
Rui Yang, Hanyang Chen, Junyu Zhang, Mark Zhao, Cheng Qian, Kangrui Wang, Qineng Wang, Teja Venkat Koripella, Marziyeh Movahedi, Manling Li, Heng Ji, Huan Zhang, and Tong Zhang. 2025a. EmbodiedBench: Comprehensive benchmarking multi-modal large language models for vision-driven embodied agents.
Yuncong Yang, Han Yang, Jiachen Zhou, Peihao Chen, Hongxin Zhang, Yilun Du, and Chuang Gan. 2025b. 3D-Mem: 3D scene memory for embodied exploration and reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Licheng Yu, Xinlei Chen, Georgia Gkioxari, Mohit Bansal, Tamara L. Berg, and Dhruv Batra. 2019. Multi-target embodied question answering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6309–6318. Computer Vision Foundation / IEEE.
Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, and Kai Yu. 2023. Large language model is semi-parametric reinforcement learning agent. ArXiv preprint, abs/2306.07929.
Zeyu Zhang, Xiaohe Bo, Chen Ma, Rui Li, Xu Chen, Quanyu Dai, Jieming Zhu, Zhenhua Dong, and Ji-Rong Wen. 2024. A survey on the memory mechanism of large language model based agents.
Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, and Gao Huang. 2024. ExpeL: LLM agents are experiential learners. In Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, pages 19632–19642. AAAI Press.
Qingqing Zhao, Yao Lu, Moo Jin Kim, Zipeng Fu, Zhuoyang Zhang, Yecheng Wu, Zhaoshuo Li, Qianli Ma, Song Han, Chelsea Finn, Ankur Handa, Ming-Yu Liu, Donglai Xiang, Gordon Wetzstein, and Tsung-Yi Lin. 2025. CoT-VLA: Visual chain-of-thought reasoning for vision-language-action models.
Haoyu Zhen, Xiaowen Qiu, Peihao Chen, Jincheng Yang, Xin Yan, Yilun Du, Yining Hong, and Chuang Gan. 2024. 3D-VLA: 3D vision-language-action generative world model. ArXiv preprint, abs/2403.09631.
Junsheng Zhou, Jinsheng Wang, Baorui Ma, Yu-Shen Liu, Tiejun Huang, and Xinlong Wang. 2024. Uni3D: Exploring unified 3D representation at scale. In
The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.
Chenming Zhu, Tai Wang, Wenwei Zhang, Jiangmiao Pang, and Xihui Liu. 2024. LLaVA-3D: A simple yet effective pathway to empowering LMMs with 3D-awareness. ArXiv preprint, abs/2409.18125.
Guillermo Zlotnik and Aaron Vansintjan. 2019. Memory: An extended definition. Frontiers in Psychology, 10:2523.
Xueyan Zou, Yuchen Song, Ri-Zhao Qiu, Xuanbin Peng, Jianglong Ye, Sifei Liu, and Xiaolong Wang. 2025. M3: 3D-spatial multimodal memory. In ICLR.

A Broader Impact
The deployment and release of 3DLLM-MEM carry both potential benefits and risks. These considerations include visual aspects as well as common issues found in existing LLMs like Alpaca and Vicuna. Since 3DLLM-MEM is built on LLaMA, Vicuna, and CLIP, it inherits certain challenges associated with LLMs and vision encoders. Below, we outline the risks and the mitigation strategies implemented for the release of this model.
Hallucination Similar to other LLMs, 3DLLM-MEM might produce outputs that are not based on factual information or input data. This raises concerns about the accuracy of inferences, particularly in critical applications such as medical fields.
Biases Biases present in the base models can be carried over to 3DLLM-MEM, stemming from both the vision encoder (CLIP) and the language decoder (LLaMA / Vicuna). This may result in biased outcomes or unfair representations of diverse content.
Energy Consumption We train our model on our training data split, which contains about 26K trajectories. Training takes less than one day, which makes energy consumption not a primary concern.

B Environment Construction
To support navigation-centric interaction, the agent requires precise knowledge of two things: the traversable layout of each scene and the exact locations of all movable objects.
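The core geometric step of the procedure described below, deriving each room's axis-aligned bounding box from labeled floor and ceiling point clouds with fallbacks for missing surfaces, can be sketched as follows. The function, its array inputs, and the z-up height convention (index 2) are illustrative assumptions, not the actual pipeline code:

```python
import numpy as np

def room_aabb(floor_pts, ceiling_pts, global_floor_heights):
    """Axis-aligned bounding box for one room.
    floor_pts, ceiling_pts: (N, 3) vertex arrays, or None when the surface
    is missing from the scan (assuming z-up coordinates, index 2 = height).
    global_floor_heights: candidate floor elevations aggregated over the scan.
    Returns (min_xyz, max_xyz), or None when neither surface is available."""
    if floor_pts is None and ceiling_pts is None:
        return None  # no reliable vertical evidence: discard the room
    pts = np.vstack([p for p in (floor_pts, ceiling_pts) if p is not None])
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    if floor_pts is None:
        # Fallback: highest global floor elevation still below the ceiling.
        below = [h for h in global_floor_heights if h < ceiling_pts[:, 2].max()]
        if below:
            lo[2] = max(below)
    return lo, hi
```

When the ceiling is missing, the upper bound here simply defaults to the highest available point, matching the fallback described in the text; object boxes would then be computed the same way from each object's vertices.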
Following 3D-CLR (Hong et al., 2023a), we build this spatial substrate from HM3D's richly annotated indoor scans. We rely on the semantic surface mesh that accompanies each scene to calculate the room and objects' locations. The mesh scan has a unique (24-bit) hexadecimal color for every surface triangle that provides a semantic table linking each color to a surface label (e.g., floor or ceiling) and a room identifier. We first derive an axis-aligned bounding box for every room in each HM3D scene. We query the semantic table to retrieve, for every room identifier, the hex colors assigned to its floor and ceiling surfaces. Before processing individual rooms, we aggregate the minimum heights of all floor surfaces in the entire scan, producing a global set of candidate floor elevations. For each room we then load the point clouds of its floor and ceiling. When both are available, the vertical bounds are given by the floor's lowest point and the ceiling's highest point. If the floor is missing, the lower bound is set to the highest global floor elevation that still lies below the room's ceiling; if the ceiling is missing, the upper bound defaults to the highest point in the available cloud (i.e., the surface immediately above the floor). Rooms for which both surfaces are absent
are discarded because no reliable vertical evidence is present. Horizontal limits are simply the minimum and maximum coordinates of the room's floor and ceiling points, and together these six coordinates constitute the room's axis-aligned bounding box. Based on the bounding boxes for each room, we calculate the bounding boxes of objects in that room by gathering the corresponding vertices and computing the bounding box of each object by taking the minimum and maximum coordinates along the three axes. We merge room-level and object-level bounding boxes by their shared room index. After the aforementioned processing, the dataset comprises 182 distinct 3D scenes covering a total of 2,602 rooms.

C Embodied Task Data Examples
In this section, we provide complete embodied task trajectories. Here is an example of an in-the-wild task:
Task: Rearrange desk items by size. The items to be rearranged are: vase, box, and printer.
Trajectory: "I’m in room 10. I will start exploring the room for the items I need.", "I see a printer(0) on the desk(0). I also see a box(0) on the cabinet(0).", "I need to remember their locations and sizes for comparison later.", "<GO TO NEW ROOM>", "Now I’m in room 11. I’m exploring this room for the items.", "I don’t see a vase in the room, but I see a printer(0) on a rack(1).", "I will remember these objects and their locations.", "<GO TO NEW ROOM>", "Now I’m in room 6. I’m exploring this room.", "I do not see any of the items I am looking for.", "<GO TO NEW ROOM>", "I’m in room 8. I am exploring this room.", "I see flower vase(0) on the floor. I will remember the object and its location.", "<GO TO NEW ROOM>", "I’m in room 7. I am exploring this room.", "I do not see any of the items I am looking for.", "<GO TO NEW ROOM>", "I’m in room 9. I am exploring this room.", "I do not see any of the items I am looking for.", "<GO TO NEW ROOM>", "Now I am in room 5.
I am exploring this room.", "I do not see any of the items I am looking for.", "<GO TO NEW ROOM>", "Now I am in room 2. I am exploring this room.", "I see a box(0) on the armchair(1). I will remember these objects and their locations.", "<GO TO NEW ROOM>", "I’m in room 1. I do not see any items that are part of the rearrangement.", "<GO TO NEW ROOM>", "Now I’m in room 12. I will start exploring this room.", "I think I’ve seen the vase, printer, and box in other rooms, so I’ll head back to room 8 to pick up the vase first.", "<GO TO ROOM(8)>", "I need to pick up flower vase(0) in this room first. I pick up flower vase(0).", "<PICK UP flower vase(0) from room(8) in room(8)>", "I’m carrying flower vase(0) to room 10 where I saw printer and box.", "<GO TO ROOM(10)>", "Now I am in room 10. I will put flower vase(0) on the desk(0) for now so
https://arxiv.org/abs/2505.22657v1
I can pick up box(0).", "<PUT DOWN flower vase(0) from room(8) on desk(0) in room(10)>", "Now I will pick up box(0) from cabinet(0).", "<PICK UP box(0) from room(10) in room(10)>", "Now I will put box(0) on the desk(0) so I can pick up printer(0).", "<PUT DOWN box(0) from room(10) on desk(0) in room(10)>", "Now I will pick up printer(0) from beside the desk(0).", "<PICK UP printer(0) from room(10) in room(10)>", "I need to compare the sizes of the vase, box and printer now to determine the rearranging order.", "I recall that printer(0) in this room had the largest bounding box dimensions based on my earlier observation. I need to put down printer(0).", "<PUT DOWN printer(0) from room(10) on desk(0) in room(10)>", "Now I will pick up box(0).", "<PICK UP box(0) from room(10) in room(10)>", "I remember that box(0) from cabinet(0) had medium bounding box dimensions, so I will put it on the desk.", "<PUT DOWN box(0) from room(10) on desk(0) in room(10)>", "Now I will pick up flower vase(0).", "<PICK UP flower vase(0) from room(8) in room(10)>", "I remember that flower vase(0) had the smallest bounding box dimensions, so I will put it on the desk.", "<PUT DOWN flower vase(0) from room(8) on desk(0) in room(10)>", "I will now arrange the objects from small to large: flower vase(0), box(0) and printer(0) on the desk(0).", "Now I pick up the medium object, box(0).", "<PICK UP box(0) from room(10) in room(10)>", "I will put the medium object on the left of the largest object, printer(0).", "<PUT DOWN box(0) from room(10) on desk(0) in room(10)>", "Now I pick up the smallest object, flower vase(0).", "<PICK UP flower vase(0) from room(8) in room(10)>", "I will put the smallest object on the left of the medium object, box(0).", "<PUT DOWN flower vase(0) from room(8) on desk(0) in room(10)>", "The objects are now arranged from small to large: flower vase(0), box(0) and printer(0) on the desk(0). Task Complete." 
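These action tokens follow a small, regular grammar. As a minimal sketch (the regular expressions and the `parse_action` helper are our own illustration, not part of the released pipeline), the four commands can be parsed as follows:

```python
import re

# Patterns for the four action tokens used in the trajectories above.
# The exact grammar is an assumption inferred from the printed examples.
ACTION_PATTERNS = {
    "go_to_room": re.compile(r"<GO TO ROOM\((\d+)\)>"),
    "go_to_new_room": re.compile(r"<GO TO NEW ROOM>"),
    "pick_up": re.compile(
        r"<PICK UP ([\w ]+)\((\d+)\) from room\((\d+)\) in room\((\d+)\)>"),
    "put_down": re.compile(
        r"<PUT DOWN ([\w ]+)\((\d+)\) from room\((\d+)\) "
        r"on ([\w ]+)\((\d+)\) in room\((\d+)\)>"),
}

def parse_action(step: str):
    """Return (action_name, captured_groups) for an action token, else None."""
    for name, pattern in ACTION_PATTERNS.items():
        match = pattern.fullmatch(step.strip())
        if match:
            return name, match.groups()
    return None  # a plain-language "thought" step, not an action

print(parse_action("<PICK UP flower vase(0) from room(8) in room(8)>"))
# ('pick_up', ('flower vase', '0', '8', '8'))
```

Steps that match no pattern are treated as free-form reasoning rather than actions, which matches how the trajectories interleave thoughts and commands.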
Here is an example of an in-domain task:

Task: Preparing for Cooking Stir-fried Tomato and Scrambled Eggs. You need to prepare the ingredients, seasonings and cooking tools.

Trajectory: "I’m at room 4. It seems to be living room, I need to prepare ingredients and seasonings and cooking tools and setup them in the kitchen", "There is seasonings on the table.", "<PICK UP seasonings(0) from room(4) in room(4)>", "No other things found for my task", "<GO TO NEW ROOM>", "I’m at room 5. It seems to be dining room. I saw an apron on dining table.", "I need the apron but I have seasonings(0) on my hand. Lets find kitchen to put the seasonings first and go back for apron", "<GO TO NEW ROOM>", "I’m at room 8. It seems to be kitchen with two cabinets. I found tomatoes and eggs on the cabinet(1) that is far away from me and cooking pan in the sink.", "There are two countertops in this room. countertop(1) is closer to stove(0). I should setup on countertop(1).", "<PUT DOWN seasonings(0) from room(4) on countertop(1) in room(8)>", "<PICK UP tomatoes(0) from room(8) in room(8)>", "<PUT
DOWN tomatoes(0) from room(8) on countertop(1) in room(8)>", "<PICK UP eggs(0) from room(8) in room(8)>", "<PUT DOWN eggs(0) from room(8) on countertop(1) in room(8)>", "<PICK UP cooking pan(0) from room(8) in room(8)>", "<PUT DOWN cooking pan(0) from room(8) on stove(0) in room(8)>", "I remember I saw apron in room 5. I need to find it back.", "<GO TO ROOM(5)>", "<PICK UP apron(0) from room(5) in room(5)>", "<GO TO ROOM(8)>", "<PUT DOWN apron(0) from room(5) on countertop(1) in room(8)>", "The setup in kitchen has been prepared. Task Complete."

D Implementation Details

We implement our model based on LLaVA-3D (Zhu et al., 2024), modifying it to be compatible with Google TPUs via the PyTorch/XLA framework (Paszke et al., 2019; team, 2017–2025). We first expand the model’s context window to 8192 tokens to accommodate long-term memory inputs. We then fine-tune our proposed memory module along with the LLM decoder on our training split, initializing from LLaVA-3D’s pretrained weights. Training is conducted on 8 Google Cloud TPU v5p cores with a batch size of 256 for 1000 steps, which takes about 1 day to complete. We use the Adam optimizer with a learning rate of 2e-5 and no weight decay. Additionally, we apply a linear warmup during the initial 3% of steps, increasing the learning rate from 10^-8 to 10^-5, followed by a cosine decay schedule.

E Prompts for Gemini

As mentioned in §2.2, we prompt Gemini to generate the long-term trajectories as illustrated in Table 4, the question-answering tasks as shown in Table 5, and the caption tasks as shown in Table 6. For open-ended QA evaluation, we follow the standard LLM-as-judge protocol by prompting Gemini as illustrated in Table 7.

F Data Validation

F.1 Trajectory Validation

We implement a trajectory simulation pipeline driven by the commands listed in Table 4.
For each command, the simulator records the agent’s current room and the full set of objects it is holding, then updates the set of objects in each room to reflect pick-up and put-down actions. A pick-up removes the specified object (along with any nested items) from the room the agent occupies and adds it to the agent’s hand; a put-down removes the object from the agent’s hand and places it into the designated room. The pipeline validates each command against three criteria: (1) the agent’s location; (2) the referenced object; and (3) the correctness of pick-up and put-down actions. For location validation, a command is marked as invalid if the agent attempts to pick up an object from a room that does not match its current room, or tries to drop an object into a room other than the one it currently occupies. Additionally, if the agent tries to visit a room that does not exist in the scene, or attempts to enter a new room when all rooms have already been explored, the trajectory is also considered invalid. For object validation, a pick-up command is invalid if the target object does not exist in the current room, and a put-down command is invalid if the agent is not currently holding the specified object.
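The location, object, and hand checks can be sketched as a small replay simulator. The tuple-based command format and the `validate_trajectory` helper below are simplifying assumptions of ours, not the paper’s actual implementation:

```python
def validate_trajectory(rooms, actions):
    """Replay simplified commands against per-room object sets.

    rooms: dict mapping room id -> set of object names, e.g. {8: {"flower vase(0)"}}
    actions: tuples ("go_to", room), ("pick_up", obj, room), ("put_down", obj, room)
    Returns True iff every location, object, and hand check passes.
    """
    current = None   # room the agent currently occupies
    hand = None      # at most one held object at a time
    for act in actions:
        if act[0] == "go_to":
            room = act[1]
            if room not in rooms:                 # room must exist in the scene
                return False
            current = room
        elif act[0] == "pick_up":
            _, obj, room = act
            if room != current:                   # location check
                return False
            if obj not in rooms[room]:            # object check
                return False
            if hand is not None:                  # only one object in hand
                return False
            rooms[room].remove(obj)
            hand = obj
        elif act[0] == "put_down":
            _, obj, room = act
            if room != current or hand != obj:    # must hold obj in this room
                return False
            rooms[room].add(obj)
            hand = None
    return hand is None                           # nothing left in hand at the end
```

A trajectory passes only if every command is valid and the agent’s hand is empty at the end, mirroring the criteria described in this appendix.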
For pick-up and put-down validation, the agent is allowed to hold only one object at a time. A command is considered invalid if the agent attempts to pick up an object while already holding one, or tries to put down an object when its hand is empty. Finally, after all commands have been executed, if the trajectory ends with the agent still holding an object that was never put down, the entire trajectory is marked as invalid.

F.2 Human Validation

As mentioned in §2.3, after automatic trajectory validation, we further conduct human validation, in which four student experts in the field manually inspect each benchmark example.

System message
You are an AI assistant and task generator for a 3D embodied agent operating in a multi-room environment. The environment provides detailed object instance information, including bounding boxes and IDs. Your goal is to generate a complex task that requires the agent to explore multiple rooms, navigate, and crucially use long-term memory to recall details observed earlier.

Prompt
1. Environment and Object Information
Object Representation: Each object is given with a bounding box in the format: “<object_name>(num)”: [x_min, y_min, z_min], [x_max, y_max, z_max]. Here, (num) indicates the ID, with (0) being the closest to the origin [0,0,0]. IDs reset for each room (e.g., sofa(0) in room 2 and sofa(0) in room 4 if each room has one sofa).
Actions Available:
<GO TO ROOM(id)>: Navigate to a room that has already been visited.
<GO TO NEW ROOM>: Navigate to a new, unexplored room (and unlock its objects). Do not use this for rooms that have been visited before.
<PICK UP object_name(id) from room(id) in room(id)>: Pick up an object that originally belongs to a specific room while in that same room.
<PUT DOWN object_name(id) from room(id) on object_name(id) in room(id)>: Place an object (that originally belongs to a room) onto another object (such as a table or floor) in a room.
New Objects: You can add extra objects to diversify the task. Important: Use only object names from the provided new_objects_name_list. If a room already has an object with the same name, the new object should have a new ID (e.g., if lamp(0) exists, the added one should be lamp(1)). These extra objects are only for task design; the agent’s trajectory should not mention adding them.
2. Task Design Requirements
Multi-Room Exploration: Design a task that spans several rooms. The room order (given in a Room Order list) should be chosen so that necessary items are distributed across rooms. The agent should explore every room in the specified order.
Long-Term Memory and Implicit Cues: Do not simply list all items as a checklist at the start. Instead: Provide a vague overall goal (e.g., “prepare a meal”). Later in the trajectory, have the agent recall these earlier observations when the need arises. Ensure the agent must remember something seen long ago rather than simply following an explicit list.
Update Memory and make new decisions based on your current observations: The agent originally planned to use one object for completing its task,
but couldn’t find it after exploring the rooms. It has to switch to another similar object to complete its task.
Inventory and Action Constraints: The agent can only hold one item at a time. Never perform consecutive PICK UP or PUT DOWN actions. If the agent holds an item, it must put it down before picking up another. When temporarily storing an object (e.g., on a table), include a “thought” explaining why the object is being set down and later recalled.
3. Reasoning and Object Comparisons: If your task requires choosing a specific object instance (e.g., selecting table(1) because it is bigger than table(0)), compare their bounding boxes and explain your choice in the trajectory.
For clarity, consider these examples: {In-context examples}
——————-
Here is the scene information: {Input scene information}

Table 4: Prompt template for generating task trajectories. {In-context examples} are in-context examples. {Input scene information} are scene, room and object semantics along with their bounding boxes.

We render multi-view images of the entire scene using the simulator and verify whether the benchmark annotations accurately correspond to the simulated environment, as illustrated in Figure 5.

Prompt
You are an AI assistant / task generator in the room. All object instances in this 3D scene are given, along with their bounding boxes and ids. Each object’s bounding box is represented by a 3D coordinate ‘<obj_name>(num)’: [x_min, y_min, z_min], [x_max, y_max, z_max] with units of meters, representing the left-bottom corner and the right-top corner coordinates. You will also receive a trajectory composed of the following tokens and reasoning chains.
<GO TO ROOM(id)>: navigates back to a specific room (id). This can only be done if the agent has already been to this room.
<PICK UP object_name(id) from room(id) in room(id)>: Pick up an object that originally belongs to a specific room while in that same room.
<PUT DOWN object_name(id) from room(id) on object_name(id) in room(id)>: Place an object (that originally belongs to a room) onto another object (such as a table or floor) in a room.
<GO TO NEW ROOM>: navigates to a new room you haven’t explored and unlocks objects there.
This trajectory is what the agent has executed in the past. You need to propose several questions and answers that focus on the reasoning abilities of the long-term memory of the agent. These reasoning questions should focus on what has changed temporally or spatially in the agent’s memory. It is important that these changes challenge the agent’s memory. For example, the questions should cover object counting, spatial relations, comparisons between objects across rooms, long-term multi-room layout, and long-term multi-room object navigation. Remember that spatial memory is important: you should design questions about the 3D object spatial relations and layout in the room that require the agent to perform hard reasoning for the final answer.
For clarity, consider these examples: {In-context examples}
——————-
Here is the scene information: {Input scene information}
Here is the agent’s trajectory: {Input agent’s trajectory}

Table 5: Prompt template for generating QA data. {In-context examples} are in-context examples. {Input scene information}
are scene, room and object semantics along with their bounding boxes. {Input agent’s trajectory} is the 3D agent’s explored trajectories and action chains.

G Evaluation Setup Details

3D-LLM Similar to the 3D-LLM work (Hong et al., 2023b), we use their direct reconstruction method to extract the 3D features from each scene in our training data. To process our long-term memory data, which requires multi-scene input for each task, we feed each room in the task through the 3D-LLM Q-Former head independently to get a separate 32-token dense representation of each room, with per-room 3D positional embeddings injected into the features. We then concatenate the representations before feeding the input into the frozen flan-t5-xl (Chung et al., 2022) backbone, as in the original work. The 3D-LLM model also includes learned location tokens used to describe certain locations within each room in the scene. To fit 3D-LLM to our task data, we substitute the location tokens with our specific interaction tokens (e.g., <GO TO ROOM>, used by all models in our experiments) and train the model to learn the new tokens, staying consistent with the higher-level interaction used across our training data. Analysis of the 3D-LLM evaluation output indicated that the model’s primary struggle was retaining long-term memory of semantic observations in the scene, so we prioritized aligning 3D-LLM with the high-level long-term memory representation in our data over low-level spatial understanding of the scene. Our longer task data input also required truncation to fit within the 512-token context length of 3D-LLM’s flan-t5-xl backbone. We retain the task description and move the question to the beginning of the prompt for the QA data to ensure the model still receives the information necessary to understand its tasks. The longer trajectory of past events is then the only information that gets truncated before being fed into the T5 encoder.
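This truncation strategy can be sketched as follows; whitespace tokenization and the `build_prompt` helper are simplifying assumptions of ours standing in for the real T5 tokenizer, and dropping the oldest events first is likewise an assumption:

```python
def build_prompt(task, question, trajectory, max_tokens=512):
    """Keep the task description and question intact; truncate only the trajectory.

    The question is moved to the front so it survives truncation; the oldest
    trajectory events are dropped first when the budget is exceeded.
    """
    header = f"{question} {task}".split()
    budget = max_tokens - len(header)
    words = " ".join(trajectory).split()
    if len(words) > budget:
        words = words[len(words) - budget:]   # keep the most recent events
    return " ".join(header + words)
```

Because the header is prepended before truncation, the task and question always fit, and only the trajectory shrinks to the remaining token budget.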
Prompt
You are provided with a scene description containing multiple rooms. Each room includes a list of objects along with their positions in the room, represented by bounding boxes. Each object’s bounding box is defined by a 3D coordinate in the format: <object_name>(num): [x_min, y_min, z_min], [x_max, y_max, z_max], with units in meters (defining the left-bottom and right-top corners). Your task is to generate an object caption for each room in the form of a coherent, descriptive paragraph that conveys the 3D spatial arrangement and relative positions of all objects within that room. Then, you will receive the object descriptions and caption for the current 3D room you are in. You will also be provided with the previous rooms’ captions. Your task is to generate new captions covering a summarization of the common features across all rooms and the important differences, based on your current room. The reason for generating the new caption is to remind the agent of what it observed in previous rooms, since those memories can help the agent in the current room. The past objects and observations should be related to the current room by examining the
summarization of common things and differences.
For clarity, consider these examples: {In-context examples}
——————-
Here is the scene information: {Input scene information}
Here is the current room you are in and the previous rooms you visited: {Input agent’s location}

Table 6: Prompt template for generating caption data. {In-context examples} are in-context examples. {Input scene information} are scene, room and object semantics along with their bounding boxes. {Input agent’s location} is the location of the current room in the scene and the past explored rooms.

For finetuning on our data, we use the hyperparameters provided by 3D-LLM and finetune until the model loss stops decreasing. Due to compute limitations, we trained on the captioning task for 15 epochs and the question-answering task for 20 epochs, and allocated most of the compute time to the embodied task, which we trained for 75 epochs.

3D-Mem We benchmark 3D-Mem (Yang et al., 2025b) on the question-answering and captioning splits of 3DMEM-BENCH. 3D-Mem is a snapshot-based 3D memory architecture originally developed for embodied exploration and reasoning; it keeps two complementary stores: memory snapshots, a compact set of multi-view RGB-D frames with per-object bounding boxes summarizing the areas the agent has inspected, and frontier snapshots, boundary views that suggest where useful new information may be found next. In its native setting, the agent navigates an unfamiliar scene by selecting the frontier view most likely to advance its task and then answers visual questions using the most relevant memory snapshots. Because our evaluation focuses on post-exploration reasoning rather than active exploration, we disable the frontier component and retain only the memory snapshots. For these two tasks, the system captures memory snapshots in each room from the room center, and completes the QA and captioning based on the memory snapshots of all the explored rooms.
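Answering from stored snapshots amounts to retrieving the most relevant views for a given question. The toy cosine-similarity retrieval below is purely illustrative (our own sketch over hypothetical embeddings, not 3D-Mem’s actual code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k_snapshots(question_emb, snapshots, k=2):
    """snapshots: list of (room_id, embedding); return ids of the k most similar."""
    ranked = sorted(snapshots, key=lambda s: cosine(question_emb, s[1]), reverse=True)
    return [room_id for room_id, _ in ranked[:k]]
```

In this simplified view, each explored room contributes one snapshot embedding, and the answerer conditions on the top-k matches rather than on every stored view.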
H Qualitative Examples

We provide qualitative examples in Figure 6. It demonstrates that 3DLLM-MEM can maintain a long-term memory and perform complex tasks in embodied environments. More examples can be found in the supplementary materials.

System message
Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question. Your evaluation should consider correctness and helpfulness. You will be given a reference answer and the assistant’s answer. Your evaluation should focus on the assistant’s answer to the second question. Begin your evaluation by comparing the assistant’s answer with the reference answer. Identify and correct any mistakes. Be as objective as possible. After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: "[[rating]]", for example: "Rating: [[5]]".

Prompt
<|The Start of Reference Answer|>
### User: question_1
### Reference answer: ref_answer_1
### User: question_2
### Reference answer: ref_answer_2
<|The End of Reference Answer|>
<|The Start of Assistant A’s Conversation with User|>
### User: question_1
### Assistant A: answer_1
### User: question_2
### Assistant A: answer_2
<|The End of Assistant A’s Conversation with User|>

Table 7: Prompt template for open-ended QA evaluation following the standard LLM-as-judge protocol.

Object Captioning
Room 8 contains distinct sleeping, working,
and lounging zones. Bed(0) extends along one side, with pillow(0) near its head and cabinet(1) beside it, which supports decoration(0) and is flanked by book(0) and book(1). Near the foot of bed(0), yoga mat(0), bag(0), and box(0) form a small activity area. At the far end, wardrobe(0) stands near wall(4), next to beanbag chair(0) and exercise ball(0), with clothes(0) draped nearby, creating a relaxed corner. The office area, positioned across from bed(0), includes desk(0), office chair(0), and cabinet(0), with trashcan(0) nearby and ceiling lamp(0) overhead. Window(0) on the parapet(0) brings in natural light. The arrangement ensures functional use of space while maintaining clear separation between different activities.

Question Answering
Object Counting
●Q: How many chairs are there in Room 8?
●A: There are 2 chairs in total: one office chair(0) and one beanbag chair(0).
Spatial Relation
●Q: If you are sitting on the office chair(0) working at the desk(0) in Room 8, where is the window(0) relative to you?
●A: The window(0) is behind you to the left.
Comparative QA
●Q: Which room has more books, Room 8 or Room 9?
●A: Room 8 has more books. It contains two books, book(0) and book(1), both placed on top of the cabinet(0) near the desk(0).
Object Navigation
●Q: If you are sitting on the beanbag chair(0) in Room 8, how do you get to the books?
●A: You need to stand up, turn left, and walk across the room toward the cabinet(0) near the desk(0), where the books are placed.
Room Layout
●Q: In Room 8, what is the relative layout of the bed(0), desk(0), and office chair(0)? Which one is closest to the window(0)?
●A: The bed(0) is placed along the side wall near the window(0), the desk(0) is located in the far corner of the room along wall(5), and the office chair(0) is positioned directly in front of the desk(0). Among these, the bed(0) is closest to the window(0), which is embedded in the wall beside it.
Figure 5: Example of human annotators manually checking data quality on the QA and captioning tasks through multiple rendered multi-view images from each room (Room 8 and Room 9 shown).

Figure 6: Qualitative example of 3DLLM-MEM. The task instruction is: Prepare a cozy reading nook in the living room with two books and a teacup. In images (1) and (2), the agent explores the environment randomly, forming an initial memory of the scene. After receiving the task instruction, it recalls its memory and navigates to the bedroom to pick up a book from the cabinet, as shown in images (3) and (4). The agent then returns to the living room and places the book on the table in front of the sofa (image 5). Unable to recall any additional books, the agent resumes exploration and finds a second book on the bed, which it picks up (image 6) and stacks on top of the first book (image 7).
arXiv:2505.22660v1 [cs.LG] 28 May 2025

Maximizing Confidence Alone Improves Reasoning

Mihir Prabhudesai∗, Lili Chen∗, Alex Ippolitin∗, Katerina Fragkiadaki, Hao Liu, Deepak Pathak
Carnegie Mellon University

Abstract

Reinforcement learning (RL) has enabled machine learning models to achieve significant advances in many fields. Most recently, RL has empowered frontier language models to solve challenging math, science, and coding problems. However, central to any RL algorithm is the reward function, and reward engineering is a notoriously difficult problem in any domain. In this paper, we propose RENT: Reinforcement Learning via Entropy Minimization, a fully unsupervised RL method that requires no external reward or ground-truth answers, and instead uses the entropy of the model’s underlying distribution as an intrinsic reward. We find that by reinforcing the chains of thought that yield high model confidence on its generated answers, the model improves its reasoning ability. In our experiments, we showcase these improvements on an extensive suite of commonly-used reasoning benchmarks, including GSM8K, MATH500, AMC, AIME, and GPQA, and models of varying sizes from the Qwen and Mistral families. The generality of our unsupervised learning method lends itself to applicability in a wide range of domains where external supervision is limited or unavailable.

1 Introduction

Imagine you’re taking an exam. Once it begins, no new information is available and no external help can be sought. With only your own reasoning to rely on, how do you tackle a difficult problem? You might make an initial attempt, assess your confidence in the answer, and revise your reasoning until you feel sufficiently certain.
Of course, confidence is not a guarantee of correctness, but in the absence of feedback, it is often the only intrinsic signal we have to guide further thought. In such settings, humans tend to optimize for confidence, or equivalently, to reduce uncertainty.

Figure 1: Overview of RENT: Reinforcement Learning via Entropy Minimization. For each response, we use the model’s underlying confidence (negative entropy) as a reward for reinforcement learning. This enables the model to learn without any external reward or ground-truth answers.

∗Equal contribution. Preprint.

In machine learning, uncertainty is commonly quantified via entropy, a measure of how peaked or diffuse a probability distribution is. Language models output distributions over tokens, and the entropy of these distributions reflects the model’s confidence: lower entropy implies more confident predictions. Yet despite the growing use of language models in reasoning tasks, current approaches to improvement still rely heavily on external supervision, rewarding models based on correctness with respect to ground-truth labels [ 10,37]. This dependence is often impractical, particularly in real-world or open-ended scenarios where supervision is scarce or unavailable. To address this, we propose RENT: Reinforcement Learning via Entropy Minimization, a fully unsupervised reinforcement learning method that improves reasoning performance by using the model’s own confidence as a reward. Specifically, we define the reward as the negative entropy of the model’s predicted token
distributions. This signal is dense, general, and easy to compute, requiring no ground-truth answers. Importantly, not all parts of the response contribute equally to final performance. Through empirical analysis, we find that minimizing entropy over tokens near the end of the reasoning chain, especially those corresponding to the final answer, correlates most strongly with improved accuracy. In contrast, early tokens in the response show little correlation. This suggests that as the model approaches its final answer, it increasingly relies on its own confidence to guide reasoning, so encouraging confidence in these final steps is key to improving overall performance. We demonstrate RENT’s effectiveness across diverse reasoning benchmarks, including GSM8K [ 6], MATH500 [ 14,25], AMC and AIME [ 23], and GPQA [ 33]. Our method scales across model families (Qwen and Mistral) and sizes and consistently improves performance. 2 Related Work 2.1 Reinforcement Learning for Reasoning Initially, reinforcement learning (RL) for language models was mostly used for learning from human preferences [ 5] and, traditionally, the RL optimization was done with algorithms such as PPO [ 36]. With the capabilities of language models continuing to improve, researchers have begun to explore the possibility of using RL to improve the performance of language models on reasoning tasks such as math [ 6,14,23], science [ 13,33], or coding [ 24,4] problems. In these settings, the model is prompted to generate a chain-of-thought [ 48] and final answer, and receives a reward based on how closely its final answer matches the ground-truth answer. These efforts present RL as an alternative to search-based approaches to chain-of-thought reasoning such as Tree of Thoughts [53] and Graph of Thoughts [ 2]. Related lines of work include training a reward model to give feedback for every step in the chain of thought, and training RL models to encourage self-correcting behaviors in language models. 
Examples of RL methods in this space include Zelikman et al. [56], Singh et al. [39], Kumar et al. [21], Qu et al. [31], Uesato et al. [43], Lightman et al. [25], Wang et al. [47]. At scale, DeepSeek [ 10,37] proposed an open-source model that showed OpenAI o1 [ 16]-level reasoning by performing RL in this manner, using a new algorithm GRPO [37]. 2.2 Confidence and Calibration Confidence measures quantify how certain a model is that its generated output is correct [ 55,40]. In order to evaluate the confidence of machine learning models, it is necessary also to discuss calibration [19,45] - i.e., how aligned those confidences are with actual correctness. As language models are increasingly trusted to make important decisions, providing users with a reliable confidence measure would be useful in many situations [ 8,27,44,15,17,12,40]. As such, researchers have developed various confidence metrics for modern deep learning models and studied the extent to which they are calibrated. These include both methods that assume access to the model’s weights [ 11, 18,51,40] and methods that estimate confidence via prompting alone [ 49,8,42,50,52]. In our paper, we use the model’s confidence to iteratively improve its own performance via reinforcement learning. 2.3 Test-Time Adaptation Test-time adaptation is
where a model is updated using data from the test distribution, without access to ground-truth labels. The goal is to improve performance in scenarios where there is a distribution shift between training and testing environments. Methods for adapting without labels include normalization techniques that recalibrate feature statistics at test time [ 32,41,28,35,29]. The most relevant work to ours is Tent [ 46], which performs entropy minimization on model predictions during test time. This approach assumes that predictions on test data should be low in entropy if the model is well-adapted to the new distribution. Tent builds on earlier work that uses entropy minimization as a regularization strategy in semi-supervised learning [ 9,22,1] and domain adaptation contexts [ 28,38,34], where encouraging confident predictions has proven effective for improving generalization. Recently, TTRL [ 57] proposed test-time reinforcement learning using majority voting as a reward. Compared to entropy, majority voting is a sparse reward and much less general; for example, it cannot be applied to long-form free-response questions.

2.4 Unsupervised Reinforcement Learning

Unsupervised RL trains agents using intrinsic rewards like novelty, entropy, or mutual information, enabling skill acquisition without extrinsic feedback. Prior methods include ICM and RND for prediction error [ 30,3], APT [ 26] and ProtoRL [ 54] for entropy maximization, and DIAYN, APS, and SMM for skill discovery via mutual information [ 7,26,20]. An interesting observation is that while exploration methods primarily maximize entropy, we instead minimize it by reinforcing high-confidence outputs, and find that for language models, this leads to better reasoning performance without any external supervision.

3 Method

3.1 Reinforcement Learning for Language Models

The goal of reinforcement learning (RL) is to train a policy which generates actions that maximize the cumulative expected reward.
In the context of modern language models, the policy $\pi$ is a language model and the actions $y_{\text{pred}}$ are sampled from the distribution over a discrete vocabulary. The task is formulated as a one-step RL problem in which the model generates $y_{\text{pred}} = \pi(x)$, where $x$ is sampled from the dataset $\mathcal{D} = \{(x, y_{\text{target}})\}$, and receives some reward for the generation. Typically, the ground-truth answer $y_{\text{target}}$ is used to give the model a reward $r = R(y_{\text{target}}, y_{\text{pred}})$. One reward function currently in use is simple string matching, where $R(y_{\text{target}}, y_{\text{pred}}) = \mathbb{1}\{y_{\text{target}} = y_{\text{pred}}\}$. Our work focuses instead on unsupervised reinforcement learning, which does not require external supervision for the reward. Specifically, $y_{\text{target}}$ is not used in the reward $r = R(y_{\text{pred}})$, and we do not assume access to it at any point in training.

3.2 Group Relative Policy Optimization (GRPO)

To optimize the policy, we adopt GRPO [ 37], a reinforcement learning algorithm that emphasizes relative rather than absolute performance. Instead of directly maximizing the reward of the current policy, GRPO evaluates the policy in relation to a group of baseline policies. This comparison helps improve learning stability, especially in settings with noisy or unsupervised reward signals. Let $\pi$ denote the current policy, and let $\{\pi_1, \pi_2, \ldots, \pi_K\}$ be a fixed or evolving set of reference policies. The GRPO objective is defined as:

$$\mathcal{L}(\pi) = \mathbb{E}_{y_{\text{pred}} \sim \pi(x)}[R(y_{\text{pred}})] - \frac{1}{K} \sum_{i=1}^{K} \mathbb{E}_{y_{\text{pred}} \sim \pi_i(x)}[R(y_{\text{pred}})]$$

The first term represents the expected reward
https://arxiv.org/abs/2505.22660v1
under the current policy π, while the second term computes the average reward across the reference group. The learning signal is thus the improvement in reward relative to these baselines. For more details, we refer the reader to Shao et al. [37].

3.3 Entropy Reward

For a given prompt x, the model generates a response y_pred = (y_pred,1, . . . , y_pred,T) = π(x), where T is the number of tokens in the response. At each token t ∈ {1, . . . , T}, the model outputs a probability distribution p_t over the vocabulary V, i.e., p_t(v) = P(y_t = v | x, y_<t). The entropy of this distribution measures the model's uncertainty in predicting the next token and is given by:

H(p_t) = − Σ_{v∈V} p_t(v) log p_t(v)

[Figure 2: Performance of our method on GSM8K, MATH500, AMC, AIME, and GPQA. Across benchmarks and models, we find that entropy minimization alone is an effective reward for improving the reasoning ability of language models. ("Instruct" is omitted for some models for brevity.)]

To compute the total entropy of the response, we average the entropies across all tokens. The total entropy H(π(x)) provides a measure of the overall uncertainty in the model's response.
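As a concrete illustration, the per-token entropy and the averaged negative-entropy reward can be computed directly from the model's per-token probability distributions. Below is a minimal NumPy sketch; the function names are our own, not from the paper's code:

```python
import numpy as np

def token_entropies(probs):
    """Per-token entropy H(p_t) = -sum_v p_t(v) log p_t(v).

    probs: array of shape (T, V); each row is a distribution over the vocabulary.
    """
    eps = 1e-12  # avoid log(0) for zero-probability tokens
    return -np.sum(probs * np.log(probs + eps), axis=1)

def rent_reward(probs):
    """Negative mean token entropy: higher when distributions are more peaked."""
    return -float(np.mean(token_entropies(probs)))
```

A uniform distribution over V tokens attains the maximum entropy log V, while a near one-hot distribution has entropy close to zero, so `rent_reward` favors confident (peaked) predictions.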
Higher entropy indicates greater uncertainty or more diverse token predictions, while lower entropy suggests more confident and peaked distributions at each token. We use the negative entropy of the predicted token distribution as a reward signal:

R(y_pred) = −H(π(x)) = (1/T) Σ_{t=1}^{T} Σ_{v∈V} p_t(v) log p_t(v)

This reward encourages the model to produce more confident and peaked distributions over the vocabulary, effectively promoting lower uncertainty in its predictions. Within the RL framework, the learning objective becomes maximizing the expected reward over the data distribution:

max_π E_{x∼D} [ E_{y_pred∼π(x)}[R(y_pred)] ]

By optimizing this objective, the model learns to generate responses with lower entropy without relying on external supervision or labeled target responses.

[Figure 3: Accuracy and confidence over the course of training (MATH500 and AMC). The trends indicate that accuracy and confidence are indeed highly correlated and therefore it is natural to use confidence as a reward.]

4 Experiments

4.1 Experimental Setup

Benchmarks. We train a model with reinforcement learning on each dataset independently. We conduct our experiments on the following commonly-used benchmarks for evaluating the reasoning capabilities of large language models:

•GSM8K [6]: GSM8K contains 8792 grade-school math word problems. The train set contains roughly 7473 problems and the test set contains roughly 1319 problems.
•MATH500 [14, 25]: MATH [14] is a dataset containing competition math problems spanning seven categories. It contains 12500 problems, of which 7500 are used for training and 5000 are used for testing. MATH500 [25] is a subset of the MATH test set created by OpenAI by sampling uniformly at random from the test set.
•AMC [23]: The American Mathematics Competitions (AMCs) are competitions given to high school students. The specific dataset we use is
comprised of 83 problems from the 2022 and 2023 AMC12 exams, which are given to 12th grade students. Although the original problems are in multiple-choice format, the dataset presents modified versions of the problems which expect an integer solution.
•AIME24 [23]: The American Invitational Mathematics Examination (AIME) is a prestigious high school mathematics competition. It consists of 15 questions meant to be completed in 3 hours and is given to top-scoring students on the AMC exam. Each year, there are two versions of the exam which consist of distinct questions. We train on the 30 problems from both versions of the 2024 exam.
•GPQA [33]: GPQA is a dataset of 448 multiple-choice problems in biology, physics, and chemistry at the PhD level. They are intended to be "Google-proof" in the sense that they require advanced reasoning skills.

Since we are interested in test-time adaptation, and we do not assume access to the ground-truth answer, we use the same dataset for both training and evaluation. Additionally, some of the benchmarks do not have standardized train sets. The exception is GSM8K, where we use the standard train and test sets; this shows that generalization does occur and RENT is not merely overfitting to the test set.

Models. To showcase the generality of our method, we conduct experiments on a wide range of models from different model families and of varying parameter counts. We test on Mistral-7B-Instruct-v0.3, Qwen2.5-1.5B-Instruct, Qwen2.5-Math-1.5B, Qwen2.5-7B-Instruct, and Qwen2.5-Math-7B.

[Figure 4: Evaluation (by computing correlation between accuracy and confidence) of various strategies for selecting which tokens to minimize the entropy over. We find the highest correlation between accuracy and confidence in the last few tokens of the response.]
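The token-selection strategies compared in Figure 4 (and detailed in Section 4.7) all reduce to averaging the per-token entropy over some subset of response positions. The sketch below illustrates a few of them in NumPy; the naming and exact slicing are our own assumptions, not the paper's implementation:

```python
import numpy as np

def entropy_over_selection(probs, strategy="last_chunk", k=10):
    """Mean entropy over a selected subset of response tokens.

    probs: (T, V) array of per-token distributions over the vocabulary.
    strategy: which tokens to average over (illustrative subset of the
    schemes compared in Figure 4); k is the number of chunks / tokens.
    """
    eps = 1e-12
    ents = -np.sum(probs * np.log(probs + eps), axis=1)  # per-token H(p_t)
    chunk = max(1, len(ents) // k)
    if strategy == "last_chunk":
        sel = ents[-chunk:]
    elif strategy == "first_chunk":
        sel = ents[:chunk]
    elif strategy == "last_k_tokens":
        sel = ents[-k:]
    elif strategy == "random":
        rng = np.random.default_rng(0)
        sel = rng.choice(ents, size=min(k, len(ents)), replace=False)
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return float(np.mean(sel))
```

For a response whose early tokens are uncertain and whose final tokens are confident, `last_chunk` yields a lower mean entropy than `first_chunk`, which is the regime where the paper reports the strongest correlation with accuracy.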
Implementation details. For the RL optimization we use GRPO [37] with a learning rate of 1×10^−6 and the Adam optimizer. The batch sizes and sampling hyperparameters may vary among models and datasets. We provide a full list of hyperparameters in the Appendix.

4.2 Main Results

Figure 2 shows the performance of models before and after entropy minimization on GSM8K, MATH500, AMC, AIME24, and GPQA. Note that all models except the Math models are Instruct models (e.g., Qwen2.5-1.5B refers to Qwen2.5-1.5B-Instruct). Across model families, model sizes, and benchmarks, entropy minimization allows large language models to improve their reasoning skills without any external supervision. On the Math models such as Qwen2.5-Math-1.5B and Qwen2.5-Math-7B, the base model often struggles at following instructions, so the initial score is at or near zero, and the boost from entropy minimization is therefore quite large. On models that are already proficient at instruction following, we still see strong performance improvements from entropy minimization. Given the potential pitfall of overconfidence in language models, we performed extensive experimentation to ensure empirically that entropy minimization is a robust and generalizable reward function across datasets and models.

4.3 Is It Just Formatting?

It is a well-known issue with reasoning benchmarks that language models can lose
points simply because they do not know how to put their answers in the right format. For example, GSM8K expects answers after "####" and MATH500 expects final answers to be placed in "\boxed{}". A nontrivial amount of engineering effort has gone into both designing prompts that encourage correct formatting and implementing parsers that effectively extract answers from language model responses, in attempts to mitigate this issue. Therefore, one might wonder if, instead of learning to perform complex reasoning, RENT merely encourages the model to put its answers in the right format. Table 1 shows that this is not the case. Models trained with the RENT reward outperform models trained with only a format reward, which simply assigns a binary reward based on whether the correct format is followed in the response. In some cases, the performance of our method is similar to (or even slightly worse than) just using the format reward, but it is expected that unsupervised RL methods will not always lead to significant improvements. For example, if the benchmark is so easy that the model only needs to learn the right format to achieve near-perfect scores, RENT would not outperform the format reward. Or, if the benchmark is so hard that it is beyond the model's capabilities altogether, neither method would perform well. However, across datasets and model sizes, we find a consistent improvement over using the format reward, which assures us that the model is actually learning to think through difficult problems and improve its ability to reason.

4.4 Correlation Between Entropy and Accuracy

Figure 3 shows the accuracy and confidence throughout training Qwen2.5-Math-7B and Qwen2.5-7B-Instruct on the AMC and MATH500 datasets, respectively. Critically, as the model improves its confidence via RENT, the accuracy of the model improves as well. This demonstrates the significant

Table 1: Comparison to RL with a format reward. The best result on each benchmark is indicated in bold.
RENT generally outperforms only using a format reward.

                                   GSM8K   MATH500  AMC    AIME   GPQA
Mistral-7B-Instruct-v0.3
  w/ Format reward only            0.565   0.150    0.051  0.015  0.282
  w/ RENT (Ours)                   0.577   0.168    0.068  0.033  0.286
Qwen2.5-1.5B-Instruct
  w/ Format reward only            0.637   0.558    0.255  0.054  0.269
  w/ RENT (Ours)                   0.725   0.559    0.284  0.072  0.249
Qwen2.5-Math-1.5B
  w/ Format reward only            0.149   0.625    0.442  0.116  0.282
  w/ RENT (Ours)                   0.159   0.7652   0.468  0.110  0.286
Qwen2.5-7B-Instruct
  w/ Format reward only            0.911   0.7735   0.458  0.139  0.357
  w/ RENT (Ours)                   0.900   0.823    0.501  0.270  0.368
Qwen2.5-Math-7B
  w/ Format reward only            0.280   0.749    0.570  0.230  0.349
  w/ RENT (Ours)                   0.645   0.827    0.605  0.280  0.345

correlation between answer confidence and answer accuracy, supporting our initial hypothesis that optimizing confidence will lead to improved accuracy.

4.5 Comparison to Majority Voting

Table 2: Performance of Qwen2.5-7B-Instruct with TTRL and entropy-based methods. The best result on each benchmark is indicated in bold. RENT matches TTRL on all datasets except AIME, where RENT shows a significant improvement over
TTRL.

                        GSM8K   MATH500  AMC    AIME   GPQA
Qwen2.5-7B-Instruct
  w/ TTRL               0.929   0.822    0.521  0.172  0.361
  w/ RENT (Ours)        0.900   0.823    0.501  0.270  0.368

TTRL [57] is a concurrent work to ours which uses majority voting as a reward. Table 2 shows the performance of RENT and TTRL with Qwen2.5-7B-Instruct on GSM8K, MATH500, AMC, AIME, and GPQA. RENT and TTRL perform very similarly on GSM8K, MATH500, AMC, and GPQA. However, RENT significantly outperforms TTRL on AIME, which shows that RENT is a stronger learning signal on challenging tasks where majority voting may be too sparse a reward to improve the model. This showcases the advantage of entropy minimization.

4.6 Qualitative Samples

Table 3 shows a qualitative sample from GSM8K and Table 4 shows a qualitative sample from AIME. The qualitative samples verify that the model indeed learns meaningful reasoning skills via entropy minimization. It is not merely learning to format its answer correctly, or otherwise collapsing to some other reward-hacking behavior.

4.7 Which Tokens to Minimize Entropy Over

We investigated which response tokens are the best to minimize entropy over. To do this, we evaluated the following response token selection strategies:

•last chunk - split the response into k chunks, and select all tokens in the last chunk

Table 3: The model splits the problem into reasonable steps and correctly calculates the total cost.

Prompt
system: You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
user: Jack is mad at his neighbors for blasting Taylor Swift all night, so he slashes three of their tires and smashes their front window. If the tires cost $250 each and the window costs $700, how much will Jack have to pay for the damages? Let's think step by step and output the final answer after "####".

Response
Let's break down the problem step by step.
1.
**Calculate the total cost of the slashed tires:**
Cost per tire = 250 dollars
Number of tires = 3
Total cost of tires = Cost per tire × Number of tires = 250 dollars/tire × 3 tires = 750 dollars
2. **Identify the cost of the smashed window:**
Cost of the window = 700 dollars
3. **Calculate the total damages:**
Total damages = Total cost of tires + Cost of the window = 750 dollars + 700 dollars = 1450 dollars
#### 1450

•first chunk - split the response into k chunks, and select all tokens in the first chunk
•last 10 tokens - select the last k response tokens
•random - select k random tokens from the response
•after </think> - select all tokens after the thought process (after the "</think>" tag)
•after \boxed{ - select all tokens after the boxed answer
•id_match_last - select the tokens in the model's final answer
•id_match_all - select all tokens in the model's final answer and any previous occurrences
•after = - select all tokens after the equality symbol ("=")

Figure 4 shows the initial correlation between negative entropy
and accuracy for each of these strategies on three datasets. While most token selection strategies do result in a positive correlation between confidence and accuracy, we note that the "last chunk" strategy has a significantly higher correlation compared to the "first chunk" strategy. This suggests that the most important tokens to minimize entropy over are tokens that occur later in the response. Furthermore, based on the low correlation results from the "id_match_last" and "id_match_all" strategies, we find that it is not sufficient to simply minimize the entropy of the final answer tokens; this suggests that, counterintuitively, the token-level confidence of the final answer tokens is not well-calibrated to the true response confidence/accuracy.

Table 4: The model identifies the geometric series and succinctly solves the problem afterwards.

Prompt
system: Please reason step by step, and put your final answer within \boxed{}.
user: The infinite product \sqrt[3]{10} \cdot \sqrt[3]{\sqrt[3]{10}} \cdot \sqrt[3]{\sqrt[3]{\sqrt[3]{10}}} \cdots evaluates to a real number \sqrt{m}. What is the value of m? Let's think step by step and output the final answer within \boxed{}.

Response
To find the value of the infinite product \sqrt[3]{10} \cdot \sqrt[3]{\sqrt[3]{10}} \cdot \sqrt[3]{\sqrt[3]{\sqrt[3]{10}}} \cdots, we can express each term in the product as a power of 10. The first term is \sqrt[3]{10} = 10^{1/3}. The second term is \sqrt[3]{\sqrt[3]{10}} = (10^{1/3})^{1/3} = 10^{1/9}. The third term is \sqrt[3]{\sqrt[3]{\sqrt[3]{10}}} = (10^{1/9})^{1/3} = 10^{1/27}. And so on. The infinite product can be written as: 10^{1/3} \cdot 10^{1/9} \cdot 10^{1/27} \cdots = 10^{1/3 + 1/9 + 1/27 + \cdots}. The exponent is an infinite geometric series with first term a = 1/3 and common ratio r = 1/3. The sum S of an infinite geometric series is given by: S = a/(1 − r) = (1/3)/(1 − 1/3) = (1/3)/(2/3) = 1/2. So the infinite product evaluates to: 10^{1/2} = \sqrt{10}. Thus, the value of m is 10. The final answer is: 10

5 Limitations

Fundamentally, unsupervised learning alone is relatively limited compared to methods which are able to use external supervision for learning.
Therefore, it is not surprising that our method cannot match the performance of methods that have access to the ground-truth answers. It is, of course, possible for the model to be confidently wrong. Overconfidence is a well-known issue with language models, and these calibration errors can cause RENT to fail catastrophically. It could be dangerous to deploy such an unsupervised learning method in the real world without any safeguards. However, we generally find empirically that confidence does correlate with accuracy and that performance does improve by using confidence alone. This indicates that even if the model is overconfident on some answers, it is well-calibrated overall.

6 Conclusion

We presented RENT, an unsupervised reinforcement learning method which uses entropy as a reward. In our experiments, we showed that by simply minimizing entropy, we can improve the reasoning performance of language models on GSM8K, MATH500, AMC, AIME, and GPQA. Our reward function is extremely general and can be applied to a wide range of domains and tasks. We are excited about the possibility of using entropy minimization and, more broadly, unsupervised reinforcement learning to improve the capabilities of machine learning models in
regimes where external supervision is unavailable.

References

[1] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. Advances in Neural Information Processing Systems, 32, 2019.
[2] Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17682–17690, 2024.
[3] Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. arXiv preprint arXiv:1810.12894, 2018.
[4] Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation. https://github.com/sahil280114/codealpaca, 2023.
[5] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 2017.
[6] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[7] Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070, 2018.
[8] Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, and Iryna Gurevych. A survey of confidence estimation and calibration in large language models. arXiv preprint arXiv:2311.08298, 2023.
[9] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. Advances in Neural Information Processing Systems, 17, 2004.
[10] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[11] Neha Gupta, Harikrishna Narasimhan, Wittawat Jitkrittum, Ankit Singh Rawat, Aditya Krishna Menon, and Sanjiv Kumar. Language model cascades: Token-level uncertainty and beyond. arXiv preprint arXiv:2404.10136, 2024.
[12] Jiuzhou Han, Wray Buntine, and Ehsan Shareghi. Towards uncertainty-aware language agent. arXiv preprint arXiv:2401.14016, 2024.
[13] Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.
[14] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
[15] Bairu Hou, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, and Yang Zhang. Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718, 2023.
[16] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024.
[17] Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. Active retrieval augmented generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages
7969–7992, 2023.
[18] Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022.
[19] Adam Tauman Kalai and Santosh S Vempala. Calibrated language models must hallucinate. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, pages 160–171, 2024.
[20] Seongun Kim, Kyowoon Lee, and Jaesik Choi. Variational curriculum reinforcement learning for unsupervised discovery of skills. arXiv preprint arXiv:2310.19424, 2023.
[21] Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, et al. Training language models to self-correct via reinforcement learning. arXiv preprint arXiv:2409.12917, 2024.
[22] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, page 896. Atlanta, 2013.
[23] Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Huang, Kashif Rasul, Longhui Yu, Albert Q Jiang, Ziju Shen, et al. Numinamath: The largest public dataset in ai4maths with 860k pairs of competition math problems and solutions. Hugging Face repository, 13:9, 2024.
[24] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with alphacode. arXiv preprint arXiv:2203.07814, 2022.
[25] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023.
[26] Hao Liu and Pieter Abbeel. Behavior from the void: Unsupervised active pre-training. Advances in Neural Information Processing Systems, 34:18459–18473, 2021.
[27] Potsawee Manakul, Adian Liusie, and Mark JF Gales. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896, 2023.
[28] Fabio Maria Carlucci, Lorenzo Porzi, Barbara Caputo, Elisa Ricci, and Samuel Rota Bulo. Autodial: Automatic domain alignment layers. In Proceedings of the IEEE International Conference on Computer Vision, pages 5067–5075, 2017.
[29] Zachary Nado, Shreyas Padhy, D Sculley, Alexander D'Amour, Balaji Lakshminarayanan, and Jasper Snoek. Evaluating prediction-time batch normalization for robustness under covariate shift. arXiv preprint arXiv:2006.10963, 2020.
[30] Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning, pages 2778–2787. PMLR, 2017.
[31] Yuxiao Qu, Tianjun Zhang, Naman Garg, and Aviral Kumar. Recursive introspection: Teaching language model agents how to self-improve. Advances in Neural Information Processing Systems, 37:55249–55285, 2024.
[32] Joaquin Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and N Lawrence. Covariate shift and local learning by distribution matching. Dataset Shift in Machine Learning, pages 131–160, 2008.
[33] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard
Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024.
[34] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Semi-supervised domain adaptation via minimax entropy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8050–8058, 2019.
[35] Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. Improving robustness against common corruptions by covariate shift adaptation. Advances in Neural Information Processing Systems, 33:11539–11551, 2020.
[36] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[37] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
[38] Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. A dirt-t approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735, 2018.
[39] Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J Liu, James Harrison, Jaehoon Lee, Kelvin Xu, et al. Beyond human data: Scaling self-training for problem-solving with language models. arXiv preprint arXiv:2312.06585, 2023.
[40] Claudio Spiess, David Gros, Kunal Suresh Pai, Michael Pradel, Md Rafiqul Islam Rabin, Amin Alipour, Susmit Jha, Prem Devanbu, and Toufique Ahmed. Calibration and correctness of language models for code. arXiv preprint arXiv:2402.02047, 2024.
[41] Baochen Sun, Jiashi Feng, and Kate Saenko. Correlation alignment for unsupervised domain adaptation. Domain Adaptation in Computer Vision Applications, pages 153–171, 2017.
[42] Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975, 2023.
[43] Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022.
[44] Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, and Dong Yu. A stitch in time saves nine: Detecting and mitigating hallucinations of llms by validating low-confidence generation. arXiv preprint arXiv:2307.03987, 2023.
[45] Yuvraj Virk, Premkumar Devanbu, and Toufique Ahmed. Enhancing trust in llm-generated code summaries with calibrated confidence scores. arXiv preprint arXiv:2404.19318, 2024.
[46] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. arXiv preprint arXiv:2006.10726, 2020.
[47] Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Y Wu, and Zhifang Sui. Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning. arXiv preprint arXiv:2312.08935, 2023.
[48] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[49] Liangru Xie, Hui Liu, Jingying Zeng, Xianfeng Tang, Yan Han, Chen Luo, Jing
Huang, Zhen Li, Suhang Wang, and Qi He. A survey of calibration process for black-box llms. arXiv preprint arXiv:2412.12767, 2024.
[50] Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. Can llms express their uncertainty? an empirical evaluation of confidence elicitation in llms. arXiv preprint arXiv:2306.13063, 2023.
[51] Tianyang Xu, Shujin Wu, Shizhe Diao, Xiaoze Liu, Xingyao Wang, Yangyi Chen, and Jing Gao. Sayself: Teaching llms to express confidence with self-reflective rationales. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5985–5998, 2024.
[52] Daniel Yang, Yao-Hung Hubert Tsai, and Makoto Yamada. On verbalized confidence scores for llms. arXiv preprint arXiv:2412.14737, 2024.
[53] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822, 2023.
[54] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Reinforcement learning with prototypical representations. In International Conference on Machine Learning, pages 11920–11931. PMLR, 2021.
[55] Dongkeun Yoon, Seungone Kim, Sohee Yang, Sunkyoung Kim, Soyeon Kim, Yongil Kim, Eunbi Choi, Yireun Kim, and Minjoon Seo. Reasoning models better express their confidence. arXiv preprint arXiv:2505.14489, 2025.
[56] Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
[57] Yuxin Zuo, Kaiyan Zhang, Shang Qu, Li Sheng, Xuekai Zhu, Biqing Qi, Youbang Sun, Ganqu Cui, Ning Ding, and Bowen Zhou. Ttrl: Test-time reinforcement learning. arXiv preprint arXiv:2504.16084, 2025.

A Hyperparameters

A full list of hyperparameters can be found in Table 5.

Table 5: Hyperparameters.
Hyperparameter                     Value
Max prompt length                  1024
Max response length                3072
Batch size                         64 (GSM8K), 500 (MATH500), 80 (AMC), 30 (AIME), 64 (Countdown), 196 (GPQA)
Policy mini batch size             32 (GSM8K), 32 (MATH500), 80 (AMC), 30 (AIME), 32 (Countdown), 32 (GPQA)
Policy micro batch size per GPU    8
Learning rate                      1×10^−6
Weight decay                       0.01
Learning rate warmup               Constant
Optimizer                          Adam
Temperature                        1.0 for train, 0.8 for validation
Top-k                              -1
Top-p                              1
Number of samples per example n    5
Remove padding                     True
Use KL loss                        True
KL loss coefficient                0.001
Clip ratio                         0.2
Grad clip                          1.0
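For reference, Table 5 can also be transcribed into a configuration fragment. The key names below are our own choice, not identifiers from the authors' training code; the values are copied from the table:

```python
# Illustrative transcription of Table 5 (key names are ours, not the paper's).
grpo_config = {
    "max_prompt_length": 1024,
    "max_response_length": 3072,
    "batch_size": {"GSM8K": 64, "MATH500": 500, "AMC": 80,
                   "AIME": 30, "Countdown": 64, "GPQA": 196},
    "policy_mini_batch_size": {"GSM8K": 32, "MATH500": 32, "AMC": 80,
                               "AIME": 30, "Countdown": 32, "GPQA": 32},
    "policy_micro_batch_size_per_gpu": 8,
    "learning_rate": 1e-6,
    "weight_decay": 0.01,
    "lr_warmup": "constant",
    "optimizer": "adam",
    "temperature": {"train": 1.0, "validation": 0.8},
    "top_k": -1,          # -1 disables top-k truncation
    "top_p": 1,           # 1 disables nucleus sampling
    "num_samples_per_example": 5,
    "remove_padding": True,
    "use_kl_loss": True,
    "kl_loss_coefficient": 0.001,
    "clip_ratio": 0.2,
    "grad_clip": 1.0,
}
```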
arXiv:2505.15516v1 [cs.LG] 21 May 2025

Journal of Machine Learning Research 1 (2025) 1-33. Submitted 4/00; Published 10/00

Explainable embeddings with Distance Explainer

Christiaan Meijer (c.meijer@esciencecenter.nl)
E. G. Patrick Bos (p.bos@esciencecenter.nl)
Netherlands eScience Center, Science Park 402 (Matrix THREE), 1098 XH Amsterdam

Editor: JMLR editor

Abstract

While eXplainable AI (XAI) has advanced significantly, few methods address interpretability in embedded vector spaces where dimensions represent complex abstractions. We introduce Distance Explainer, a novel method for generating local, post-hoc explanations of embedded spaces in machine learning models. Our approach adapts saliency-based techniques from RISE to explain the distance between two embedded data points by assigning attribution values through selective masking and distance-ranked mask filtering. We evaluate Distance Explainer on cross-modal embeddings (image-image and image-caption pairs) using established XAI metrics including Faithfulness, Sensitivity/Robustness, and Randomization. Experiments with ImageNet and CLIP models demonstrate that our method effectively identifies features contributing to similarity or dissimilarity between embedded data points while maintaining high robustness and consistency. We also explore how parameter tuning, particularly mask quantity and selection strategy, affects explanation quality. This work addresses a critical gap in XAI research and enhances transparency and trustworthiness in deep learning applications utilizing embedded spaces.

Keywords: embedded spaces, explainable AI, attribution, multi-modal, saliency maps

1 Introduction

The soaring popularity of machine learning (ML) and deep learning (DL) methods in the last decade has resulted in a high demand for understanding trained models. The field specifically aiming to solve this, eXplainable AI (XAI), has therefore been vibrant.
XAI algorithms are continuously developed for images, text, time-series and tabular data (Vilone and Longo, 2021). However, XAI methods for general “embedded spaces” are less common. By the term “embedded space” we refer to a multi-dimensional vector space into which original data can be projected or encoded. “Latent space” is a popular alternative term. Embedded spaces are used in dimensionality reduction operations that aim to preserve certain properties of the original data, for instance in applications like FaceNet (Schroff et al., 2015), Word2vec (Church, 2017), VAE (Kingma and Welling, 2019), Spec2vec (Huber et al., 2021), and Life2Vec (Savcisens et al., 2024). Multi-modal models such as CLIP (Radford et al., 2021) project multiple data modalities into shared embedded spaces.

©2025 Meijer and Bos. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v1/00-000.html.

The increasingly broad scientific application of (multi-modal) embedded space models to model complex phenomena, e.g. for studying visually grounded language learning (Chrupala, 2022) to learn about analogies with human learning, makes explainable methods especially promising and urgent for increasing the efficiency and trustworthiness of research far beyond the domain of AI research itself (see also e.g. Yang et al., 2022; Gade et al., 2019; Gevaert, 2022). The structure of embedded spaces created by methods such as deep neural nets is by nature difficult to understand, as their dimensions often represent multi-step abstractions of the original data space (Shahroudnejad, 2021; Bau et al., 2020). Nevertheless, the structure of some embedded spaces can be made more interpretable using XAI methods (Ali et al., 2023; Boselli et al.,
2024). A variety of XAI methods have been developed to interpret and explain model predictions, and these can be broadly categorized based on their approach. Much of the work done for embedded spaces specifically has been focused on the interpretability1 of the spaces or the model as a whole (Boselli et al., 2024). Methods for interpretability can be contrasted with methods for explainability of model decisions. To the best of our knowledge, no methods are currently available for explaining individual model decisions for embedded vectors. In particular, the method introduced in this paper provides a local, post-hoc explanation of aspects of the distance between two data items’ projections within embedded spaces. In general, quantifying the effect of the input’s features on the model output is called attribution (Achtibat et al., 2023). Other local, post-hoc methods like RISE (Petsiuk et al., 2018), LIME (Ribeiro et al., 2016) and GradCAM (Selvaraju et al., 2020) take a single input data instance and calculate the effect of its features on the output of the model that is being explained. Our method, however, processes a single data instance by comparing it to some other reference data instance.

In this paper we introduce a method to locally explain embedded spaces using attribution. In section 2 we describe the what and why of our method. In the rest of the paper, we evaluate its performance. To do so, we ran experiments on a set of models and data items that we list in section 3. We assess the results of the experiments on our explainer using proven quantitative measures in section 4. We also test the explainer’s usefulness from our own human point of view in the intuitive qualitative assessment of section 5. Open questions, future work and other concluding remarks are provided in sections 6 and 7.

2 Algorithm

We consider a model that encodes data instances from multiple data modalities into a single vector space.
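The quantity our explainer attributes, the distance between two items’ positions in such a space, can be sketched in a few lines. This is a minimal pure-Python version of the cosine distance adopted later in Section 2.2.1, with plain lists standing in for embedding vectors produced by an encoder:

```python
import math

def cosine_distance(x, y):
    """Cosine distance d_cos(x, y): 1 minus the cosine of the angle between x and y."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (norm_x * norm_y)

# Parallel directions give distance ~0; orthogonal directions give distance 1.
print(cosine_distance([1.0, 2.0], [2.0, 4.0]))  # ≈ 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

In practice the vectors would come from an encoder such as a classifier’s output layer or CLIP’s image and text towers.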
For example, the CLIP model encodes both image data and text data into a common semantic vector space, making it possible to reason about semantic similarity between instances from different modalities. Proximity between instances in this space can be interpreted as semantic similarity. We would like to be able to ask the model why certain instances end up closer to each other than others. This question is what our proposed algorithm is developed to address.

1. We follow the XAI nomenclature proposed by Ali et al. (2023).

2.1 Algorithm description

Our starting point was RISE (Petsiuk et al., 2018), a method that assigns saliency values to each pixel or region of the input data. It does so by randomly masking pixels in a given image and examining the effect on the model’s activation value for a user-specified class of interest. Reasons for choosing RISE include that an implementation was freely available and that it is easy to reason about and hence to extend. Also, RISE is model-agnostic; it does not need any internal information about the model, making it widely applicable. Finally, by masking randomly, the method feeds into
the model different parts that potentially hold the most meaning when they are combined, as opposed to masking isolated regions and ignoring the meaning of combinations of parts. This way, it is sensitive to a wide range of semantic contents.

Task The RISE implementation accompanying the original paper (Petsiuk et al., 2018) was tailored to the task of classification, specifically on images. In Ranguelova et al. (2024), RISE was extended to classification on tabular data, textual data and time series. In this work, we extended the method to a task that differs in two important aspects:

1. In our task, both input and output types differ from those of a classification task. The classification task has only one input data point, and the model is asked to output a certain class, or more generally a vector of activations over some class space. In the current task, however, the model gets two input data points and outputs a single distance between the two encoded inputs in the embedded space.

2. The RISE algorithm for producing the saliency map explaining a certain class equals a weighted sum over the randomly generated masks, where the weights are derived from the model’s output probability (or rather activation value) for the masked image (see Petsiuk et al., 2018, for more details, especially equation 6 therein). For our task, we do not have class probabilities. Instead, we have two points in an embedded space with an associated distance between them. As we will discuss below, converting this distance to a RISE weight was a crucial, non-trivial problem for us to solve.

The above considerations led us to the following three modifications to the original RISE method.

1. Because we consider two input items, but only one output saliency map, we define one of the input items to be a “reference” item, and the other to be the “to-be-explained” item. This has the advantage that the “reference” item can be input to the algorithm in encoded form (i.e.
already encoded into embedded space by the model). Therefore, the algorithm implicitly supports any data modality for the reference item. For the data modality of the “to-be-explained” item, however, support needs to be explicitly implemented, including a sensible masking strategy and visualization.

2. The class-activation weight is replaced by the cosine distance d_cos(e, r) between the “to-be-explained” item e and the “reference” item r. However, we do not directly use d_cos(e, r) in the same way as in RISE, i.e. in a weighted sum over all masks. See Section 2.2 for discussion on this point.

3. We introduce a distance-ranked mask filtering method. We only sum over the masks that meet our filter criterion. Before filtering, the distances are used to rank the masks. Filtering then proceeds according to one of three methods: (a) taking the masks associated with the top x% of distances, (b) the bottom x%, or (c) combining these two sets, which we call the “mirror” approach.

The reasoning behind these three methods is as follows. For masked images where the distance to the original
image is low, the masked pixels apparently did not contain much salient information. Hence, the masks associated with them should contribute (through the sum over masks) to highlighting minimally salient regions. In contrast, masks associated with high distances contribute to highlighting maximally salient regions2. Selecting from either side should give a distinction between highly and lowly salient regions. Combining both sources (the “mirror” filtering method), and assuming similar statistical properties and sizes of the two sets, an improvement of the signal-to-noise ratio can be expected in each pixel, because one side partially cancels out the noise of the other side (relative to the signal).

2. One may note that these two maps can be expected to be very similar in appearance, although inverted in values, which is why we call them “mirrors”.

After filtering, the remaining masks are summed. In contrast to the original RISE method, we sum without multiplying by any weights (see Section 2.2 for motivation). In the “mirror” method, we subtract the masks of one set from those of the other, since they are expected to be “mirrors” in terms of their explanatory content, as explained above.

2.2 Considered algorithm alternatives

2.2.1 Distance metric

We chose the cosine distance d_cos(x, y) to compare points x and y in embedded space. Another obvious metric would have been the Euclidean distance. Because we use the distances not to weigh but only to rank, the cosine distance is equivalent to the Euclidean distance (since one is proportional to the square of the other) in the special case that vectors are normalized to unit length beforehand. In the unnormalized case, however, cosine distance emphasizes angular differences between vectors while Euclidean distance emphasizes differences in vector length. The first of the embedded spaces we applied our method on, an ImageNet classifier (see section 3.1 for more details on the model), is not normalized to unit vectors, but rather is a list of activation values, sometimes interpreted as probabilities, where the elements sum to 1. Because Euclidean distance depends on vector size, and these non-unit vectors can have sizes in the range [1/√D, 1], where D is the vector dimension, Euclidean distance has the unwanted effect of becoming smaller when the model is less certain of its classification, i.e. when activation values are distributed more broadly over dimensions (classes). The second embedded space employed in this work is generated by the CLIP model, which does not have this specific property. Therefore, in this case a different distance metric may be more suitable for certain applications. Exploring this is out of the scope of this work. We emphasize that these are particularities of the models and layers chosen in this work. In general, the ideal distance metric depends on the properties of the particular embedded space.

2.2.2 Ranking or weighting

Analogous to RISE, we initially experimented with weighting masks by a factor proportional to d_cos(e, r). However, we soon found that in the high-dimensional embedded spaces of our applications, differences between distances from one
input item to the next were very small, typically below 10^-4. Weights based on these distances are very close together and hence do not distinguish the importance of one mask from another in a numerically significant way in the RISE algorithm. RISE was designed for weights in the range [0, 1]. Our first attempt to solve this issue was to use a weight proportional to a^d, where d is the distance and a is a value around 20. This blew up the small numerical differences so that they would again form distinguishing weights. However, we found the results lacking: to get visually sensible results, we had to tune the a parameter for each new instance we wanted to explain. Moreover, in many cases, too few masked images would have weights high enough to significantly affect the resulting saliency map, causing artifacts of the particular masking scheme to show up clearly in the result. This led us to our final approach, ensuring a fixed percentage of effective masks and hence avoiding masking artifacts.

2.3 Algorithm summary

We can now summarize the “Distance Explainer” algorithm. Given that we take RISE as the basis of our explanation method, and RISE works on only one data item, we run our explainer as follows:

1. The “reference” item r is passed through the model to produce its position in embedded space x_r. This position is then used as a fixed input to our explainer.

2. The following is repeated N_masks times on the “to-be-explained” item e:

   (a) e is randomly masked according to the RISE masking algorithm (given a set of configuration parameters c, see Section 5.2), producing masked input item M_i(e; c) at iteration i.

   (b) M_i(e; c) is passed through the model to produce an embedded space position x_{M_i(e;c)}.

   (c) The distance d_i = d_cos(x_{M_i(e;c)}, x_r) between x_r and x_{M_i(e;c)} is calculated.

3. The distances {d_i} for all masked input images are used to rank the masked images.

4.
A selection filter is applied to the ranked masked input images, as explained in Section 2.1.

5. After filtering, the remaining masks are summed to produce the attribution map.

Our implementation of the algorithm is available on GitHub3.

3 Experimental setup

To experiment with the explainability of distances in embedded spaces, we need models that enable us to compare the position in embedded space of one data item to that of another item. In this section we describe the models and data items we use in our experiments to assess our explainer in sections 4 and 5. A gallery of all experiment results applied on all item pairs is available on Zenodo4. The code used for the experiments is available on GitHub5.

3.1 Data item modalities and embedded space models

We have run our experiments on the following two pairs of data item modalities and associated models for transforming data items into a common embedded space:

Image vs image For this modality pair, we use ImageNet models to transform data items to a 1000-dimensional ImageNet classification vector. For each of the 1000 ImageNet classes, the
corresponding vector element gives the activation of the model on that class. While this would not typically be considered an embedded space vector, it is structurally similar and can be interpreted as a kind of semantic vector in a space that is admittedly quite arbitrary but in fact also quite well defined. This latter fact makes it especially suited for our experiments, as we can unequivocally interpret the “embedded space” vectors. In most experiments we used ResNet50 from Keras. In section 4.3 we used VGG16, because it has a smaller number of layers.

Image vs caption For this modality pair we did use an actual embedded space model, namely the ViT-B/32 (Dosovitskiy et al., 2021) CLIP model (Radford et al., 2021). It transforms both images and captions to a common 512-dimensional semantic vector space.

3. https://github.com/dianna-ai/distance_explainer
4. https://zenodo.org/records/14044386
5. https://github.com/dianna-ai/explainable_embedding/

In this work we restrict ourselves to attribution maps on images, so we only use the captions as reference items6.

3.2 Data items

For the ImageNet model, the data item pairs we used in our experiments were chosen to probe performance in four application areas of the algorithm:

Same class We explain the ImageNet model using two images of the same class (e.g. two different images of a bee). We expect the highest attribution scores for the pixels of areas that we consider key aspects for the specific class.

Multiple classes per image Classification is slightly complicated by having multiple classes in one image. We use an image containing both a dog and a car and compare it to images containing only a dog or only a car.

Close and/or related classes We compare an image of a bee to an image of a fly.
Here we expect to see the algorithm highlight the wings and general insect-like characteristics of the animals that contribute to their closeness, while the stripes that saliently distinguish a bee from a fly are given as reasons for greater distances. Similarly, we compare an image of a car to that of a bike. Both may have similar traffic surroundings, and both have wheels, but the rest of the structure is quite different.

“Opposite” items (that have nothing in common) In this category we compare for instance a flower and a car.

A similar set of data pairs was explored for the CLIP model, comparing images to captions:

Bee image - “a bee sitting on a flower” We expect the explainer to highlight both the flower and the bee.

Bee image - “a bee” Here we expect only the bee to be highlighted.

Bee image - “an image of a bee” Specifying the caption this way may improve performance for CLIP.

Bee image - “a fly” Close and related class. We expect elements of an image depicting a bee that are also present in images of flies to be highlighted, while elements that are not present in images of flies should not be highlighted.

Bee image - “a flower” Can the other main object in our bee image also be detected, or is a flower not
salient enough for CLIP?

6. To do this in other data domains one needs to define a good masking function for that modality. In the DIANNA project (Ranguelova et al., 2024), we have implemented masking for text, tables and time-series (Meijer, 2024).

Labradoodle image - “a labradoodle” Single salient object, simple caption.

Image of a dog and a car - “a car” and “a dog” Again the “multiple classes per image” case, compared to single-object captions.

Flower image - “a car” An “opposite items” example.

Car image - “a bicycle” The “close and/or related classes” example.

4 Quantitative performance results

In this section we explore quantitative performance aspects of our explainer. Quantifying the quality of explanations is a field still much in motion. Nauta et al. (2023) provide an overview of how to evaluate various aspects of XAI methods. To assess Nauta’s Correctness and Completeness, we use the Incremental Deletion algorithm (van der Spek, 2023, p. 27) and MPRT (Adebayo et al., 2020). To assess Continuity we use Average Sensitivity (Bhatt et al., 2021). Note that recent research (Hedström et al., 2024) provides robustness improvements in score calculation over the original MPRT method. However, we do not use the direct MPRT score output itself, because those scores are mainly useful for comparing multiple XAI algorithms, while we evaluate a single XAI algorithm. Instead, we qualitatively interpret the attribution maps from the intermediate steps in MPRT. The improvements in score calculation robustness of the new methods – while we recommend using them when scoring itself is the goal – do not provide any added benefit for our purposes. For both MPRT and Average Sensitivity we used software implementations in Quantus (Hedström et al., 2023), adapted to work with our specific task as opposed to classification. For the section headers, we therefore use the Quantus terminology instead of Nauta’s.
The methods we used, faithfulness, sensitivity/robustness and randomization, align with Nauta’s concepts of Correctness, Continuity and Completeness, respectively.

4.1 Faithfulness

Explainers such as ours function by highlighting salient parts of input images (or data items more generally). The concept of faithfulness evaluates whether altering these highlighted regions produces corresponding changes in the model’s output—whether that output is a classification or, in our specific case, a distance measurement. In essence, faithfulness measures the extent to which an XAI method successfully identifies truly meaningful components of the input. A faithful explanation highlights precisely those elements that genuinely influence the model’s decision-making process.

We measured the faithfulness of our explainer using the incremental deletion method. We ran it using three different deletion orders:

1. starting with pixels that contribute most to the two data items having a low distance; “LoDF” for low distance first,

2. starting with pixels that contribute most to a high distance; “HiDF”7 for high distance first,

3. random order.

We illustrate and discuss the results from two image pairs: bee versus fly in figures 1 and 2 and bee versus another bee in figures 3 and 4. In Figure 2 and Figure 4 we
see the respective “model scores” of these orders, in our case simply the resulting distance. LoDF and HiDF orders are shown in blue in separate panels, while the random order is shown in both figures in orange.

Figure 1: Incremental deletion on the bee image whose distance to a fly’s image is explained. Deleted pixels are coloured brown. From left to right, in every column a larger percentage of pixels is deleted. The top row has LoDF order. The middle row has HiDF order. The bottom row has random order.

7. We use the terms LoDF and HiDF instead of the “most/least relevant first” (MoRF/LeRF) terminology that is common in the literature on incremental deletion. We need different terms for two reasons. First, because our maps are not “relevance” maps of pixels contributing to a certain classification (the task most often studied in this context) with some positive intensity, but are a bidirectional map of pixels that contribute either to an increase or a decrease of embedding vector distance, with a certain positive or negative intensity. Second, even if one would equate that intensity with relevance, one would have to deal with the fact that high positive and high negative intensity are two “modes” of “high relevance” that cannot be combined into one MoRF measure without losing critical information. On top of this, “LeRF” does not map straightforwardly to anything in our situation.

(a) LoDF vs random order.
(b) HiDF vs random order.

Figure 2: The vertical axis shows distance under incremental deletion on the bee image whose distance to a fly image is explained. The fraction of deleted pixels is on the horizontal axis.

4.1.1 How to interpret these results?

The “bee versus fly” graph in figure 2b removes the least fly-like pixels first. These are pixels that are indicative of a bee and not of a fly. Removing them significantly changes the model’s class predictions. The model classifies the original image as a bee, so there is a large distance towards the fly classification part of the embedded space. Therefore, removing pixels in this order results in large changes in distance. This explains the large decrease (large negative values). Figure 2a shows results when removing the most fly-like pixels first. These are parts of the image that look like they could be parts of a fly and hardly influence the model’s scores towards the bee class. The reason is that the model was already predicting the image as a bee, and not a
fly, with high confidence. There is almost nothing to gain here, and therefore any change in distance is small, comparable to random pixel removal.

The second experiment compares two bee images. Two results in particular caught our attention:

1. Why is the striped part of the bee (the posterior segment) not highlighted by the XAI method? Both in the LoDF and the HiDF panels of figure 3 it seems to be of limited relevance to the distance.

2. Why is the order of magnitude of the model scores so much higher in figure 4 than in figure 2?

Figure 3: Incremental deletion on the bee image whose distance to another bee’s image is explained. See the caption of figure 1 for a description of the rows and columns.

Figure 4: Like figure 2, but with another bee image as reference item.

The former seems due to the reference bee image having a lower-contrast stripe pattern than the bee image shown here. In fact, this image is a painting of a bee, not a photograph. The artist may have exaggerated the stripe pattern compared to natural bee stripes. The second question can be answered by considering the structure of the “embedded space”, namely the 1000-dimensional ImageNet classification vector.
A pretty certain classification of a bee yields a vector that looks something like

    (class A, ..., bee, ..., class Z) = (0.00001, ..., 0.98, ..., 0.00001),    (1)

which will have a very small distance to the other bee image’s vector. The maximum-uncertainty classification vector, i.e. the vector representing equally likely classes, is

    (class A, ..., bee, ..., class Z) = (0.001, ..., 0.001, ..., 0.001).    (2)

The distance between these two vectors is about 0.979. The change in distance from ∼0 (one bee versus another bee) to 0.979 (one bee versus maximum uncertainty) is also ∼0.979. Looking at figure 4a, we indeed see the distance to the reference point increasing when removing the most bee-like parts. Starting from 15%, the distance increases almost maximally away from the bee, meaning that the image is now classified as something other than a bee (perhaps a flower, in this case). Looking at the random-order curve, not a lot happens while removing the first 55%, indicating that the image
still scores highest in the bee dimensions. In other words, it still looks like a bee. This can be confirmed by looking at the last column in figure 3. The same happens in the HiDF order. Only the last 5% changes the model’s activations. Changes occur after removing more of the image than in the random order.

4.2 Sensitivity / robustness

Robustness refers to the property that small changes in the input should result in correspondingly small changes in the output. This ensures that the explainer’s results are stable and not overly sensitive to minor perturbations in the input data. We use the Average Sensitivity metric (Bhatt et al., 2021) to measure the robustness of our explainer. The Quantus (Hedström et al., 2023) implementation we use, described in more detail in Yeh et al. (2019), has an expected value range typically between 0 and 1, sometimes above 1. This can be explained by the fact that the metric is based on a ratio of two quantities. The numerator is the difference in attribution values between two attribution arrays: one for the unperturbed input image and one for the perturbed image. The denominator is the unperturbed attribution array.

In these experiments we make the MC sampling in the RISE-based part of our algorithm (the random mask generation) deterministic; otherwise we would have two sources of randomness when measuring robustness. We set the Average Sensitivity parameters to nr_samples = 20 and perturb_std = 0.1 × 255, i.e. 10 percent of the maximum range of RGB integer images, which in each channel run from 0 to 255. We use 500 masks.

We again focus here on the results of running this metric on the same two cases as for faithfulness: 1. the bee image versus the fly reference image case and 2. the bee image versus the other bee reference image case. The Average Sensitivity results are 0.06 for the “bee versus fly” case and 0.04 for the “bee versus bee” case.
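The ratio underlying such a score can be made concrete with a small sketch. This is not the Quantus implementation: the attribution arrays below are toy stand-ins, and the Euclidean norms are a plausible choice where Quantus may differ in detail.

```python
import math

def sensitivity_ratio(attr_unperturbed, attr_perturbed):
    """Schematic sensitivity ratio: the norm of the difference between the
    two attribution arrays, divided by the norm of the unperturbed array."""
    diff = [a - b for a, b in zip(attr_unperturbed, attr_perturbed)]
    numerator = math.sqrt(sum(d * d for d in diff))
    denominator = math.sqrt(sum(a * a for a in attr_unperturbed))
    return numerator / denominator

attr = [0.5, -0.2, 0.8, 0.1]          # toy attribution map for the original input
attr_pert = [0.52, -0.19, 0.79, 0.1]  # toy map after a small input perturbation
print(sensitivity_ratio(attr, attr_pert))  # ≈ 0.025
```

A robust explainer changes its attribution map little under small input perturbations, giving a ratio near 0; the reported values of 0.06 and 0.04 are scores of this kind.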
These are low sensitivity numbers, meaning the robustness of the explainer is high. Note that a minor limitation of the Quantus implementation of Average Sensitivity is that lower and upper limits on the resulting perturbed image values cannot be set. In principle, images should not contain values below 0 or above 255, but with a normally distributed noise function such values will certainly occur occasionally, making the input data go out of distribution to some extent. We expect the effect of this to be minor.

4.3 Randomization

An XAI method should explain the model’s decisions. The result of running the XAI method should therefore be highly dependent on the model. This is tested using a randomization test, which assesses whether the explainer’s output changes as expected when model parameters are changed randomly. We tested our XAI method using the Model Parameter Randomization Test (MPRT) (Adebayo et al., 2020), which is a popular randomization test (Hedström et al., 2024). MPRT randomly perturbs the weights in each model layer by shuffling them. The MPRT implementation in Quantus offers 3
modes: incrementally shuffling layers top-down, incre- mentally bottom-up and shuffling each layer independently. After perturbing the layers, MPRT summarizes its assessment in a metric: the Spearman correlation between the origi- nal “unperturbed” attribution map (i.e. without random perturbation of model layers) and the attribution map of a model with one or more randomized layers. The progression of this value from 1 (i.e. the unperturbed map’s correlation with the first “perturbed” map with zero perturbed layers) to something less than one can be used to characterize the explainer (although see the discussion in the introduction paragraph of this section about more robust characterization). In table 1 the MPRT scores are shown for each layer, and for each of the 3 “modes”. For both top-down and bottom-up modes, correlations with the original attribution map are low after randomizing the first layer already, indicating a high dependency of the attribution map on the model. The independent mode shows high correlations for certain layers. This is puzzling given the low correlation scores of the first 2 modes. Possibly these particular layers have low variance in their weights and shuffling them only has a small effect on the model’s output. In addition to the quantitative metric, we inspected the resulting attribution maps8of the distance explainer while tested through all 3 modes. For input data we use an image of a bee, and a reference point of an image of a fly. We use nmasks=1000 . Results are shown in Figures 5, 6 and 7. 8. The Quantus implemenation does not itself return these maps. Rather, they are used as intermediate results. To be able to inspect them, we modified the code so that the maps would be stored after each MPRT run. 13 Meijer and Bos original 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Figure 5: MPRT (top-down) results: The first image shows the saliency map for the unper- turbed model. 
From left to right, the second image onwards shows saliency maps for the model of which iteratively one additional layer has its weights perturbed, starting with the final fully connected layer and ending with the weights of all layers in the model perturbed.

Figure 6: MPRT (bottom-up) results. Like figure 5, except here perturbation starts with the first layer.

Figure 7: MPRT (independent) results. Like figure 5, except here for each image only one single layer has its weights perturbed while all other layers remain unchanged, starting with the final fully connected layer and ending with the first layers of the model.

Layer   Top down   Bottom up   Independent
1         0.09       0.30        0.11
2         0.28       0.05        0.10
3        -0.10       0.22        0.11
4         0.02      -0.18        0.92
5        -0.24      -0.09        0.51
6        -0.11       0.17        0.38
7        -0.09       0.01        0.67
8        -0.04      -0.02        0.61
9        -0.11       0.02        0.32
10       -0.15       0.03        0.60
11       -0.13      -0.03        0.56
12       -0.19       0.02        0.63
13       -0.19       0.01        0.41
14       -0.21      -0.03        0.42
15       -0.27      -0.01        0.54
16       -0.22       0.02        0.18

Table 1: MPRT scores

4.3.1 How to interpret these results?

Consider these two hypothetical extremes:

1. The attribution map remains almost unchanged as a result of randomizing model layers. This happens when the explainer's result is independent of the model: it is ignoring the model. Any patterns in the result are the explainer "doing the model's work". For instance, the explainer may do edge detection, which possibly is also what the model is expected to do. This would be undesired behavior for an explainer, because the explainer should strictly explain the model's decisions without making assumptions itself about the data.

2. The attribution map changes completely after randomizing the first layer already, and keeps on changing for every additional randomized layer. This means the explainer's result is highly dependent on the model, and does not indicate any hidden and invalid assumptions about the model. This is desirable behaviour for an explainer.

Overall, figures 5, 6 and 7 show results that demonstrate a strong dependency on the model and align well with the expected behavior of the explainer. In the top-down randomization of figure 5, progression in the randomization cascade is such that the information is basically lost immediately after randomizing the first layer [9], which is desirable as it indicates that the explainer's output is highly dependent on the model's learned parameters. This dependency ensures that the explainer is not generating results independently of the model but is instead accurately reflecting the model's internal decision-making process. Note that randomization just shuffles the values within each layer; we need not worry about normalization issues.
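The layer randomization and scoring that MPRT performs can be sketched as follows. This is a minimal pure-Python illustration of the idea, not the Quantus implementation; `shuffle_layer` and `spearman` are our own hypothetical helpers, and the rank correlation here does not correct for ties:

```python
import random
from math import sqrt

def shuffle_layer(weights, rng):
    # MPRT-style randomization: permute the values within one layer,
    # preserving the layer's size (here a flat list) and value distribution.
    shuffled = list(weights)
    rng.shuffle(shuffled)
    return shuffled

def spearman(map_a, map_b):
    # Spearman rank correlation between two flattened attribution maps:
    # Pearson correlation computed on the ranks (no tie correction).
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    ra, rb = ranks(map_a), ranks(map_b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var_a = sum((x - ma) ** 2 for x in ra)
    var_b = sum((y - mb) ** 2 for y in rb)
    return cov / sqrt(var_a * var_b)

rng = random.Random(42)
layer = [0.3, -1.2, 0.8, 0.05]
randomized = shuffle_layer(layer, rng)
assert sorted(randomized) == sorted(layer)  # same values, new positions

# An attribution map correlates perfectly with itself, and perfectly
# negatively with its sign-flipped version:
attr = [0.1, 0.4, -0.2, 0.9]
assert spearman(attr, attr) == 1.0
assert spearman(attr, [-v for v in attr]) == -1.0
```

In an actual MPRT run, the shuffled weights would be written back into the model, the attribution map recomputed, and the correlation with the unperturbed map reported per layer, as in table 1.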
The first independent randomized image in figure 7 should be the same (or at least similar) as the first bottom-up randomized image in figure 6. Indeed, we see approximately the same effect. The last three images in the independent run randomize the three fully connected layers of the VGG16 model. There we expect all remaining information to be destroyed, and that is indeed what we see in these results. This is confirmed by the similar behaviour in the first randomized image of the top-down order. In the independent-order images, we do not fully understand why there are still so many images where the pattern looks almost the same as the original unperturbed one. How can it be that so much modelling information is preserved when an entire layer is completely randomized? Especially layers 5 through 13 still look very much like the unperturbed result. Another open question is why the first bottom-up and independent images are not exactly the same given equal random seeds. We leave these as open questions for future research, because they are not the main focus of this work.

5 Qualitative assessment

In addition to the quantitative performance assessment done in Section 4, in this section we inspect results of experiments visually and describe our findings qualitatively.
As well as providing our opinionated guide on how to interpret the attribution maps, this will cover aspects of performance that are overly complex to quantify. For instance, we will show in section 5.2.1 that Nauta's Consistency (Nauta et al., 2023) can be achieved. Contrastivity is tested by assessing the output of "Opposite" data items listed in section 3.2. Finally, Coherence is implicitly a main topic for almost all of this section.

5.1 Resulting attribution maps using default parameters

For those interested only in what in our opinion are the best possible results, we first present the attribution maps on all data item pairs defined in section 3.2, using the explainer with what we determined to be its default settings. These default settings are motivated in the following sections 5.2 and 5.3.

In figure 8 the image-versus-image model item pair attribution maps are displayed. The "bee versus fly" image highlights that the wings especially bring the bee's image closer to that of a fly, while the stripes on the bee's back drive it away from the fly. For the labradoodle, it seems like its eyes and its collar are especially labradoodle-like, which surprised us; a human would perhaps also identify its fur as distinctive.

9. Most "disruption" in the attribution map, caused by randomization of model layers, happens at the beginning, which indicates that the explainer performs well. Progressive changes are then weaker, as expected when more layers are randomized.

Figure 8: Attribution maps using default explainer parameter settings on the image-versus-image item pairs (panels: bee_vs_fly, labradoodles, dogcar_vs_car, dogcar_vs_dog, flower_vs_car, car_vs_bike).
In the top right image, we compare the image containing both a dog and a car to a reference image of a car, and indeed the car is highlighted as the distance-decreasing area, while the dog increases distance. As expected, the situation is reversed in the bottom left image, in which the reference image contains another dog. In the bottom middle image, a flower is compared to a reference image of a car. While the algorithm detects that the flower moves the image away from the car region of embedded space, it does not find any regions that bring it closer to the car, which matches our expectation. Finally, the bottom right shows an image of a car compared to a reference image of a bicycle. Like the previous case, the car is highlighted as different from the reference, but the wheels are excluded from the strongly differing area. Indeed, bicycles have wheels in common with cars. The model does seem to distinguish somewhat between bicycle and car wheels, since the wheels are not indicated as a strong commonality. A human would perhaps conceptually group them together (as indeed the human authors of this work just did), but in terms of image classification the correspondence is perhaps weaker.

Figure 9 shows the attribution maps for the image-versus-caption model item pairs. The
first impression is that these attributions seem slightly less sharp in their delineations of the salient areas of interest. This might be an artifact of the model, or of the fact that the parameters used in both cases may be less ideal for this model than for the image-versus-image cases. Nevertheless, the performance of the explainer on this model, too, is convincing.

5.2 Parameter exploration

While we strove to minimize the number of free parameters of the algorithm, we could not eliminate them all. For the remaining parameters, we found default values that give decent results for the experiments in this paper, as shown in the galleries in the previous section. However, in different situations (e.g. a different model, data sizes, data modalities, or even different objects of interest within the data items), the optimal parameters must be tuned. In this section, we explore the effects of non-default parameter choices for our own model and data choices. This can serve as a starting point for optimal tuning of our algorithm in experiments beyond those in this work.

5.2.1 Number of masks

The higher the number of masks used, the more stable the attribution map becomes, as the algorithm averages over more samples. The statistics of this are shown in table 2. With an increasing number of masks, the difference decreases between maps produced with a different set of random masks. The true attribution pattern reveals itself through a decrease in the mean pixel standard deviation. A visual inspection of figure 10, where we show attribution maps obtained by running the explainer using different initial random-generator seeds, yields the same conclusion. The patterns become increasingly similar with increasing number of masks.
Figure 9: Attribution maps using default explainer parameter settings on the image-versus-caption item pairs (panels: bee image wrt "a bee sitting on a flower", "a bee", "an image of a bee", "a fly", "a flower"; labradoodles; dogcar_vs_car; dogcar_vs_dog; flower_vs_car; car_vs_bicycle).

                    Mean STD per pixel
Number of masks   bee vs fly   flower vs car   dogcar vs car
100                 0.139          0.140           0.140
500                 0.064          0.062           0.062
2000                0.031          0.031           0.032

Table 2: Statistics showing how an increase in the number of masks decreases the standard deviation between maps produced with a different initial random seed. The three columns show that this happens regardless of the exact data pairs.

Figure 10: Convergence of the attribution map with increasing number of masks (rows: 100, 500, 2000 masks) shown for three different random seeds (columns) for the case of
comparing the bee image to the fly image.

As noted before, the numbers in table 2 do not generalize to all use-cases of Distance Explainer. Images of different resolutions, particular embedding models, etcetera, will require fine-tuning of the number of masks. Changing other Distance Explainer parameters may also necessitate re-tuning the number of masks, as we discuss in section 5.2.3. For the purpose of this work, unless otherwise noted, we use 1000 masks. We confirmed that in these cases 1000 masks provides a decent trade-off between stability/accuracy, computational cost and experimental complexity. [10]

5.2.2 Mask coverage percentage

The proportion of pixels to keep unmasked in each random mask, p_keep, plays a subtle role. Which value of p_keep results in the most informative output depends on the specific data item and the particular salient features. We found two cases that illustrate this.

Figure 11: From left to right, p_keep is increased, showing values 0.1, 0.3, 0.5, 0.7, 0.9. (a) Image of a bee versus caption "a bee". (b) Image of a car versus caption "a bicycle".

The first case is again that of the image of the bee, using the image-versus-caption model to determine its distance to various captions like "a bee", "a flower", "a fly" or the more explicit "an image of a bee". The explainer in these cases works best at values around the center of the p_keep range, i.e. roughly between 0.4 and 0.6, and decreases in performance

10. We should note that there were other cases – not shown in this work – where we only saw stabilization around 5000 masks. For instance, one case was of an image containing both two partially obscured cars and one fully shown dog compared to the caption "a dog".
In this case, while the large-scale features (the dog and the cars) stabilized at 1000 masks, a negative highlight (i.e. a "distance increasing" attribution area) only started stabilizing on one of the wheels at the higher number of masks. If users are interested in certain details, they should hence explore the optimal number of masks for their use-case. Another test case we explored was to compare an image of a flower to the caption query "a car", i.e. to compare two completely orthogonal embedded vectors. In this case, again, the attribution map only stabilized at 5000 masks, although at fewer masks it seemed more or less usable. There was more noise around the flower at 1000 and 2000 masks. Whether this is relevant will differ per use-case.

the further one sets p_keep towards 0 or 1. A decrease in performance in these cases mainly means noisy spots appearing, i.e. patches of high attribution values that do not reflect any salient parts of the image. The average amplitude of the attribution map also tends to decrease towards p_keep of 0 or 1, which also increases the relative amplitude of possibly already lingering noisy patches, thus increasing the appearance of noise. This is illustrated in figure 11a.

The second case is that of an image of a car compared to the caption "a bicycle", as shown in figure 11b. In this case, we found that there are two interesting p_keep ranges that explain different parts of the image. At values between 0.01 and 0.1 the car's wheels are highlighted. At these low masking values, this is indeed something you could expect; if for some masks only the wheels are left in the image, the model might think it's (part of) a bicycle. Mostly the bottoms of the wheels are highlighted, so the model seems to interpret the car fenders as evidence against this being an image of a bicycle, which also makes sense. At 0.3 this tendency has faded away completely. Values between 0.2 and 0.9 highlight the car body, with other parts of the images remaining quite spotty. It is hard in this case to say what the optimal value of p_keep is: two different patterns emerge in two different ranges. Only 0.2 seems to cover both patterns, but sub-optimally. The range of 0.05-0.1 seems optimal for the wheels, while 0.2-0.7 is fine for the car body (with again a preference towards the central values around 0.5, where the average amplitude is higher and the noise lower).

These cases illustrate that the particulars of the data items play an important role in the performance of the explainer under different parameter settings. We recommend users of the algorithm do a sweep of this parameter. Possibly, a multi-parameter combination of attribution maps could be made, but we leave this as a topic of future study.

5.2.3 Mask feature resolution

In figure 12 we illustrate the effect of changing the number of random "superpixel" areas to split the random masking into. The numbers below each panel indicate the feature resolution: the number of features along both image axes. We see that resolutions of 2 and 4 are too coarse for this image.
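For concreteness, the interplay between the two masking parameters just discussed, p_keep and the feature resolution, can be sketched as follows. This is a simplified illustration of RISE-style mask generation; `make_mask` is our own hypothetical helper, and we use nearest-neighbour upsampling where RISE itself uses smooth bilinear upsampling with a random shift:

```python
import random

def make_mask(image_size, feature_res, p_keep, rng):
    # A feature_res x feature_res grid of cells is kept (value 1) with
    # probability p_keep and masked (value 0) otherwise, then upsampled
    # to an image_size x image_size binary mask by nearest neighbour.
    grid = [[1 if rng.random() < p_keep else 0 for _ in range(feature_res)]
            for _ in range(feature_res)]
    cell = image_size / feature_res  # pixels per superpixel cell
    return [[grid[int(y / cell)][int(x / cell)]
             for x in range(image_size)]
            for y in range(image_size)]

rng = random.Random(0)
mask = make_mask(image_size=224, feature_res=8, p_keep=0.5, rng=rng)
assert len(mask) == 224 and all(v in (0, 1) for row in mask for v in row)
```

A coarser feature_res gives blockier masks (and attributions), while a finer one resolves more detail at the cost of needing more masks to average out the extra noise.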
Everything above 32 is also useless, but for a different reason: it becomes too noisy. This can be compensated by using more masks. It makes sense that when trying to differentiate one class from a similar other class, details of the input need to be taken into account, and therefore a higher feature resolution is necessary. At values of 8 to 16 we see that the salient parts of the bee, like its wings, are more precisely delineated. So, for this parameter too, the ideal number of features depends on the purpose of the explanation (which determines the scale) and on a cost-benefit analysis, because finer resolution requires more masks.

We ran a follow-up experiment with more masks to confirm that the noise would disappear as we expected. Indeed, the noise disappears as we increase the number of masks used, as illustrated in figure 13. We do see that for image-versus-caption cases, even more masks are needed to reach the same level of noise reduction
as for the image-versus-image case. This matches our earlier observation of figure 9 in section 5.1.

Figure 12: The effect of using different mask feature resolutions (2, 4, 6, 8, 12, 16, 24, 32, 64, 128). The number of features along both image axes is shown below each panel.

Figure 13: The effect of increasing the number of masks for different mask feature resolutions (resolution 16 with 2000, 4000 and 8000 masks; resolution 32 with 2000, 8000 and 32000 masks). Each row represents a different feature resolution, with the number of masks used shown below each panel.

5.3 Mask selection

As explained in section 2, we replaced the RISE algorithm's mask weighting with a mask selection step. There are multiple ways to select the masks, which we explore in this section.

5.3.1 One-sided vs two-sided

We first explore the effect of using one-sided versus two-sided mask selection. In the one-sided case, we only select the top n-percent masks, i.e. those that decrease the distance to the reference point. In the two-sided case, we select both the top n-percent masks and the bottom n-percent masks, i.e. those that increase the distance to the reference point. We multiply the bottom n-percent masked attribution maps by -1 before adding them to the rest. We ran the explainer on the Resnet50 model and the "bee versus fly" case using the top n-percent masks, and compared the attribution maps with the same run using the bottom n-percent masks; the results are shown in figure 14. We conclude that both cases yield an almost indistinguishable pattern, and both measure a similar signal.
This led us to conclude that the two-sided approach is the better one, as it allows us to use all masks and therefore more data, averaging out noise in the final attribution map. Looking at the columns of figure 14, we see that across the range of mask selection percentages, from 10 to 50%, the signal remains more or less stable. This suggests that either most of the signal amplitude is contained in the first 10% of the masks, or all mask ranges contain similar information but also similar noise. We look at this in more detail in section 5.3.3.

Figure 14: Attribution maps produced using one-sided mask selection (10%, 20%, 30%, 40% and 50% of masks selected), explaining the Resnet50 model on the "bee versus fly" case. Top row: only selecting from the top distance-decreasing ("closing") masks. Bottom row: only selecting from the top distance-increasing ("distancing") masks (multiplied by -1 for easier
comparison). From left to right, the percentage of selected masks is increased.

5.3.2 Threshold value

Having chosen the two-sided "mirror" approach, we next studied the effect in this mode of the selection range of distance rank-ordered masks. Figure 15 shows how an increase in the selection thresholds (i.e. increasing inclusion of data) does not produce any meaningful changes in the attribution map patterns beyond a threshold of 10%. Neither do the highlighted regions change significantly, nor does the quality of the maps (i.e. there is no major change in noise patterns). One could expect some discreteness effects with less data, but we see none for the thresholds we use. Concluding from this, we set our default threshold value to 10%.

Figure 15: Attribution maps produced using two-sided "mirror" mask selection (5%, 10%, 15%, 20% and 40% of masks selected on each side), explaining the Resnet50 model on the "bee versus fly" case. From left to right, the percentage of selected masks is increased.

5.3.3 Non-selected masks

Finally, for completeness' sake, we ask what information is in the masks that were discarded by our selection filter(s). In figure 16, we compare an attribution map of the two-sided 10% setting we motivated in the above two sections to four maps generated by combining slices of the discarded percentage ranges of the masks.

Figure 16: Image vs image: bee versus fly. The panels show attribution maps from two-sided masking threshold range slices indicated below the panels (discarded: 50-40% (both sides), 40-30%, 30-20%, 20-10%; included: top 10%). The left four panels show results from masks that are excluded from the usual attribution maps. The right panel shows, for comparison, a default-settings attribution map (i.e. the result from masks that are included).
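The two-sided "mirror" selection described above can be sketched as follows. This is a pure-Python illustration over flattened 0/1 masks; `two_sided_attribution` is our own hypothetical helper, and the normalization is a simplification of the actual algorithm:

```python
def two_sided_attribution(masks, distances, fraction=0.1):
    # masks:     flat 0/1 masks, one per masked input
    # distances: embedded distance to the reference point per masked input
    # The fraction of masks that most decrease the distance is added
    # positively, the fraction that most increase it is added with the
    # sign flipped, and the result is averaged per pixel.
    order = sorted(range(len(masks)), key=lambda i: distances[i])
    n_sel = max(1, int(len(masks) * fraction))
    top = order[:n_sel]      # smallest distances: "closing" masks
    bottom = order[-n_sel:]  # largest distances: "distancing" masks
    n_pixels = len(masks[0])
    attribution = [0.0] * n_pixels
    for i in top:
        for p in range(n_pixels):
            attribution[p] += masks[i][p]
    for i in bottom:
        for p in range(n_pixels):
            attribution[p] -= masks[i][p]
    return [a / (2 * n_sel) for a in attribution]

# Toy example: pixel 0 is kept in the "closing" mask, pixel 1 in the
# "distancing" mask, so they get opposite attribution signs.
masks = [[1, 0], [0, 1]]
distances = [0.2, 0.9]
assert two_sided_attribution(masks, distances, fraction=0.5) == [0.5, -0.5]
```

In the one-sided variant, only the `top` (or only the `bottom`, sign-flipped) sum would be kept, which is the comparison shown in figure 14.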
The first map, composed of the masks in the center of the ranking order, barely contains any signal at all. In the other three discarded maps one can start discerning the expected patterns, but they still contain more noise and therefore result in a less clear map. The noise decreases towards the higher end of the ranking order. We conclude that masks closer to the top and bottom sections have a higher signal-to-noise ratio.

6 Discussion

The proposed method provides a novel approach to explaining distances in embedded spaces, building on the principles of saliency-based attribution. Our experiments demonstrate that the method effectively identifies features contributing to the similarity or dissimilarity between embedded data points. While the primary focus in this paper has been on image-based embeddings, the methodology is flexible and can easily be extended to other modalities, such as text and multi-modal embeddings. We demonstrated one such multi-modal application in this paper: we performed all our experiments on the image-versus-caption model as well and found comparable end-results to those in the image-versus-image experiments.

One of the key challenges we encountered is parameter tuning, particularly in selecting
the appropriate number of masks and filtering criteria. The method's reliance on random masking introduces a trade-off between computational cost and stability of results. Additionally, we expect that different embedding models may require tailored distance metrics to better capture the underlying structure of their embedded spaces. Moreover, alternative perturbation strategies could be explored to reduce the risk of generating out-of-distribution inputs that might affect model behavior. In particular, the imputation method of RISE is simplistic and results in possible out-of-distribution inputs for the model. In this work, we left this imputation method unchanged and used it for our own algorithm.

Another consideration is the interpretability of the resulting attribution maps. While they provide meaningful insights for AI researchers and developers, their utility for non-expert users remains an open question. Future work could explore user studies to assess the comprehensibility of explanations across different audiences. In particular, we are interested in how academic researchers can leverage this tool to improve their AI-enhanced research.

7 Conclusion

We introduced a method to explain distances in embedded spaces using a saliency-based approach adapted from RISE. By analyzing the impact of input perturbations on similarity metrics, our method generates local explanations that highlight which features contribute most to embedding proximity or separation.

Our experimental results demonstrate the efficacy of this approach across a small number of different models and data modalities, particularly in image-based and multi-modal embeddings. Quantitative evaluations confirm that our method maintains robustness, consistency, and dependency on model parameters, aligning with established XAI evaluation criteria.
Future work could focus on refining the method for text and other non-visual embeddings, exploring alternative distance metrics, and optimizing and possibly automating parameter selection. Additionally, studies on the human interpretability of these explanations could provide insights into their usability in real-world applications.

Acknowledgements

We want to thank Willem van der Spek for fruitful discussions and feedback on our quantitative evaluation and for providing an implementation for Incremental Deletion. We also thank Jisk Attema and Elena Ranguelova for helpful discussions about the algorithm and the experimental setup.

Research software usage

Software used in our Distance Explainer algorithm includes: DIANNA (Ranguelova et al., 2024), NumPy (Harris et al., 2020), scikit-learn (Pedregosa et al., 2011), tqdm, pyyaml and dataclass wizard. For the analysis in this paper, we additionally used Matplotlib (Hunter, 2007), Quantus (Hedström et al., 2023), Keras (Chollet et al., 2015), PyTorch (Ansel et al., 2024), torchtext, CLIP (Radford et al., 2021), gitpython and Pillow.

Generative AI assistance

We used ChatGPT 4o on 21 Jan 2025 for refining the Discussion and Conclusion texts, used NotebookLM on 28 Jan 2025 to refine our introduction text, again ChatGPT 4o through Copilot on 11, 24 and 25 April 2025 for refining sections 4 and 5, and Claude 3.7 Sonnet and Mistral on 6 May 2025 to refine section 4. The abstract was written using Claude 3.7 Sonnet and NotebookLM on 20 May 2025. All AI output has been verified for correctness, accuracy and completeness, adapted where needed, and approved by the
authors.

References

Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, and Sebastian Lapuschkin. From attribution maps to human-understandable explanations through concept relevance propagation. Nature Machine Intelligence, 5(9):1006–1019, 9 2023. ISSN 2522-5839. doi: 10.1038/s42256-023-00711-8. URL https://doi.org/10.1038/s42256-023-00711-8.

Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps, 2020. URL https://arxiv.org/abs/1810.03292.

Sajid Ali, Tamer Abuhmed, Shaker El-Sappagh, Khan Muhammad, Jose M. Alonso-Moral, Roberto Confalonieri, Riccardo Guidotti, Javier Del Ser, Natalia Díaz-Rodríguez, and Francisco Herrera. Explainable artificial intelligence (xai): What we know and what is left to attain trustworthy artificial intelligence. Inf. Fusion, 99(C), November 2023. ISSN 1566-2535. doi: 10.1016/j.inffus.2023.101805. URL https://doi.org/10.1016/j.inffus.2023.101805.

Jason Ansel, Edward Yang, Horace He, Natalia Gimelshein, Animesh Jain, Michael Voznesensky, Bin Bao, Peter Bell, David Berard, Evgeni Burovski, Geeta Chauhan, Anjali Chourdia, Will Constable, Alban Desmaison, Zachary DeVito, Elias Ellison, Will Feng, Jiong Gong, Michael Gschwind, Brian Hirsh, Sherlock Huang, Kshiteej Kalambarkar, Laurent Kirsch, Michael Lazos, Mario Lezcano, Yanbo Liang, Jason Liang, Yinghai Lu, CK Luk, Bert Maher, Yunjie Pan, Christian Puhrsch, Matthias Reso, Mark Saroufim, Marcos Yukio Siraichi, Helen Suk, Michael Suo, Phil Tillet, Eikan Wang, Xiaodong Wang, William Wen, Shunting Zhang, Xu Zhao, Keren Zhou, Richard Zou, Ajit Mathews, Gregory Chanan, Peng Wu, and Soumith Chintala. PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation. In 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (ASPLOS '24).
ACM, April 2024. doi: 10.1145/3620665.3640366. URL https://pytorch.org/assets/pytorch2-2.pdf.

David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba. Understanding the role of individual units in a deep neural network. Proceedings of the National Academy of Sciences, 117(48):30071–30078, 2020. doi: 10.1073/pnas.1907375117. URL https://www.pnas.org/doi/abs/10.1073/pnas.1907375117.

Umang Bhatt, Adrian Weller, and José M. F. Moura. Evaluating and aggregating feature-based model explanations. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI'20, 2021. ISBN 9780999241165.

Roberto Boselli, Simone D'Amico, and Navid Nobani. Explainable ai for word embeddings: A survey. Cognitive Computation, 17(1):19, 12 2024. ISSN 1866-9964. doi: 10.1007/s12559-024-10373-2. URL https://doi.org/10.1007/s12559-024-10373-2.

François Chollet et al. Keras. https://keras.io, 2015.

Grzegorz Chrupala. Visually grounded models of spoken language - a survey of datasets, architectures and evaluation techniques. Journal of Artificial Intelligence Research, 73:673–707, February 2022. doi: 10.1613/jair.1.12967.

Kenneth Ward Church. Word2vec. Natural Language Engineering, 23(1):155–162, 2017. doi: 10.1017/S1351324916000334.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. URL https://arxiv.org/abs/2010.11929.

Krishna Gade, Sahin Cem Geyik, Krishnaram Kenthapadi, Varun Mithal, and Ankur Taly. Explainable ai in industry. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19, page 3203–3204, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450362016. doi: 10.1145/3292500.3332281. URL https://doi.org/10.1145/3292500.3332281.

Caroline M. Gevaert. Explainable ai for earth observation: A review including societal and regulatory perspectives. International Journal of Applied Earth Observation and Geoinformation, 112:102869, 2022. ISSN 1569-8432. doi: https://doi.org/10.1016/j.jag.2022.102869. URL https://www.sciencedirect.com/science/article/pii/S1569843222000711.

Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. Nature, 585(7825):357–362, September 2020. doi: 10.1038/s41586-020-2649-2. URL https://doi.org/10.1038/s41586-020-2649-2.

Anna Hedström, Leander Weber, Daniel Krakowczyk, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, and Marina M.-C. Höhne. Quantus: An explainable ai toolkit for responsible evaluation of neural network explanations and beyond. Journal of Machine Learning Research, 24(34):1–11, 2023. URL http://jmlr.org/papers/v24/22-0142.html.

Anna Hedström, Leander Weber, Sebastian Lapuschkin, and Marina MC Höhne.
Sanity checks revisited: An exploration to repair the model parameter randomisation test, 2024. URL https://arxiv.org/abs/2401.06465 . Florian Huber, Lars Ridder, Stefan Verhoeven, Jurriaan H. Spaaks, Faruk Diblen, Simon Rogers, and Justin J. J. van der Hooft. Spec2vec: Improved mass spectral similarity scoring through learning of structural relationships. PLOS Computational Biology , 17 (2):1–18, 02 2021. doi: 10.1371/journal.pcbi.1008724. URL https://doi.org/10.1371/ journal.pcbi.1008724 . J. D. Hunter. Matplotlib: A 2d graphics environment. Computing in Science & Engineering , 9(3):90–95, 2007. doi: 10.1109/MCSE.2007.55. Diederik P. Kingma and Max Welling. An introduction to variational autoencoders. Foun- dations and Trends ®in Machine Learning , 12(4):307–392, 2019. ISSN 1935-8245. doi: 10.1561/2200000056. URL http://dx.doi.org/10.1561/2200000056 . 31 Meijer and Bos Christiaan Meijer. Masking time-series for explainable ai. https://blog.esciencecenter. nl/masking-time-series-for-explainable-ai-90247ac252b4 , 2024. Accessed: 18 October 2024. Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, J¨ org Schl¨ otterer, Maurice van Keulen, and Christin Seifert. From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable ai. ACM Comput. Surv. , 55(13s), 7 2023. ISSN 0360-0300. doi: 10.1145/3583558. URL https://doi.org/10.1145/3583558 . F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blon- del, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research , 12:2825–2830, 2011. Vitali Petsiuk, Abir Das, and Kate Saenko. Rise: Randomized input sampling for explana- tion of black-box models, 2018. URL https://arxiv.org/abs/1806.07421 . Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini
https://arxiv.org/abs/2505.15516v1
Agar- wal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021. URL https://arxiv.org/abs/2103.00020 . Elena Ranguelova, Patrick Bos, Yang Liu, Christiaan Meijer, Fakhereh (Sarah) Alidoost, Leon Oostrum, Giulia Crocioni, Aron Jansen, Laura Ootes, Pranav Chandramouli, Stef Smeets, and Willem van der Spek. dianna, October 2024. URL https://doi.org/10. 5281/zenodo.14337052 . Marco Ribeiro, Sameer Singh, and Carlos Guestrin. “why should I trust you?”: Explaining the predictions of any classifier. In John DeNero, Mark Finlayson, and Sravana Reddy, editors, Proceedings of the 2016 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Demonstrations , pages 97–101, San Diego, Califor- nia, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-3020. URL https://aclanthology.org/N16-3020 . Germans Savcisens, Tina Eliassi-Rad, Lars Kai Hansen, Laust Hvas Mortensen, Lau Lille- holt, Anna Rogers, Ingo Zettler, and Sune Lehmann. Using sequences of life-events to pre- dict human lives. Nature Computational Science , 4(1):43–56, 1 2024. ISSN 2662-8457. doi: 10.1038/s43588-023-00573-5. URL https://doi.org/10.1038/s43588-023-00573-5 . Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , pages 815–823. IEEE, June 2015. doi: 10.1109/cvpr.2015. 7298682. URL http://dx.doi.org/10.1109/CVPR.2015.7298682 . 32 Explainable embeddings with Distance Explainer Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep net- works via gradient-based localization. Int. J. Comput. Vision , 128(2):336–359, 2 2020. ISSN 0920-5691. doi: 10.1007/s11263-019-01228-7. 
URL https://doi.org/10.1007/ s11263-019-01228-7 . Atefeh Shahroudnejad. A survey on understanding, visualizations, and explanation of deep neural networks, 2021. URL https://arxiv.org/abs/2102.01792 . Willem van der Spek. Explaining the explainer. Master’s thesis, University of Amsterdam (UvA), Amsterdam, Netherlands, 9 2023. Available at https://staff.fnwi.uva.nl/a. s.z.belloum/MSctheses/MScthesis_Willem_van_der_Spec.pdf . Giulia Vilone and Luca Longo. Classification of explainable artificial intelligence methods through their output formats. Machine Learning and Knowledge Extraction , 3(3):615– 661, 2021. ISSN 2504-4990. doi: 10.3390/make3030032. URL https://www.mdpi.com/ 2504-4990/3/3/32 . Guang Yang, Qinghao Ye, and Jun Xia. Unbox the black-box for the medical explainable ai via multi-modal and multi-centre data fusion: A mini-review, two showcases and be- yond. Information Fusion , 77:29–52, 2022. ISSN 1566-2535. doi: https://doi.org/10. 1016/j.inffus.2021.07.016. URL https://www.sciencedirect.com/science/article/ pii/S1566253521001597 . Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, and Pradeep Raviku- mar. On the (in)fidelity and sensitivity of explanations. In Proceedings of the 33rd In- ternational Conference on Neural Information Processing Systems , Red Hook, NY, USA, 2019. Curran Associates Inc. 33
https://arxiv.org/abs/2505.15516v1
arXiv:2505.15517v1 [cs.RO] 21 May 2025

Robo2VLM: Visual Question Answering from Large-Scale In-the-Wild Robot Manipulation Datasets

Kaiyuan Chen1,∗ Shuangyu Xie1,∗ Zehan Ma1 Ken Goldberg1
1University of California, Berkeley ∗Equal contribution
{kych, syxie, zehanma, goldberg}@berkeley.edu
https://huggingface.co/datasets/keplerccc/Robo2VLM-1

Abstract

Vision-Language Models (VLMs) acquire real-world knowledge and general reasoning ability through Internet-scale image-text corpora. They can augment robotic systems with scene understanding and task planning, and assist visuomotor policies that are trained on robot trajectory data. We explore the reverse paradigm — using rich, real, multi-modal robot trajectory data to enhance and evaluate VLMs. In this paper, we present Robo2VLM, a Visual Question Answering (VQA) dataset generation framework for VLMs. Given a human tele-operated robot trajectory, Robo2VLM derives ground-truth from non-visual and non-descriptive sensory modalities, such as end-effector pose, gripper aperture, and force sensing. Based on these modalities, it segments the robot trajectory into a sequence of manipulation phases. At each phase, Robo2VLM uses scene and interaction understanding to identify 3D properties of the robot, task goal, and the target object. The properties are used to generate representative VQA queries – images with textual multiple-choice questions – based on spatial, goal-conditioned, and interaction reasoning question templates. We curate Robo2VLM-1, a large-scale in-the-wild dataset with 684,710 questions covering 463 distinct scenes and 3,396 robotic manipulation tasks from 176k real robot trajectories. Results suggest that Robo2VLM-1 can benchmark and improve VLM capabilities in spatial and interaction reasoning.

1 Introduction

Emerging Vision-Language Models (VLMs) [1, 2, 3, 4, 5, 6, 7] can perform high-level reasoning and scene interpretation [8, 9].
Recent robotic manipulation systems that integrate VLMs demonstrate enhanced capabilities in semantic and long-horizon task reasoning [10, 11, 12]. Yet, a key challenge persists: the image-text corpora used for VLM pre-training lack high-quality, fine-grained spatial information, which is a prerequisite for robots to identify long-tail objects, understand complex scenes, reason about spatial relationships, and plan physical interactions. To address this challenge, some research [13, 14, 15] relies on data generation through simulation [16, 17, 18]. However, such data has inherent limitations due to the sim-to-real gap, because simulators cannot accurately model visual properties such as noise, clutter, and lighting variations, and physical properties such as contact dynamics and interactions. Therefore, strong performance in simulation often fails to translate reliably to the physical world. Meanwhile, deriving spatial knowledge from real-world ("in-the-wild") data typically requires extensive and costly human labeling [19, 20]. In contrast, teleoperated robot trajectories that are used to train visuomotor policies [21], such as Vision-Language-Action (VLA) [10, 22] or diffusion policies [23], typically include precise, structured proprioceptive and kinematic information—joint angles, end-effector poses, gripper states, and
force–torque readings—that implicitly encode 3D spatial information. We hypothesize that visual and textual data extracted from robot trajectories can improve VLM's spatial reasoning capabilities.

Figure 1: Robo2VLM-1 dataset overview. The middle colorbar traces a typical manipulation episode—from pre-grasp through immobilization, contact, detach, and into post-grasp. Surrounding panels give example questions for each VQA category. Dashed arrows connect every category to the phase(s) in which its questions are sampled. Icons beneath each panel list the key sensing modalities (RGB, stereo depth, wrist/side cameras, gripper state, end-effector pose, language instructions) needed to derive ground-truth answers.

We present Robo2VLM, a multiple-choice Visual Question Answering (VQA) dataset generation framework for VLMs from real-world robot data. Given a human-teleoperated robot trajectory, Robo2VLM segments the trajectory into distinct manipulation phases, selects representative frames from each phase, and generates questions whose answers are supported by synchronized proprioceptive and kinematic ground truth. We apply Robo2VLM to 176k diverse, real-world trajectories from the Open X-Embodiment (OXE) dataset [24], producing over 3 million VQA samples.
Inspired by data optimization paradigms such as domain reweighting in natural language processing [25] and robot policy learning [26], we curate Robo2VLM-1, a large-scale, in-the-wild VQA dataset with 684,710 questions covering 463 distinct scenes, 3,396 robotic manipulation tasks, and 149 manipulation skills. We evaluate 14 model configurations with state-of-the-art open-source models (LLaVA, Llama, and Qwen) across different parameter sizes and prompting techniques. The results indicate that some VLMs can achieve near-human performance on questions related to object reachability and interaction understanding. Evaluation also suggests a significant gap to human performance, especially in complex reasoning about fine-grained spatial relationships and interactions. Fine-tuning LLaVA [4] with Robo2VLM-1 improves most of the spatial and interaction capabilities with increasing training dataset size, with a maximum 50% accuracy gain in state reasoning and task understanding. This paper makes the following contributions: (1) Robo2VLM, a VQA data generation framework from real robot trajectories. (2) Robo2VLM-1, an open VQA dataset with 684,710 questions covering diverse and realistic evaluation scenarios for manipulation. (3) Extensive evaluation data on state-of-the-art and fine-tuned VLMs.

2 Related Work

Large-Scale Robotics Datasets Recent large-scale robotics datasets, such as Open X-Embodiment [24] and DROID [27], provide extensive teleoperated demonstrations of complex manipulation skills. These datasets are foundational for training modern generalist robot policies—including Octo [22], RT-1 [28], RT-2 [29], OpenVLA [10], Gemini Robotics [11], π0 [30], and Hi Robot [12]—enabling them to learn diverse skills and understand nuanced physical interactions from broad data. Crucially for grounding
VLMs, robotics datasets from Open X-Embodiment contain rich sensory modalities—including RGB video, proprioception [31, 28, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44], depth data [31, 28, 32, 33, 35], and force-torque [37, 39, 40, 41]—that reflect the dynamics of interaction. This information presents an opportunity to bridge robotics data with VLMs.

VQA Benchmarks for Robotics and Embodied AI VQA offers a powerful paradigm for evaluating the visual reasoning capabilities of VLMs [45, 46, 47]. Recently, VQA benchmarks have been developed for robotic tasks such as visual navigation in long-horizon planning [48, 49]. Simulation-based approaches [13, 14, 15] (often utilizing environments like [16, 17, 18]) generate large-scale VQA datasets, but face the persistent sim-to-real domain gap, where results may not hold in reality due to factors like noise, clutter, and lighting variations. Real-world data benchmarks, such as RoboVQA [19] (human-verified Q/A), improve generalization to real-world settings but often involve significant manual annotation effort. These methods typically do not fully automate VQA generation by exploiting the rich spectrum of non-visual modalities (e.g., force, torque, proprioception), limiting their ability to support questions grounded in concepts such as grasp stability or multi-view spatial alignment. In contrast, Robo2VLM reduces the need for manual annotation and enables reasoning about interaction and physical properties that is underexplored in previous VQA benchmarks, such as gripper state, grasp stability, task goals, and spatial information focused on the robot and target objects.

3 Robo2VLM

Robo2VLM generates five-way multiple-choice question answering (MCQ) from real robot teleoperated trajectories.
Robo2VLM offers the following key features: (1) high-quality and representative keyframe selection from long-horizon, in-the-wild, multi-modal robot trajectories, ensuring semantic diversity and relevance; (2) manipulation-centric question generation encompassing spatial, goal-conditioned, and interaction reasoning, each aligned with specific manipulation phases and grounded in corresponding sensor modalities.

We begin by defining a robot trajectory as a time-synchronized sequence of data frames from multiple sensor modalities, both exteroceptive and proprioceptive [50]. Let $T$ denote the length of a trajectory, and let $t \in \{1, 2, \dots, T\}$ index the discrete time steps.

Definition 3.1 (Robot Observation Data Frame). At each time step $t$, the robot data frame is represented as a tuple:
$$\mathcal{D}_t = \left( \mathcal{I}^{\mathrm{RGB}}_t, \mathcal{I}^{\mathrm{Stereo}}_t, \mathbf{p}^{\mathrm{EE}}_t, s^{\mathrm{Gripper}}_t, \mathbf{f}_t \right)$$
where $\mathcal{I}^{\mathrm{RGB}}_t = \{ I^{\mathrm{RGB}}_t \in \mathbb{R}^{H \times W \times 3} \}$ is a set of multi-view RGB images captured from monocular cameras, $\mathcal{I}^{\mathrm{Stereo}}_t = \{ I^{\mathrm{Stereo}}_t \in \mathbb{R}^{2 \times H \times W \times 3} \}$ denotes a set of multi-view stereo image pairs (left and right) if available, $\mathbf{p}^{\mathrm{EE}}_t \in SE(3)$ is the 6-DoF end-effector pose, $s^{\mathrm{Gripper}}_t \in \mathbb{R}$ denotes the scalar gripper state such as gripper aperture, and $\mathbf{f}_t \in \mathbb{R}^6$ is the force-torque vector from the end-effector sensor. The camera images are referred to as exteroceptive sensing, and the end-effector-related states belong to proprioceptive sensing.

Definition 3.2 (Robot Trajectory). A trajectory $\mathcal{T}$ is defined as the temporally ordered sequence of observations $\mathcal{D}_{1:T}$ together with a task description $l$ in natural language:
$$\mathcal{T} = \{ \mathcal{D}_{1:T}, l \}$$

Given a robot trajectory, Robo2VLM (Fig. 2) begins with scene-interaction understanding, applying semantic segmentation and manipulation phase classification to identify key segments (e.g., pre-grasp/approaching, contact, grasp, release). From these, we extract keyframes based on phase transitions, scene coverage, and visibility of objects or the robot across multiple camera views. We use manipulation domain knowledge to design question prototypes that target
core manipulation skills such as spatial relationships, goal conditions, and interaction understanding. Robo2VLM instantiates these prototypes on selected keyframes and transforms them into natural-language multiple-choice questions via a visual-language grounding module that performs question conversion and spatial query projection.

Figure 2: Robo2VLM framework. Robo2VLM generates VQA from multi-modal real-world robot trajectories through (1) manipulation phase classification, (2) keyframe selection guided by scene and interaction cues, and (3) structured VQA question prototypes.

3.1 Scene-Interaction Understanding

Embodied Scene Understanding Given a task description in natural language and all images from different camera views, we first parse the language instruction using an off-the-shelf LLM such as Qwen 2.5 [2] to obtain the {target object}, scene, task, and skill description. For spatial understanding in manipulation, we need to know the relative direction and displacement between the target object and the gripper. From the proprioceptive data, we obtain the ground-truth interaction point of the target object from the robot trajectory data frames.

Manipulation Phase Segmentation To segment robotic manipulation trajectories into semantically meaningful phases, we define a temporal phase classification function based on the sequences of end-effector poses, gripper aperture signals, and force-torque measurements: $\mathbf{p}^{\mathrm{EE}}_{1:T}, s^{\mathrm{Gripper}}_{1:T}, \mathbf{f}_{1:T}$. To align different types of gripper aperture, $s^{\mathrm{Gripper}}_t$ is normalized to $[0, 1]$, where 0 indicates fully open and 1 indicates fully closed.
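To make the observation tuple of Definition 3.1 and the aperture normalization concrete, here is a minimal Python sketch. The container fields and the min-max normalization rule are illustrative assumptions, not the authors' released code:

```python
from dataclasses import dataclass
from typing import List, Optional

import numpy as np


@dataclass
class ObservationFrame:
    """One time step D_t of a trajectory (Definition 3.1); field names are illustrative."""
    rgb: List[np.ndarray]               # multi-view RGB images, each (H, W, 3)
    stereo: List[np.ndarray]            # stereo pairs, each (2, H, W, 3); empty if unavailable
    ee_pose: np.ndarray                 # 6-DoF end-effector pose p_t^EE in SE(3), e.g. a 4x4 matrix
    gripper: float                      # raw scalar gripper state s_t^Gripper (aperture)
    force_torque: Optional[np.ndarray]  # 6-vector f_t, or None if no force-torque sensor


def normalize_aperture(raw: np.ndarray) -> np.ndarray:
    """Map a raw aperture signal to [0, 1]: 0 = fully open, 1 = fully closed.

    Assumes the widest observed aperture corresponds to fully open; this
    min-max convention is an assumption, as the paper only states the target range.
    """
    lo, hi = float(raw.min()), float(raw.max())
    if hi == lo:  # constant signal: treat as fully open
        return np.zeros_like(raw, dtype=float)
    return (hi - raw) / (hi - lo)
```

For example, a raw aperture sequence `[0.08, 0.0, 0.04]` maps to `[0.0, 1.0, 0.5]` under this convention.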
Let $s_t \in [0,1]$ denote the normalized aperture at time $t$, and $\Delta s_t = s_t - s_{t-1}$ its temporal derivative. $\Delta s_t \approx 0$ denotes a small change within a tolerance margin $\epsilon$, typically set to filter out noise. Let $\|\mathbf{f}_t\|$ be the force magnitude (if available). We introduce three threshold parameters: $\tau_g$ (grasp threshold), $\tau_c$ (closure threshold), and $\tau_f$ (force threshold for contact detection). Manipulation processes can be represented as a sequence of discrete phases, including approaching, stabilizing, contacting, releasing, and resetting or transitioning to subsequent actions. We denote the set of phase variables as $\Phi = \{\Phi_{\mathrm{app}}, \Phi_{\mathrm{stab}}, \Phi_{\mathrm{cont}}, \Phi_{\mathrm{rel}}, \Phi_{\mathrm{reset}}, \Phi_{\mathrm{trans}}\}$. Each timestep $t$ is assigned a label $\phi_t \in \Phi$ according to the following temporal logic rules:

$$\phi_t = \begin{cases} \Phi_{\mathrm{app}} & \text{if } s_t < \tau_g \wedge \Delta s_t < -\epsilon \\ \Phi_{\mathrm{stab}} & \text{if } \phi_{t-1} = \Phi_{\mathrm{app}} \wedge s_t < \tau_g \wedge |\Delta s_t| \le \epsilon \\ \Phi_{\mathrm{cont}} & \text{if } \phi_{t-1} = \Phi_{\mathrm{stab}} \wedge s_t \ge \tau_c \wedge |\Delta s_t| \le \epsilon \wedge (\|\mathbf{f}_t\| > \tau_f \vee \text{force unavailable}) \\ \Phi_{\mathrm{rel}} & \text{if } \phi_{t-1} = \Phi_{\mathrm{cont}} \wedge s_t \ge \tau_c \wedge \Delta s_t > \epsilon \\ \Phi_{\mathrm{reset}} & \text{if } \phi_{t-1} = \Phi_{\mathrm{rel}} \wedge s_t < \tau_g \wedge \Delta s_t > \epsilon \\ \Phi_{\mathrm{trans}} & \text{otherwise} \end{cases}$$

The inclusion of force magnitude ensures that passive closure without external contact is not misclassified as active interaction. This multimodal phase-labeling strategy captures both kinematic intent and physical contact, enabling robust segmentation of diverse manipulation behaviors. To enforce a temporally coherent yet flexible phase progression, we define a partial order over the manipulation phases:

$$\Phi_{\mathrm{app}} \prec \Phi_{\mathrm{stab}} \prec \Phi_{\mathrm{cont}} \prec \Phi_{\mathrm{rel}} \prec \Phi_{\mathrm{reset}} \rightarrow \Phi_{\mathrm{app}}$$

This structure enforces unidirectional transitions along the phase chain, while allowing both phase skipping (e.g., directly from $\Phi_{\mathrm{app}}$ to $\Phi_{\mathrm{cont}}$) and looping from the terminal phase $\Phi_{\mathrm{reset}}$ back to the initial phase $\Phi_{\mathrm{app}}$, which is common in sequential manipulation routines. At each time step $t$, the phase label must satisfy $\phi_t \succeq \phi_{t-1}$, or $\phi_t = \Phi_{\mathrm{app}}$ if $\phi_{t-1} = \Phi_{\mathrm{reset}}$, ensuring temporal monotonicity or task repetition without reversal. The auxiliary state $\Phi_{\mathrm{trans}}$ is used for ambiguous, missing, or conflicting observations where no confident assignment is possible. This symbolic, temporally-constrained model supports robust segmentation of complex manipulation behaviors under noisy or partially missing sensory input.

Table 1: Categorization of visual reasoning questions for robotic manipulation, with manipulation phase (color-coded in the original: Approach, Stabilize, Contact, Release, Reset) and data modality context.

Capabilities | Question Prototype | Sensor Modality
Spatial Reasoning: Object State | Is the {target object} reachable by the robot? | I_t^RGB, D_t
Spatial Reasoning: Spatial Relationship | What's the relative direction in 3-D between the end effector and the {target object}? | I_t^RGB, p_t^EE
Spatial Reasoning: Scene Understanding | Which point is closer to the camera viewing the scene? | I_t^RGB, I_t^Stereo
Spatial Reasoning: Multiple View | Which point in the right-side image corresponds to the point in the left-side image? | I_t^Stereo
Goal-conditioned Reasoning: Task State-success | Has the robot successfully completed the task? | I_t^RGB
Goal-conditioned Reasoning: Task State-goal | What is the goal configuration for {interaction}? | I_t^RGB, p_t^EE
Goal-conditioned Reasoning: Action Understanding | The robot is {interaction}. What is the robot's current action phase? | I_t^RGB, T_{1:t}
Goal-conditioned Reasoning: Interaction Phase | What will the robot do next? | I_t^RGB, dp_t^EE/dt
Goal-conditioned Reasoning: Trajectory Understanding | What task does this trajectory likely accomplish? | I_t^RGB, p_t^EE
Interaction Reasoning: Task State-grasp | Is this a stable grasp? | I_t^RGB, f_t
Interaction Reasoning: Robot State | Is the robot gripper currently open? | I_t^RGB, s_t^Gripper

3.2 Visual Question Prototype

We design a set of visual question prototypes, each of which aligns with the robot capabilities required for manipulation task completion and anchors to distinct manipulation phases, as illustrated in Table 1. These prototypes are organized into three reasoning categories.

Spatial Reasoning focuses on the robot's understanding of object geometry, reachability, and spatial layout across viewpoints.
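Before turning to the individual categories, the temporal-logic phase rules of Section 3.1 can be expressed as a small step-wise classifier. This is a minimal sketch; the threshold values for tau_g, tau_c, tau_f and the tolerance eps are illustrative assumptions, as the paper does not report concrete settings:

```python
# Sketch of the phase-labeling rules in Section 3.1.
# TAU_G (grasp), TAU_C (closure), TAU_F (contact force) and EPS are assumed values.
TAU_G, TAU_C, TAU_F, EPS = 0.3, 0.7, 1.0, 0.01


def label_step(prev, s_t, ds_t, f_mag=None):
    """Label one time step.

    prev: previous phase label; s_t: normalized aperture in [0, 1];
    ds_t: temporal derivative s_t - s_{t-1};
    f_mag: force magnitude ||f_t||, or None if force sensing is unavailable.
    """
    contact_ok = (f_mag is None) or (f_mag > TAU_F)  # "force unavailable" counts as contact
    if s_t < TAU_G and ds_t < -EPS:
        return "app"    # approaching
    if prev == "app" and s_t < TAU_G and abs(ds_t) <= EPS:
        return "stab"   # stabilizing
    if prev == "stab" and s_t >= TAU_C and abs(ds_t) <= EPS and contact_ok:
        return "cont"   # contacting
    if prev == "cont" and s_t >= TAU_C and ds_t > EPS:
        return "rel"    # releasing
    if prev == "rel" and s_t < TAU_G and ds_t > EPS:
        return "reset"  # resetting
    return "trans"      # ambiguous, missing, or conflicting observations


def label_trajectory(s, f_mags=None):
    """Label a whole normalized-aperture sequence s_1..s_T."""
    labels, prev = [], "trans"
    for t in range(1, len(s)):
        f = None if f_mags is None else f_mags[t]
        prev = label_step(prev, s[t], s[t] - s[t - 1], f)
        labels.append(prev)
    return labels
```

Note that the rules are checked in the order given in the paper, so the previous label gates every phase except approaching and the fallback transitional state.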
Questions such as "Is the object reachable?" or "What's the relative direction between the gripper and the object?" are grounded in the early approach and stabilize stages. These rely on RGB, depth, stereo, and 3D gripper pose data, which together enable accurate localization and spatial inference across frames or views.

Goal-conditioned Reasoning probes the agent's high-level understanding of tasks, including goal inference, future action prediction, and overall task success. Questions such as "Has the task failed?", "What will the robot do next?", and "What is the robot's current action phase?" span multiple manipulation phases from approach through reset. These require temporal context, pose estimation, and sometimes motion history, leveraging the multi-step evolution of the scene.

Interaction Reasoning focuses on physical interaction dynamics, such as grasp stability or the robot's current actuator state. These questions occur during the stabilize, contact, and release phases, and depend on RGB, tactile, or gripper aperture signals. For instance, the question "Is this a stable grasp?" may depend on contact force readings or inferred object displacement.

The ground truth of each question is grounded in observations from multiple sensor modalities. We design the incorrect answers as part of the visual question prototypes. For example, in scene understanding, we require the sampled points to differ in depth from other points by more than the depth sensor's inaccuracy. In action understanding, the correct action arrow differs significantly from the distractor arrows by having
a large angular separation in the projected 2D image. To detect guessing by hallucination, we randomly replace some correct answers with a "None of the Above" option.
Figure 3: Distribution and key statistics of Robo2VLM-1 dataset. (Left) Robo2VLM-1 covers diverse scenes, with the most frequent scenes in office (33.6%), lab (25.3%), and kitchen (16.9%).
(Middle) Robo2VLM-1 covers tasks including common manipulation actions such as pick (21.5%), put (20.6%), and move (9.9%). (Right) The table summarizes key dataset statistics, including question characteristics, answer choices, and image resolutions.

3.3 Keyframe Selection

Given that raw robotic trajectories often contain hundreds of frames sampled at high frequency, using all frames is computationally expensive and can introduce redundancy due to minimal temporal variation. Moreover, many intermediate frames are visually or semantically uninformative for downstream reasoning tasks.
To address this, we select a compact set of keyframes that retain essential semantic and visual cues while reducing redundancy and data volume. These keyframes are extracted from the multi-modal robot trajectory $\mathcal{T} = \{\mathcal{D}_t\}_{t=1}^{T}$ based on manipulation phase transitions, scene coverage diversity, and context visibility.

4 Robo2VLM-1 Dataset

Table 2: Trajectories and sensing modalities across datasets with a total of 176k trajectories. # Traj: number of trajectories; Prop: joint-state proprioception; Dpth: depth images; GripAp: gripper-aperture signal; # VQA: number of questions. ✓ denotes the modality is available, ✗ denotes absent.

Dataset | # Traj | Prop | Dpth | GripAp | # VQA
DROID [31] | 92k | ✓ | ✓ | ✓ | 299k
Fractal [28] | 73k | ✓ | ✗ | ✓ | 267k
Kuka MM [34] | 3k | ✓ | ✓ | ✓ | 25k
Autolab [35] | 896 | ✓ | ✓ | ✓ | 22k
Sirius [36] | 600 | ✓ | ✗ | ✓ | 21k
MVP [37] | 480 | ✓ | ✗ | ✓ | 8k
VINN [38] | 435 | ✗ | ✗ | ✗ | 34
Fanuc [39] | 415 | ✓ | ✗ | ✓ | 11k
TableTop [41] | 110 | ✓ | ✗ | ✓ | 5k
VIOLA [42] | 135 | ✓ | ✗ | ✓ | 8k
BUDS [43] | 50 | ✓ | ✗ | ✓ | 6k
ROT [44] | 14 | ✓ | ✗ | ✓ | 245

Open X-Embodiment and its datasets Open X-Embodiment [24] is a major collaborative research initiative that aggregates robotic demonstration data collected from 22 different robot embodiments across 35 research labs worldwide, encompassing over 1 million trajectories covering more than 500 skills. Applying domain reweighting [25], we select a subset focusing on manipulation with real robot embodiments. In total, we use 13 datasets [31, 29, 32, 33, 34, 35, 36, 37, 39, 40, 41, 42, 43, 38, 44] with a total of 176,139 trajectories. While most modalities are included in the Open X-Embodiment release, we manually include modalities introduced by the original papers. For example, the DROID dataset [31] includes camera calibration information and stereo depth. The detailed modality inclusion can be found in Table 2.

Robo2VLM for Open X-Embodiment We use Robo2VLM to process each robot trajectory from the Open X-Embodiment dataset by selecting and interpreting the scenes.
The entire process takes 2935.7 GPU hours on Nvidia A100 GPUs. For each selected keyframe, Robo2VLM instantiates questions from embodied question templates, resulting in the generation of a pool of over 3 million VQA items.

Table 3: Performance comparison of multimodal foundation models on OpenX-VQA benchmark categories (%). Upper part: zero-shot. Lower part: with CoT prompting. (Columns RS–MV: Spatial Reasoning; TS-G–TS-GL: Goal Reasoning; AU–TU: Interaction Reasoning.)

Model | Overall | RS | OS | SR | SU | MV | TS-G | TS-S | TS-GL | AU | IP | TU
Zero-Shot
LLaVA 1.5-7B | 21.58 | 35.32 | 23.87 | 16.08 | 17.78 | 17.50 | 31.82 | 23.79 | 19.03 | 20.30 | 21.74 | 22.37
LLaVA 1.6 Mistral-7B | 24.09 | 30.31 | 35.13 | 19.42 | 20.24 | 19.29 | 34.20 | 30.77 | 19.52 | 18.67 | 20.70 | 22.83
LLaVA 1.6-34B | 24.94 | 26.66 | 29.75 | 21.47 | 23.18 | 17.86 | 29.19 | 29.40 | 17.90 | 19.49 | 36.98 | 30.59
Llama 3.2-90B | 28.60 | 31.94 | 55.87 | 18.51 | 26.61 | 16.43 | 28.23 | 35.27 | 8.06 | 18.13 | 51.56 | 49.77
Qwen 2.5 VL-7B | 30.63 | 41.68 | 55.63 | 21.55 | 24.38 | 17.32 | 33.01 | 42.57 | 7.82 | 25.71 | 46.61 | 39.73
Qwen 2.5 VL-32B | 37.68 | 49.39 | 71.37 | 21.85 | 28.53 | 17.50 | 34.21 | 55.08 | 12.90 | 30.45 | 63.80 | 49.32
Qwen 2.5 VL-72B | 37.76 | 38.84 | 85.00 | 22.31 | 28.23 | 15.71 | 28.47 | 51.89 | 10.08 | 33.96 | 71.09 | 54.79
CoT Reasoning
LLaVA 1.5-7B | 21.61 | 28.28 | 21.00 | 17.37 | 20.90 | 18.93 | 25.36 | 24.19 | 21.53 | 21.24 | 20.31 | 20.09
LLaVA 1.6 Mistral-7B | 24.05 | 27.60 | 38.87 | 17.15 | 20.18 | 22.32 | 25.84 | 28.03 | 18.47 | 18.40 | 30.60 | 29.68
LLaVA 1.6-34B | 23.49 | 20.43 | 31.00 | 21.24 | 22.88 | 20.36 | 18.18 | 26.14 | 16.77 | 21.79 | 35.16 | 26.94
Llama 3.2-90B | 30.45 | 32.34 | 79.87 | 13.35 | 26.37 | 18.57 | 29.90 | 29.14 | 14.27 | 19.76 | 59.24 | 44.75
Qwen 2.5 VL-7B | 34.82 | 38.02 | 90.00 | 21.78 | 23.30 | 16.79 | 36.84 | 46.48 | 18.39 | 28.15 | 42.71 | 36.99
Qwen 2.5 VL-32B | 41.30 | 48.85 | 90.50 | 18.82 | 29.19 | 19.82 | 35.17 | 60.43 | 18.71 | 32.21 | 71.35 | 49.32
Qwen 2.5 VL-72B | 39.52 | 44.79 | 92.37 | 18.36 | 29.73 | 13.39 | 29.19 | 55.28 | 13.15 | 36.13 | 74.09 | 46.12

Category abbreviations. Spatial Reasoning: RS: Robot State (gripper/arm position estimation), OS: Object State (object reachability/manipulability), SR: Spatial Relationship (relative positioning between robot and objects), SU: Scene Understanding (spatial layout comprehension), MV: Multiple View (cross-view correspondence). Goal-Conditioned Reasoning: TS-G: Task State-grasp (grasp stability assessment), TS-S: Task State-success (task completion status), TS-GL: Task State-goal (goal configuration understanding). Interaction Reasoning: AU: Action Understanding (robot's current action phase), IP: Interaction Phase (prediction of next robot action), TU: Trajectory Understanding (overall task interpretation).

Robo2VLM-1 Curation Inspired by data optimization paradigms such as domain reweighting in natural language processing [25] and robot policy learning [26], our curation process aims to balance the distribution of questions across diverse scene and task types.
It selects a representative and high-quality subset of questions that balances diversity across scenes, tasks, skills, and reasoning types, while ensuring clarity and unambiguous ground truth. In total, Robo2VLM-1 contains 684,710 questions, spanning 463 distinct real-world scenes, 3,396 unique robotic manipulation tasks, and 149 different manipulation skills.

5 Experiment

In this section, we sample 60k VQA items from Robo2VLM-1, with a 50k training set (Robo2VLM-1-Train) and a 10k testing set (Robo2VLM-1-Test). We mainly study two research questions: (1) How does the Robo2VLM-1-Train dataset improve the spatial and interaction reasoning capabilities of VLMs? (2) How effectively does Robo2VLM-1-Test evaluate VLMs on these reasoning tasks?

Evaluation Setup. We benchmark state-of-the-art open-source models in different configurations, including LLaVA, Llama 3.2 Vision, and Qwen2-VL/Qwen2.5-VL. Each model is evaluated under both zero-shot and Chain-of-Thought (CoT) prompting settings. For CoT, we follow the prompting strategy from [11] by appending the following instruction to the end of each question: "Reason step by step about the answer, and show your work, for each step. Only after that, proceed
to the final answer." We run Llama-3.2-3B-Instruct alongside the evaluated models to extract the final letter answer from each model's output. We focus fine-tuning on the language layers (both attention and MLP modules) while keeping the vision layers frozen. For each configuration, we use a random sample of 2,000 questions from the testing set. For consistency, all models are evaluated with a temperature of 0.7, a maximum completion length of 4096 tokens, and an overall context length of 10240 tokens. All models use their vision or vision-instruct version with float16 precision, and all are evaluated on 8 Nvidia A100 GPUs with 80GB memory. We use LoRA to fine-tune LLaVA 1.6 with rank 128 and alpha 256.

Figure 4: Fine-tuning LLaVA 1.6 with increasing training data from Robo2VLM-1, from 10k to 50k VQA items. Accuracy improves in almost all categories compared to no fine-tuning.

5.1 Benchmark with Robo2VLM-1

Table 3 presents a detailed comparison of vision-language foundation models on the Robo2VLM-1 benchmark, evaluated under both zero-shot and Chain-of-Thought (CoT) prompting conditions. The results reveal nuanced interactions across model architecture, scale, and reasoning strategy.

Cross-Model Performance: Evaluation on Robo2VLM-1-Test suggests that Qwen models achieve higher overall accuracy than other VLMs of the same configuration, which aligns with observations from other VQA benchmarks such as [51, 52]. Qwen 2.5 VL-72B achieves the highest zero-shot accuracy at 37.76%, while Qwen 2.5 VL-32B achieves 41.30% overall accuracy in the CoT setting.
Qwen models particularly excel in object-centric categories such as Object State, where Qwen 2.5 VL-72B reaches 85.00% (zero-shot) and 92.37% (CoT), and Interaction Phase, where the 72B model reaches 71.09% zero-shot and 74.09% with CoT.

Impact of Model Scale: Zero-shot accuracy generally improves with model size, rising from 30.63% (Qwen 7B) to 37.76% (Qwen 72B). However, this trend does not hold in the CoT setting, where the 32B model surpasses the 72B model (41.30% vs. 39.52%). This observation aligns with the official Qwen2.5 technical report [2], which notes that the mathematical and problem-solving capabilities of Qwen2.5-VL-32B are further enhanced through reinforcement learning. Llama models display a different trend: while the 11B model outperforms the 90B version in the zero-shot setting, the larger model benefits more under CoT prompting, suggesting that scaling may unlock latent capabilities only when paired with explicit reasoning support.

Effectiveness of CoT Prompting: CoT prompting generally enhances performance for both Qwen and Llama models. For example, Qwen 2.5 VL-7B improves from 30.63% to 34.82%, and Llama 3.2-90B improves from 28.60% to 30.45%. The most substantial gain is observed for Qwen 2.5 VL-32B, which improves from 37.68% to 41.30%. The results suggest that CoT particularly benefits Task State-Success (from 55.08% to 60.43%) and Interaction Phase (from 63.80% to 71.35%). However, in the Spatial Relationship category, for example, Qwen 32B's accuracy drops from 21.85% to 18.82%, indicating that verbose reasoning chains may introduce noise in tasks requiring precise spatial localization.

5.2 Finetuning with Robo2VLM-1

We perform fine-tuning experiments using Robo2VLM-1-Train and evaluate on Robo2VLM-1-Test, increasing the amount of training data from 10k to 50k samples. As depicted
in Figure 4, increasing the fine-tuning data generally leads to notable performance gains across most VQA categories. Significant gains are observed in Object State understanding, where accuracy improves from 29.34% to 80.24%; Task State-Success also sees a substantial rise, from 47.65% to 68.03%. Other categories demonstrate clear positive trends with more data. However, in some categories such as Spatial Relationship and Task State-Goal, fine-tuning with limited data (e.g., 10k) underperforms the no-fine-tuning baseline. This may be because the model has not yet seen enough task-specific examples to begin generalizing, or because the question formats in Robo2VLM-1 differ from those seen during pretraining, requiring adaptation time. In some categories, fine-tuning with Robo2VLM-1 does not improve performance due to the limited reasoning capability of the base model. This is also reflected in the fact that LLaVA shows performance degradation under CoT prompting in Table 3. The Interaction Phase question requires the model to predict the next frame, demanding complex reasoning and making it a particularly challenging problem. This suggests that for complex tasks, the base model's language performance is important for further improvement with Robo2VLM-1.

5.3 Comparison with Human Performance

Figure 5: Comparison of human performance to different multimodal foundation models.

We conducted a human evaluation covering all 11 categories defined in Table 3. For each category, a human evaluator was asked to answer randomly sampled questions from Robo2VLM-1-Test.
We use the average success rate as a reference for comparison with three models (LLaVA 1.6-7B, LLaVA 1.6-7B-Finetuned, and Qwen 2.5 VL-32B with CoT) on the same set of categories, as shown in Figure 5. Qwen 2.5 VL-32B with CoT achieves near-human accuracy, with 90.5% in Object State compared to 96.7% for humans, and 71.35% in Interaction Phase versus the human score of 80.0%. In more complex spatial reasoning tasks such as Spatial Relationship, where humans achieve 60.0% accuracy, the best model (fine-tuned LLaVA 1.6-7B) reaches only 19.42%. This may suggest that, even when multiple views are available, a monocular image may lack the full depth information needed to accurately determine spatial relationships. Furthermore, fine-tuning enhances model performance: LLaVA 1.6-7B fine-tuned on the Robo2VLM-1 training dataset shows consistent improvements across multiple categories, particularly in Task State, Object State, and Trajectory Understanding, compared to its non-fine-tuned counterpart. These findings demonstrate the potential of Robo2VLM-1 for studying and narrowing the gap between model and human performance in spatial and task reasoning.

6 Conclusion and Discussion

In this paper, we introduce Robo2VLM, a framework that generates VQA grounded in robot sensory modalities. We apply Robo2VLM to 176k real robot trajectories from Open X-Embodiment and curate Robo2VLM-1, a comprehensive dataset of 684,710 questions covering 463 distinct scenes, 3,396 robotic manipulation tasks, and 149 manipulation skills. Evaluation of state-of-the-art open-source VLMs suggests that some VLMs, such as Qwen2.5 VL 32B with CoT prompting, can achieve near-human performance on questions related to object reachability and interaction
understanding, while a significant gap to humans remains in reasoning about fine-grained spatial relationships and interactions. Evaluation also suggests that fine-tuning on the Robo2VLM-1 dataset improves spatial and interaction reasoning. Future work will focus on generalizing Robo2VLM to a wider range of robot embodiments and generating an even more diverse dataset. We also plan to explore the deployment of models trained on Robo2VLM-1 in real-world robotic tasks.

Limitation. We acknowledge that Robo2VLM is a data generation framework that relies on the quality of the input tele-operated trajectories. If an original trajectory is wrongly calibrated, the quality of the generated VQA data is compromised; if it lacks certain embodiment sensory modalities, as in NYU VINN [38] (0.2% of the 176k trajectories), the number of questions that Robo2VLM can generate is limited.

Acknowledgement. This research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab. This work is supported in part by donations from Google.

References

[1] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR, 18–24 Jul 2021.
[2] Qwen Team. Qwen2.5: A party of foundation models, September 2024.
[3] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[4] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in Neural Information Processing Systems, 36:34892–34916, 2023.
[5] Anthropic. Claude 3.5 Sonnet. https://www.anthropic.com/news/claude-3-5-sonnet, June 2024.
[6] OpenAI. GPT-4o System Card. https://openai.com/index/gpt-4o-system-card/, August 2024.
[7] Koray Kavukcuoglu. Gemini 2.5: Our most intelligent AI model. https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/, March 2025.
[8] Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Dorsa Sadigh, Leonidas Guibas, and Fei Xia. SpatialVLM: Endowing vision-language models with spatial reasoning capabilities. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14455–14465, June 2024.
[9] Siddharth Karamcheti, Suraj Nair, Ashwin Balakrishna, Percy Liang, Thomas Kollar, and Dorsa Sadigh. Prismatic VLMs: Investigating the design space of visually-conditioned language models, 2024.
[10] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, and Chelsea Finn. OpenVLA: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246, 2024.
[11] Gemini Robotics Team, Saminda Abeyruwan, et al. Gemini Robotics: Bringing AI into the physical world, 2025.
[12] Lucy Xiaoyang Shi, Brian Ichter, Michael Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, Adrian Li-Bell, Danny Driess, Lachy Groom, Sergey Levine, and Chelsea Finn. Hi Robot: Open-ended instruction following with hierarchical vision-language-action models, 2025.
[13] Md Mofijul Islam, Alexi Gladstone, Riashat Islam, and Tariq Iqbal.