omit). 3.1.2 Build a Multi-modal Knowledge Base. With the logs, performance metrics and bug descriptions retrieved from the prior step, we can build a knowledge base that associates related information. The idea behind the knowledge base is to do up-front work so that, later, we can quickly find similar bugs by compar...
https://arxiv.org/abs/2505.21419v2
contains 3072-dimensional vectors of 32-bit floating-point numbers, and we embed the logs using the "text-embedding-3-large" model from OpenAI. The preprocessed performance metrics are represented as 21-dimensional vectors of 32-bit floating-point numbers. The knowledge base in ARCA is made up of three object stores, with one...
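The nearest-neighbor lookup that such a knowledge base enables can be sketched with cosine similarity over stored vectors. This is a minimal sketch, not ARCA's implementation: the mock vectors below merely stand in for real text-embedding-3-large outputs, and `cosine_top_k` is a hypothetical helper.

```python
import numpy as np

def cosine_top_k(query: np.ndarray, store: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k stored vectors most similar to the query."""
    q = query / np.linalg.norm(query)
    s = store / np.linalg.norm(store, axis=1, keepdims=True)
    sims = s @ q                      # cosine similarity against every stored bug
    return np.argsort(-sims)[:k]      # highest similarity first

rng = np.random.default_rng(0)
store = rng.standard_normal((100, 3072)).astype(np.float32)  # mock 3072-dim log embeddings
query = store[42] + 0.01 * rng.standard_normal(3072).astype(np.float32)
print(cosine_top_k(query, store)[0])  # the perturbed copy of bug 42 ranks first
```

In high dimensions, random vectors are nearly orthogonal, so a lightly perturbed copy of a stored embedding dominates the similarity ranking.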
far fewer clusters than for the raw log (the left picture): evidence that this step achieved its goals. We additionally colored the dots in both images to signify root-cause labels. As we can easily see, the dots from the same root cause, i.e., memory, CPU and network, are correctly clustered after preprocessing but ...
so that corresponding SREs can chime in. For example, if a bug seems to be CPU-related, it could be assigned to SREs working on performance issues, ones working on scheduling, and ones investigating disruptions associated with locking. With just a small number of approximate matches we might miss some relevant categori...
ARCA's ability to identify similar prior incidents may be helpful to SREs even if its proposed mitigation plans are flawed. To obtain recommendations with a natural tone and style, ARCA uses gpt-4o for both the evaluation and generation LLM stages. 4 EXPERIMENTAL RESULTS To evaluate our work, we first build a data set...
module. This is also the size of the output from the triage step, so we compare both the triage accuracy and the system accuracy. For a triage operation to be accurate, ARCA needs to include the closest bug in the output of the triage step. For the whole system to be accurate, ARCA needs to pick the labeled closest bu...
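The two notions of accuracy can be made concrete in a small sketch (hypothetical helper names, not the paper's code): triage succeeds if the labeled closest bug survives into the candidate set at all, while the end-to-end system succeeds only if that bug is the single final pick.

```python
def triage_accurate(triage_output: list[int], closest_bug: int) -> bool:
    # Triage succeeds if the labeled closest bug survives into the candidate set.
    return closest_bug in triage_output

def system_accurate(final_pick: int, closest_bug: int) -> bool:
    # The end-to-end system succeeds only if its single final pick is that bug.
    return final_pick == closest_bug

print(triage_accurate([7, 12, 3], 12))  # True: bug 12 made the candidate set
print(system_accurate(7, 12))           # False: the final pick missed it
```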
Table 1: Comparison of efficacy when utilizing different modes of data.

Telemetric Data Only: 0.34 / 2.81 / 4.67
Log Only: 0.72 / 2.31 / 4.16
Telemetric Data + Log: 0.74 / 2.89 / 4.89

4.4 ARCA as a Log Clustering Tool. Much like log clustering tools, ARCA's RAG-LLM-based log processing module can be used alone to detect anoma...
2024. Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv:2312.10997 [cs.CL] https://arxiv.org/abs/2312.10997 [6] Hongcheng Guo, Jian Yang, Jiaheng Liu, Jiaqi Bai, Boyang Wang, Zhoujun Li, Tieqiao Zheng, Bo Zhang, Junran Peng, and Qi Tian. 2024. LogFormer: A Pre-train and Tuning Pipeline for Log A...
Siyuan Zhuang, Zhanghao Wu, Ion Stoica, et al. 2024. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. In Proceedings of the 37th International Conference on Neural Information Processing Systems (New Orleans, LA, USA) (NIPS '23). Curran Associates Inc., Red Hook, NY, USA, Article 2020, 29 pages. [23] Jiemin...
arXiv:2505.21420v1 [cs.CV] 27 May 2025. PAPER SUBMITTED. Mentor3AD: Feature Reconstruction-based 3D Anomaly Detection via Multi-modality Mentor Learning. Jinbao Wang1, Hanzhe Liang1, Can Gao1, Chenxi Hu1, Jie Zhou1, Yunkang Cao2, Linlin Shen1, Weiming Shen3,†. 1Shenzhen University, 2Hunan University, 3Huazhong University o...
https://arxiv.org/abs/2505.21420v1
leads to poor performance in the final scoring stage. Additionally, shape-guided [15] relies on feature alignment. Although this method attempts to leverage complementary information between modalities, it may not fully exploit it, which negatively impacts subsequent detection performance. Therefore, a key issue is pos...
from the features of the training set, and by comparing the difference between the features to be tested and the normal feature distribution at the time of testing [25], [26]. Reg3D-AD [8] utilizes a registration-based approach and feature memory banks to preserve critical details essential for anomaly detection, thoug...
challenging, as evidenced by the difficulty of cross-modality reconstruction due to significant differences in feature distribution between modalities, leading to poor discrimination. This paper proposes an approach that uses the mentor modality to address this problem, leading to better anomaly detection.
scoring maps are created. Finally, these scoring maps are fed into the Voting Module (VM) to generate the anomaly scoring map. C. Mentor of Fusion Module. It is essential to leverage the shared features between two modalities, especially when reconstructing a modal feature into another modality. Utilizing the shar...
˜F_Mtr. Reconstruction differences yield scoring maps, which are processed by the Voting Module (VM) to produce the final score. The fusion of RGB modal feature maps with point cloud feature maps facilitates a self-supervised approach to feature fusion. The loss function can be expressed as follows: L_con = F^{(i,j)}_{RGB} · ...
based on the cosine similarity between the original feature F_RGB and the reconstructed feature ˜F_RGB, as illustrated below:

L_cos = 1 − (Σ_{i=1}^{n} ˜F_{RGB,i} F_{RGB,i}) / ( √(Σ_{i=1}^{n} ˜F²_{RGB,i}) · √(Σ_{i=1}^{n} F²_{RGB,i}) ),   (4)

where ˜F_{RGB,i} and F_{RGB,i} represent the i-th components of the vectors ˜F_RGB and F_RGB, respectively, and n denotes the dimensionality...
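Eq. (4) is one minus the cosine similarity between the original and reconstructed features. A minimal numpy sketch, assuming flattened feature vectors:

```python
import numpy as np

def cosine_recon_loss(f_hat: np.ndarray, f: np.ndarray) -> float:
    """Eq. (4): one minus the cosine similarity of reconstruction and original."""
    return 1.0 - float(np.dot(f_hat, f)) / float(np.linalg.norm(f_hat) * np.linalg.norm(f))

f = np.array([1.0, 2.0, 3.0])
print(cosine_recon_loss(f, f) < 1e-9)               # perfect reconstruction: loss ~ 0
print(abs(cosine_recon_loss(-f, f) - 2.0) < 1e-9)   # opposite direction: loss ~ 2
```

The loss ranges over [0, 2]: 0 for perfectly aligned features, 2 for perfectly opposed ones.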
S that reflects the overall evaluation across multiple reconstruction disparities. The function f can be expressed as follows:

f = C^U( C^L(S_Input) ),   (7)

where S_Input represents the fraction to be calculated, C represents the convolution, and its superscripts U and L represent convolutions at different depths. Then we us...
(XYZ+RGB): 0.981 0.965 0.920 0.951 0.950 0.978 0.982 0.983 0.981 0.980 0.967
Mentor3AD: 0.981 0.976 0.982 0.958 0.966 0.975 0.983 0.983 0.982 0.989 0.978
Point Cloud Methods (3D+RGB), AUPRO@1%:
BTF (CVPR23): 0.428 0.365 0.452 0.431 0.370 0.244 0.427 0.470 0.298 0.345 0.383
AST (WACV23): 0.388 0.322 0.470 0.411 0.328 0.275 0.474 0.48...
are anomalous. Eyecandies is also an RGB and 3D dataset, containing 10,000 normal data pairs as training samples [39]. As existing methods use different benchmarks, e.g., some methods use only part of the normal data for training, while others use all of the data, this may have implications [14], [15], [30]. For a fair co...
detailed characterization of each patch. This process produces a feature map with dimensions of 28×28×768. All training was conducted on a server equipped with a single NVIDIA A100-PCIE-40GB and a 64-core Intel Xeon Silver 4314 processor. To ensure consistent speed comparison criteria, tests were implemented on a s...
enhances sample-level scoring. Our experiments confirm that the proposed voting strategy, which balances scoring at both the sample and pixel levels, achieves the best results across all 3DAD metrics. This success is attributed to the complementary strengths of the different modalities: RGB detects surface anomalie...
BEST RESULTS ARE IN BOLD.

Category | BTF | M3DM | Mentor3AD
Duck 1 | 71.0/72.3 | 83.3/76.2 | 98.9/93.4
Duck 2 | 76.2/60.4 | 68.3/59.9 | 93.7/89.6
Duck 3 | 63.7/74.3 | 71.0/83.4 | 82.4/90.2
Means | 70.3/69.0 | 74.2/73.2 | 91.7/91.1

D. Actual Inspection on Industry Objects. We conducted detection experiments on real industrial products to evaluate ...
Zhang, “A survey on RGB, 3D, and multimodal approaches for unsupervised industrial image anomaly detection,” Information Fusion, vol. 121, p. 103139, 2025. [5] J. Liu, G. Xie, J. Wang, S. Li, C. Wang, F. Zheng, and Y. Jin, “Deep industrial image anomaly detection: A survey,” Machine Intelligence Research, vol. 21, p...
Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” 2021. [20] V. Zavrtanik, M. Kristan, and D. Skočaj, “DRAEM – A discriminatively trained reconstruction embedding for surface anomaly detect...
Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson, “Estimating the support of a high-dimensional distribution,” Neural Computation, vol. 13, no. 7, pp. 1443–1471, 2001. [38] P. Bergmann, X. Jin, D. Sattlegger, and C. Steger, “The MVTec 3D-AD dataset for unsupervised 3D anomaly detection and loc...
arXiv:2505.21426v1 [cs.AI] 27 May 2025. Learning Individual Behavior in Agent-Based Models with Graph Diffusion Networks. Francesco Cozzi, Sapienza University, Rome, Italy / CENTAI, Turin, Italy, francesco.cozzi@centai.eu. Marco Pangallo, CENTAI, Turin, Italy, marco.pangallo@centai.eu. Alan Perotti, CENTAI, Turin, Italy, alan.perott...
https://arxiv.org/abs/2505.21426v1
“die,” and pink for “reproduce.” The bottom panel shows the data generation phase: given a new observed state, the trained surrogate can simulate plausible future states, effectively mimicking the original ABM’s generative behavior. of individual agents to data. One potential approach is to manually construct a probabi...
describing the ecological dynamics between two interacting species, with one acting as predator and the other as prey, similarly to the Lotka-Volterra equations. In this ABM, Z_t includes agent position and type (prey/predator), while Θ governs the probabilities to move, reproduce or die. This model replicates the cyclical...
generation of graphs, while our architecture learns to generate random samples that are conditioned on information found on a graph. To the best of our knowledge, our work is the first application of this generative framework to individual behavior modeling in simulation systems such as ABMs. 3 Methods. Denoting the ...
Z_{t+1}^{(i)}. The consecutive application of GDN_{ϕ,ω} allows us to reproduce the behavior of the original model. We now describe each of these components in detail. Message-passing GNN. The GNN operates on the provided interaction graph G_t = (A, E_t), which we assume to be known or computable from Z_t (e.g., in the Schelling ...
observed in the ramification data (see Algorithm 1). At each training iteration, it uniformly samples a time index t and extracts the conditioning pair (Z_t^{(i)}, {Z_t^{(j)}}) from the main branch Z_t[0]. We compute the interaction embedding g_t^{(i)} via Equation (3), then draw a diffusion step τ to form the condition vector c...
models. We evaluate our approach on the two ABMs described in Section 2 as case studies. The first is the Schelling segregation model, in which n agents occupy cells on a two-dimensional grid. Each agent has a fixed binary “color” and a position on the grid. At each timestep, an agent is considered happy if the proport...
models. Macro evaluation metrics. Next, we test whether agent-level predictions translate into faithful reproduction of emergent, system-level behavior. For each model, we track a summary statistic over time: the number of happy agents in Schelling, and the number of active (i.e. Alive and Pregnant) agents in the preda...
[Figure 4 panels: box plots of EMD and sMAPE.] Figure 4: Errors obtained by the proposed approach (Surrogate) and by the naive baseline (Ablation) in four different ta...
summarizes the results of our experiments: the left panel shows the microscopic evaluation of both our surrogate model and the ablated variant, while the right panel presents the macroscopic evaluation results. For the Schelling model, we observe that, on the micro level, the surrogate's mean EMD is lower than the ab...
the interaction graph is assumed to be fully known. Future work might remove this limitation by estimating such a graph directly from available data. However, the estimation of a latent interaction graph is a follow-up challenge, for which our GNN-based approach represents a necessary first step. Second, highly sophis...
versus density-limited predator-prey dynamics on different spatial scales. Proceedings of the Royal Society of London. Series B: Biological Sciences , 246(1316):117–122, 1991. [9]Douglas D Donalson and Roger M Nisbet. Population dynamics and spatial scale: effects of system size on population persistence. Ecology , 80(...
, 91:391–408, 2018. [25] Corrado Monti, Gianmarco De Francisci Morales, and Francesco Bonchi. Learning Opinion Dynamics from Social Traces. In ACM KDD, pages 764–773, 2020. [26] Corrado Monti, Marco Pangallo, Gianmarco De Francisci Morales, and Francesco Bonchi. On learning agent-based models from data. Scientific Re...
Supplemental Material. A Neural models and training details. In this section, we provide a detailed overview of the core components of the Graph Diffusion Network (GDN) and its methodology. We begin by introducing the diffusion process (A.1), which defines the process to be reversed in order to generate futu...
agent state Z_t^{(i)} as node features. The messages correspond to the node features and are aggregated by an aggregation function such as sum or mean. The choice of the aggregation function depends on the ABM to be reproduced. In general, sum is a suitable choice, as the MLP f_ω will capture the behavior rules of the...
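One message-passing round with sum aggregation can be sketched as a matrix product. This toy version omits the learned MLP f_ω and uses scalar node states; it only illustrates the aggregation step.

```python
import numpy as np

def message_pass_sum(A: np.ndarray, Z: np.ndarray) -> np.ndarray:
    """One message-passing round: each node sums its neighbors' states."""
    return A @ Z  # row i of A selects (and sums) the features of i's neighbors

# Path graph 0-1-2: node 1 hears from both ends, nodes 0 and 2 only from node 1.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
Z = np.array([[1.0], [2.0], [4.0]])     # toy scalar agent states
print(message_pass_sum(A, Z).ravel())   # [2. 5. 2.]
```

With mean aggregation one would instead row-normalize A before the product.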
Generation. The generation of Z_{t+1}^{(i)} starts from the last latent of the denoising diffusion process, which is a sample of Gaussian noise ˜Z_{t+1}^{(i)}(τ_max) ∼ N(0, I). The message-passing GNN takes as input the current state Z_t^{(i)} and the states of its neighbors {Z_t^{(j)}}_{j∈N_t^{(i)}}, and forms the embedding g_t^{(i)}. Then, iterative...
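The generation loop (reverse diffusion from Gaussian noise, conditioned on the embedding g) can be sketched with a toy denoiser. The `denoise_step` below is an assumption that simply pulls the sample toward the condition-predicted state; it stands in for the learned noise predictor of the GDN.

```python
import numpy as np

target = np.array([2.0, -1.0])  # stands in for the next state Z_{t+1}^(i)
g = target.copy()               # toy "interaction embedding": here, the target itself

def denoise_step(z, g, tau):
    """Toy reverse step: pull the sample a fraction of the way toward the
    condition-predicted state. A real GDN uses a learned noise predictor."""
    return z + (g - z) / tau    # more remaining steps -> smaller correction

rng = np.random.default_rng(0)
z = rng.standard_normal(2)      # start from Gaussian noise, as in the text
for tau in range(50, 0, -1):    # iterate tau_max, ..., 1
    z = denoise_step(z, g, tau)
print(np.allclose(z, target))   # True: the final step (tau = 1) lands on g
```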
grid. Thus, the update rule is deterministic when agents are happy, and stochastic when they are unhappy. Algorithm 2 presents the pseudo-code of the ABM. Of particular interest are lines 15–25, which describe how agents relocate by searching for an empty cell on the grid. It is clear that this search process is not a ...
ratio
8:  if r < ξ then  ▷ Agent i is unhappy
9:      unhappy ← unhappy ∪ {i}
10: end if
11: end for
12: if unhappy = ∅ then
13:     break  ▷ Convergence
14: end if
15: for all i ∈ unhappy do
16:     (x_{t+1}^{(i)}, y_{t+1}^{(i)}) ← (x_t^{(i)}, y_t^{(i)})
17:     for k = 1, . . . , K do
18:         θ ∼ Uniform(0, 2π), d ∼ Uniform(0, d_max)  ▷ Random direction, distance up to d_max
1...
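The relocation search from the pseudo-code can be sketched in plain Python. This is a hypothetical re-implementation, not the authors' code, and the wrap-around at the grid boundary is an assumption.

```python
import math
import random

def propose_move(x, y, occupied, K=10, dmax=5.0, L=50):
    """Try up to K random (direction, distance) jumps; keep the first empty cell."""
    for _ in range(K):
        theta = random.uniform(0.0, 2.0 * math.pi)   # random direction
        d = random.uniform(0.0, dmax)                # distance up to dmax
        nx = round(x + d * math.cos(theta)) % L      # assumed wrap on the L x L grid
        ny = round(y + d * math.sin(theta)) % L
        if (nx, ny) not in occupied:
            return nx, ny
    return x, y  # no empty cell found after K tries: stay put

random.seed(0)
nx, ny = propose_move(10, 10, occupied={(10, 10)})
print(0 <= nx < 50 and 0 <= ny < 50)  # True: the proposal stays on the grid
```

Capping the search at K attempts is what makes the rule stochastic but bounded, the property the surrounding text emphasizes.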
child: 0.00 0.00 0.00 1.00 0.00 0.00
Dead: 0.00 0.00 0.00 0.00 1.00 0.00
Unborn + Npp∗: 0.00 0.00 0.00 0.00 0.00 1.00

Table 5: Transition matrix Ψ4

State | Die | Move | Turn pregnant | Turn alive | Stay dead | Stay unborn
Alive Pred + Prey | 0.15 | 0.35 | 0.50 | 0.00 | 0.00 | 0.00
Alive Pred + No prey | 0.25 | 0.45 | 0.30 | 0.00 | 0.00 | 0.00
Alive Prey + Pred | 0.4...
total we trained 112 models: 64 (8×4×2) for Predator-Prey, and 48 (8×3×2) for Schelling. All our evaluations are done across these 8 models per parameter configuration. We fixed some of the ABM parameters across experiments, which are reported in Table 6. For Predator-Prey, density refers to the density of agents tha...
get sMAPE_preys and sMAPE_predators. Then, to work with a single value, we calculate the mean value:

sMAPE_predprey = (1/2)(sMAPE_preys + sMAPE_predators)   (7)

For each of the 8 experiments, we calculate the sMAPE over 25 timesteps and then compute its mean. The box plots in Figure 4 have as entries the 8 mean sMAPE value...
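Eq. (7) averages per-species sMAPE scores. A sketch assuming the standard symmetric-MAPE definition per timestep (the excerpt does not state the paper's exact formula, so this definition is an assumption):

```python
import numpy as np

def smape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Symmetric MAPE: mean over timesteps of 2|F - A| / (|A| + |F|)."""
    return float(np.mean(2.0 * np.abs(forecast - actual)
                         / (np.abs(actual) + np.abs(forecast))))

preys_true, preys_pred = np.array([100.0, 80.0]), np.array([110.0, 80.0])
preds_true, preds_pred = np.array([50.0, 60.0]), np.array([50.0, 60.0])

# Eq. (7): average the per-species scores into a single value.
smape_predprey = 0.5 * (smape(preys_true, preys_pred) + smape(preds_true, preds_pred))
print(round(smape_predprey, 4))  # 0.0238
```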
the ground-truth Predator-Prey ABM, our surrogate model, and the ablated model. For both parameter sets shown (Ψ1 and Ψ4), the surrogate accurately reproduces the stochastic dynamics beyond the training window, while the ablation fails to capture the key oscillation patterns. Figure 8 extends the previous analysis to pa...
clusters. The surrogate model successfully reproduces both the population dynamics and the emergent spatial structures, whereas the ablated model fails to capture any meaningful spatial organization. Figure 10: Evolution of the position of preys (black) and predators (red) in the predator-prey ABM, for parameters Ψ = Ψ...
5, considering that the scale is given by the length of the grid, L = 50) and better than the surrogate model. Predator-prey model. Figure 15 instead shows the distribution of the EMD scores only for the stochastic rules of the Predator-Prey model. Here, the stochasticity lies in the Alive phase, where the agent might ...
Policy Induction: Predicting Startup Success via Explainable Memory-Augmented In-Context Learning. Xianling Mu1, Joseph Ternasky2, Fuat Alican2, Yigit Ihlamur2. (1) University of Oxford, (2) Vela Research. Abstract. Early-stage startup investment is a high-risk endeavor characterized by scarce data and uncertain outcomes. Tradit...
https://arxiv.org/abs/2505.21427v1
training cycles, and very low compute cost. Using only the GPT-4o mini API and a few dollars' worth of compute, we are able to produce policies that outperform random baselines by 3–4× in precision, even without any further optimization. arXiv:2505.21427v1 [cs.AI] 27 May 2025
refer to as founder cleaned data, which was constructed by converting unstructured information from LinkedIn profiles and Crunchbase entries into structured features using LLM-powered extraction techniques. This dataset focuses on US-based companies founded in or after 2010 and contains information on 1,022 successfu...
• Iterative Policy Refinement
• Further Policy Enhancement via Reflection or Expert Intervention
One of the key benefits of our framework is its flexibility: policy improvement does not rely on a strict sequential pipeline. Instead, various methods can be applied iteratively or in combination, with each resulting policy ...
high-quality samples. Each strategy has its strengths. The parallel method is faster and performs better in the early stages of training. However, the sequential method typically produces better final policies when aiming for maximum performance. These two strategies can also be used in a looped fashion to form an iter...
on this information, determine if the founder will succeed. Answer using only one word: True or False. The performance of three different LLMs (GPT-4o-mini, GPT-4o, and the most powerful, o3) is summarized below:

Model | Accuracy | Precision | Recall | F0.5
GPT-4o-mini | 0.653 | 0.137 | 0.530 | 0.160
GPT-4o | 0.772 | 0.202 | 0.510 | 0.229
o3 | 0.76...
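The F0.5 column weights precision more heavily than recall (beta = 0.5). A quick check against the GPT-4o-mini row of the table:

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """F_beta score; beta < 1 weights precision more heavily than recall."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Reproduce the GPT-4o-mini row (P = 0.137, R = 0.530) from the table above.
print(round(f_beta(0.137, 0.530), 3))  # 0.161 (the table reports 0.160)
```

The small discrepancy is consistent with the table truncating rather than rounding.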
is inherently weaker, suggesting variability in data difficulty or distribution shift. Despite this, the mean precision across all eight test sets is approximately 0.467, indicating strong average performance (a 5× improvement) and robustness to moderate dataset variation. 5.4. Evaluation on Real-World Distribution. VC firms typicall...
training data, including 1:5, 1:2, and 1:1. Our findings suggest that the absolute number of successful founders is more important than the overall ratio. Too few success examples reduce learning efficiency, leading to longer convergence times. A balanced 1:1 or lightly skewed ratio yielded the best trade-off between...
and Kubli, M. ChatGPT out of the box: How does it fare on political and legal reasoning?, 2023. URL https://arxiv.org/abs/2305.03511. Graves, A., Wayne, G., and Danihelka, I. Hybrid computing using a neural network w...
Match proven technical, operational, or sales expertise to venture stage; generic “entrepreneur” labels are penalized.
12. Consistent Role Tenure & Title Concentration: Favor a ≥4-year focus in one core venture; multiple simultaneous C-suite/advisory titles or role inflation is a downgrade.
13. Network Quality & Engagement: ...
arXiv:2505.21432v1 [cs.RO] 27 May 2025. Hume: Introducing System-2 Thinking in Visual-Language-Action Model. Haoming Song1,2∗, Delin Qu2∗, Yuanqi Yao2, Qizhi Chen3,2, Qi Lv2, Yiwen Tang2, Modi Shi4, Guanghui Ren4, Maoqing Yao4, Bin Zhao2, Dong Wang2†, Xuelong Li2. 1Shanghai Jiao Tong University, 2Shanghai AI Laboratory, 3Zhejiang University, 4AgiBot...
https://arxiv.org/abs/2505.21432v1
with System-2 thinking capabilities poses two primary challenges. First, thinking and reasoning techniques have mainly been demonstrated in the text modality, while delicate, tenuous robot actions lack clear and consistent semantics, making it difficult to apply semantic Chain-of-Thought (CoT) [49] thinking as in LLMs. S...
value-guided thinking and cascaded action denoising to seamlessly combine low-frequency System 2 and high-frequency System 1, resulting in effective thinking and reasoning in various robot deployments. • Hume achieves state-of-the-art performance on multiple benchmarks and real-robot tests, achieving a +4.4% increase in s...
deployment strategy of the model is explained in Sec. 3.3. 3.1 Value-Guided System-2 Thinking. As shown in Fig. 2, the System 2 module is instantiated as a vision-language-action model (VLA) built upon a pretrained Vision-Language Model. Formally, the inputs of the System 2 module consist of RGB ...
one actor network for assisting the training of the critic networks. Specifically, a special query token q_t is introduced and attached at the end of the VLM input sequence; it is a learnable token with the same embedding dimension as the language tokens. Then, for one action chunk A_t (either the ground-truth action A_t or deno...
2:

L_ω(θ) = E_{p(Ã_{t+kh}|õ_{t+kh}), q(Ã^ω_{t+kh}|Ã_{t+kh})} ‖ v_θ(Ã^ω_{t+kh}, õ_{t+kh}) − u(Ã^ω_{t+kh}|Ã_{t+kh}) ‖²,   (2)

where the superscript ω represents the flow-matching timestep in System 1. Note that, during training and inference, the generated candidate action chunks from System 2 are not fully denoised, i.e., Ã^{τ∗}_{t+kh} ≠ Ã^{1}_{t+kh}, requiri...
Hz and executes all h = 15 actions on the real robot immediately, resulting in an overall 90 Hz robot action control frequency. After the robot executes all K = H/h = 2 sub-action chunks in Ã_t, System 1 repeatedly gets the newest selected action chunks from the shared queue for subsequent action denoising. Due to the different worki...
GR00T) on the LIBERO-Long task, which consists of long-horizon tasks, demonstrating the model’s strong long-term planning capabilities. Tab. 2 presents the SimplerEnv experimental results on WidowX and Google robot tasks. Hume also achieves state-of-the-art performance on WidowX multitasks, with an average success rate...
45.0%
RoboVLM [30]: 77.3% 61.7% 43.5% 63.4% 75.6% 60.0% 10.6% 51.3%
SpatialVLA [42]: 86.0% 77.9% 57.4% 73.8% 88.0% 72.7% 41.8% 70.7%
HPT [48]: 56.0% 60.0% 24.0% 46.0% – – – –
π0 [3]: 72.7% 65.3% 38.3% 58.8% 75.2% 63.7% 25.6% 54.8%
π0-FAST [41]: 75.3% 67.5% 42.9% 61.9% 77.6% 68.2% 31.3% 59.0%
Hume: 97.0% 80.4% ...
the complex long-horizon task (#Pour Water), Hume achieves a success rate of 82%, significantly improving by +20% over π0 and +60% over GR00T. Additionally, Hume achieves an average success rate of 87% across various tasks on the Franka robot, improving by +14.75% over π0 and +37.25% over OpenVLA. 4.3 Ablations on...
being sampled from the same distribution. This consequently reduces the range of candidates the model can choose from, resulting in suboptimal candidate selection. The models suffer a significant performance drop in the variant aggregation, showing an average decline of 3.2% across multiple SimplerEnv tasks, 2.7% acros...
Supplementary Material. Abstract. This supplementary material accompanies the main paper by providing more detailed visualization analysis of Hume's workflow, as well as implementation details and additional experimental results: ▷ Sec. 6: Detail Hume...
with low state- action values. Additionally, we also show the ground truth actions from collected demonstrations in the value map for comparison. By observing the positions of ground truth actions in the value map, we find these ground truth actions are consistently located in high-value regions, which demonstrates tha...
the initial state distribution, and γ ∈ (0, 1) denotes the discount factor. The training objective of the Value-Query Head is to minimize the Bellman error with a regularization term R(θ), which is defined as:

min_θ α R(θ) + (1/2) E_{q_t, A_t, q′_t ∼ D} [ ( Q_θ(q_t, A_t) − B^π Q̄(q_t, A_t) )² ],   (5)

The second term in Eq. (5) is the standard TD error...
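A one-sample scalar sketch of the objective in Eq. (5), with the Bellman backup B^π Q̄ expanded as r + γQ′ and the regularizer R(θ) passed in as a plain value. This is a simplification for illustration; the paper's objective is an expectation over the dataset D with learned networks.

```python
def td_objective(q_pred, reward, q_next, gamma=0.99, alpha=0.1, reg=0.0):
    """One-sample sketch of eq. (5): alpha*R(theta) + 0.5*(Q - (r + gamma*Q'))^2."""
    target = reward + gamma * q_next  # Bellman backup from the frozen critic
    return alpha * reg + 0.5 * (q_pred - target) ** 2

# A prediction that matches its backup target incurs zero TD error.
print(td_objective(q_pred=1.0, reward=1.0, q_next=0.0))  # 0.0
```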
98.0% 79.7% 38.0% 51.2% 44.6% 74.1%
Visual Matching:
RT-1 (Begin): 5.0% 0.0% 3.0% 2.7% 5.0% 0.0% 27.8% 13.9% 7.2%
RT-1 (15%): 86.0% 79.0% 48.0% 71.0% 35.4% 46.3% 66.7% 56.5% 54.3%
RT-1 (Converged): 96.0% 90.0% 71.0% 85.7% 44.2% 60.1% 86.1% 73.1% 67.7%
RT-1-X: 82.0% 33.0% 55.0% 56.7% 31.7% 29.6% 89.1% 59.4% 49.3%
RT-2-...
the microwave. Lift red pepper: Lift the red pepper. Put green cup on the pink cloth : Put the green cup on the pink cloth. Put purple cup on the white plate : Put the purple cup on the white plate. Figure 10: Evaluation Setup of WidowX 250s. We evaluated models with 9 tasks on WidowX 250s to verify the model’s learnin...
the table and place it on a cutting board, trained with 100 human demonstrations. Restock the hanging basket area: The robot is in front of the snack shelf, with the shopping cart positioned between the snack shelf and the robot. The snacks that need to be restocked are in the box inside the shopping cart. Pour water...
state and cannot recover. Pi0 falls into an error state and cannot recover. Hume uses value-guided thinking to choose the correct action trajectory among candidates, recover from the error state, and successfully execute tasks. Figure 13: Failure Recovery of Hume. When a failure occurs, such as missing the grasping position...
they enter an error state, since the observation of the error state does not appear in their training dataset, these models are easily trapped in the error state and cannot recover, resulting in the failure of the final task. For Hume, although the error state observation also does not appear in its training dataset, i...
and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning (ICML) , pages 1587–1596, 2018. [14] Zitian Gao, Boye Niu, Xuzheng He, Haotian Xu, Hongzhang Liu, Aiwei Liu, Xuming Hu, and Lijie Wen. Interpretable contrastive monte carlo tree search reaso...
action models. arXiv preprint arXiv:2412.14058 , 2024. [30] Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Hanbo Zhang, and Huaping Liu. Towards generalist robot policies: What matters in building vision-language- action models. arXiv preprint arXiv:2412.14058 , 2024. [31] ...
Ichter, Michael Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, Adrian Li-Bell, Danny Driess, Lachy Groom, Sergey Levine, and Chelsea Finn. Hi robot: Open-ended instruction following with hierarchical vision-language-action models, 2025. [45] Noah Shinn, Federico Ca...
prompt learning for lifelong robot manipulation, 2025. [53] Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, and Sergey Levine. Robotic control via embodied chain-of-thought reasoning. arXiv preprint arXiv:2407.08693 , 2024. [54] Jianke Zhang, Yanjiang Guo, Xiaoyu Chen, Yen-Jen Wang, Yucheng Hu, Ch...
arXiv:2505.21441v1 [stat.ML] 27 May 2025. Autoencoding Random Forests. Binh Duc Vu∗, King's College London, binh.vu@kcl.ac.uk. Jan Kapar∗, University of Bremen, kapar@leibniz-bips.de. Marvin Wright, University of Bremen, wright@leibniz-bips.de. David S. Watson, King's College London, david.watson@kcl.ac.uk. Abstract. We propose a princi...
https://arxiv.org/abs/2505.21441v1
these methods in a series of experiments and benchmark against a wide array of neural and tree-based alternatives. Our results demonstrate that the RF autoencoder is competitive with the state of the art across a range of tasks including data visualization, compression, clustering, and denoising. The remainder of this ...
forests. However, they do not explore the connections between this approach and kernel methods, nor do they propose any strategy for decoding latent representations. Other more heuristic approaches involve running PCA on a weighted matrix of all forest nodes (not just leaves), a method that works well in some experimen...
exists some f ∈ H such that ‖f∗ − f‖_∞ < ϵ. Several variants of universality exist with slightly different conditions on X [82]. Examples of universal kernels include the Gaussian and Laplace kernels [83]. Definition 3.3 (Characteristic). The bounded measurable kernel k is characteristic if the function µ ↦ ∫_X k(·, x) dµ(x) is inj...
The elements of this decomposition have several notable properties.⁴ For instance, the resulting eigenvectors uniquely solve the constrained optimization problem:

min_{V ∈ R^{n×d_Z}} Σ_{i,j} k_{ij} ‖v_i − v_j‖²   s.t.   V⊤V = I,

for all d_Z ∈ [n], thereby minimizing Dirichlet energy and producing the smoothest possible representation of the data th...
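The claim that these eigenvectors minimize Dirichlet energy over orthonormal frames can be spot-checked numerically. The Gaussian kernel below is an assumption standing in for the forest kernel; the check compares the bottom Laplacian eigenvectors against a random orthonormal frame.

```python
import numpy as np

def dirichlet_energy(K: np.ndarray, V: np.ndarray) -> float:
    """Objective of the constrained problem: sum_ij K_ij * ||v_i - v_j||^2."""
    n = K.shape[0]
    return float(sum(K[i, j] * np.sum((V[i] - V[j]) ** 2)
                     for i in range(n) for j in range(n)))

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 2))
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))      # Gaussian kernel matrix
Lap = np.diag(K.sum(axis=1)) - K                        # graph Laplacian of K
eigvals, U = np.linalg.eigh(Lap)                        # ascending eigenvalues
V_smooth = U[:, :2]                                     # bottom eigenvectors
V_rand, _ = np.linalg.qr(rng.standard_normal((20, 2)))  # random orthonormal frame
print(dirichlet_energy(K, V_smooth) <= dirichlet_energy(K, V_rand))  # True
```

The identity behind the check is that the objective equals 2·tr(V⊤LV), which trace minimization bounds below by twice the sum of L's smallest eigenvalues.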
satisfying ‖π_f^{(b)}(x)‖₁ = 1 for all x ∈ X. Let ψ_f : Z → {0,1}^{d_Φ} be a similar leaf assignment function, but for latent vectors. Then for a fixed forest f and encoder g, the leaf assignment oracle ψ∗_{f,g} satisfies π_f(x) = ψ∗_{f,g}(g(x)) for all x ∈ X. Theorem 4.2 (Oracle consistency). Let f_n be a RF trained on {x_i, y_i}_{i=1}^n i.i.d....
K̂₀ coincides with the ground truth K∗₀. Then, under the assumptions of Thm. 3.4, as n → ∞, with high probability, the ILP of Eq. 3 is uniquely solved by the true leaf assignments Ψ∗. Together with Thm. 4.2, Thm. 4.3 implies that the ILP approach will converge on an exact reconstruction of the data under ideal condit...
find the most proximal points in the latent space. Once nearest neighbors have been identified, we reconstruct their associated inputs using the leaf assignment matrix Π and the splits stored in our forest f_n. From these ingredients, we infer the intersection of all leaf regions for each training sample, what Feng and Zh...
4’s and 9’s demonstrates that the RF is somewhat uncertain about these samples, although with extra dimensions we find clearer separation (not shown). In other words, the embeddings suggest a highly interpretable recursive partition, as we might expect from a single decision tree. Reconstruction We limit our decoding e...
(see Fig. 4). For more details on these datasets, see Appx. B.1, Table 1. We evaluate performance over ten bootstrap samples at each compression factor, testing on the randomly excluded out-of-bag data. We find that RFAE is competitive in all settings, and has best average performance in 12 out of 20 datasets (see Appx...
and k-NN decoders can work in tandem with any valid encoding scheme. For instance, we could relabel an RF's splits to approximate the behavior of sample points in principal component space, or indeed any Z for which we have a map g : X → Z. We highlight two notable limitations of our approach. First, the computational d...
, 47(2): 1148–1178, 2019. [4] M. Balog, B. Lakshminarayanan, Z. Ghahramani, D. M. Roy, and Y. W. Teh. The Mondrian kernel. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence, pages 32–41, 2016. [5] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data r...
images for machine learning research [best of the web]. IEEE Signal Processing Magazine , 29(6):141–142, 2012. [27] Misha Denil, David Matheson, and Nando De Freitas. Narrowing the gap: Random forests in theory and in practice. In Proceedings of the 31st International Conference on Machine Learning , pages 665–673, 201...
of the dimensionality reduction of manifolds. In Proceedings of the 21st International Conference on Machine Learning, 2004. [48] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constr...
R. Coifman, and Ioannis G. Kevrekidis. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Appl. Comput. Harmon. Anal. , 21(1):113–127, 2006. Special Issue: Diffusion Maps and Wavelets. [69] Andrew Ng, Michael Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In A...
Mauro, Alejandro Molina, Kristian Kersting, and Floriana Esposito. Sum-product autoencoding: Encoding and decoding representations using sum-product networks. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018. [91] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-A...
on any given X_j is bounded from below by some ρ > 0. (A4) Every split puts at least a fraction γ ∈ (0, 0.5] of the available observations into each child node. (A5) For each tree b ∈ [B], the total number of leaves d_Φ^{(b)} satisfies d_Φ^{(b)} → ∞ and d_Φ^{(b)}/n → 0 as n → ∞. Under (A1)–(A5), decision trees satisfy the criteria of Stone's t...
rows and columns sum to one). (b) Universal. Recall that the Moore-Aronszajn theorem tells us that every PSD kernel defines a unique RKHS [1]. Given that k_n^{RF} is PSD, universality follows if we can show that the associated RKHS H is dense in C(X). Of course, this is provably false at any fixed n, as d_Φ = o(n) by (A5), and a...