text: string
source: string
2^d + poly(d, 1/δ) | same | same
RSGM (ISM): d | poly(d, 1/δ) | same | same
TDM (heat ker.): 1 | — | — | d log(1/δ)
TDM (ISM): d | poly(d, 1/δ) | poly(d, 1/δ) | d log(1/δ)
RDM: d | poly(d, 1/δ) | same | same
SCRD: 1 | 2^d + poly(d, 1/δ) | poly(d, 1/δ) | d log(1/δ)
This paper: 1 | d^{ω/2} log(1/δ) | d log(1/δ) | d log(1/δ)
O(n^ω log(1/δ)) = O(d^{ω/2} log(1/δ)) arithmetic operations in the case of t...
https://arxiv.org/abs/2505.21640v1
as z ≡ z(U,Λ), where U = φ(z) ∈ M and Λ ≡ Λ(z) ∈ A for some A ⊆ R^{d−dim(M)} is another parameter. For instance, on the sphere, U = z/∥z∥ is the projection onto the sphere, and Λ = ∥z∥ is the distance to the origin. For SO(n) or U(n), the parametrization comes from the spectral decomposition z = UΛU∗, where U ∈ M and Λ is a diagonal matrix. On the torus, ...
https://arxiv.org/abs/2505.21640v1
and sphere, and O(d^{5.5} log(d/ε)) for SO(n) and U(n). Each iteration requires one evaluation of ˆf, one evaluation of ˆg, one evaluation of the exponential map on M, and O(d) arithmetic operations. Comparison with prior work. Theorem 2.2 improves on the accuracy and runtim...
https://arxiv.org/abs/2505.21640v1
forward diffusion X_t on M lacks a closed-form expression. Instead, we first use (1) to obtain an SDE for the reverse diffusion of Z_t ∈ R^d, dH_t = (H_t/2 + 2∇log q_{T−t}(H_t)) dt + dB_t. We use Itô's Lemma to project this SDE onto M, giving an SDE for Y_t (see Section 6.1), dY_t = E[(∇φ(H_t)^⊤ + (1/2) dH_t^⊤ ∇²φ(H_t)) dH_t | φ(H...
https://arxiv.org/abs/2505.21640v1
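The projection idea above can be sketched numerically for the sphere example. The snippet below is a minimal illustration (not the paper's algorithm): it simulates the Euclidean Ornstein–Uhlenbeck SDE dZ_t = −Z_t/2 dt + dB_t with Euler–Maruyama and tracks its projection Y_t = φ(Z_t) = Z_t/∥Z_t∥, which stays on the unit sphere by construction; the function names are our own.

```python
import numpy as np

def phi(z):
    """Projection map onto the unit sphere, phi(z) = z / ||z||."""
    return z / np.linalg.norm(z)

def project_ou_to_sphere(z0, T=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of the OU process dZ_t = -Z_t/2 dt + dB_t,
    recording the projected trajectory Y_t = phi(Z_t) on the sphere."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = z0.copy()
    traj = [phi(z)]
    for _ in range(n_steps):
        z = z - 0.5 * z * dt + np.sqrt(dt) * rng.standard_normal(z.shape)
        traj.append(phi(z))
    return np.array(traj)

z0 = np.array([3.0, 0.0, 0.0])
ys = project_ou_to_sphere(z0)
```

Every projected iterate has unit norm, so the projected process is a genuine sphere-valued path even though Z_t itself wanders in R^3.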
An oracle which returns the value of the exponential map exp(x,v) on some manifold M, for any x ∈ M, v ∈ T_xM. Require: An oracle for the "projection" map φ: R^d → M. Require: Models ˆf_θ: M×[0,T] → TM and ˆg_ϕ: M×[0,T] → TM×TM, where ˆθ ∈ R^{a_1}, ˆϕ ∈ R^{a_2} denote trainable parameters. Require: Parameters θ, ϕ (from the output of Algorithm 1). Require: T > 0, N ∈ N, ∆ > ...
https://arxiv.org/abs/2505.21640v1
a simple transition kernel from the invariant measure on SO(2) (see Section 6.7). 4.1 Proof outline of Theorem 2.2. In the following, for any random variable X we denote its probability distribution by L_X. As already mentioned, previous works use Girsanov's theorem to bound the accuracy of diffusion methods. However, G...
https://arxiv.org/abs/2505.21640v1
every point. This is the case for the sphere, where the map φ(z) = z/∥z∥ has a singularity at z = 0. This issue also arises in the case of the unitary group and orthogonal group, since the derivative of the spectral decomposition z = UΛU∗ has singularities at any matrix z which has an eigenvalue gap λ_i − λ_{i+1} = 0. To tackle t...
https://arxiv.org/abs/2505.21640v1
This is because f⋆ and g⋆ are given by conditional expectations conditioned on U, and can thus be decomposed as integrals over Λ. Towards this end we express f⋆ as an integral over the parameter Λ, f⋆(U,t) = c_U ∫_{Λ∈A} [∇φ(z(U,Λ))^⊤ ∇log q_{T−t|0}(z(U,Λ)) + (1/2) tr ∇²φ(z(U,Λ))] q_{T−t}(z(U...
https://arxiv.org/abs/2505.21640v1
we plug in our Wasserstein bound W(Y_{t+τ}, ˆy_{t+τ}) ≤ O(ε) into the formula for the KL divergence between two Gaussians to bound ∥L_{Y_{t+τ+ˆ∆}} − L_{ˆy_{t+τ+ˆ∆}}∥_TV. Specifically, noting that L_{ˆy_{t+τ+ˆ∆} | ˆy_t} = exp_{ˆy_{t+τ}}(N(ˆy_{t+τ} + ˆ∆ f(ˆy_{t+τ}, t+τ), ˆ∆ g²(ˆy_{t+τ}, t+τ) I_d)), we have that D_KL(ν₁, L_{ˆy_{t+τ+ˆ∆} | ˆy_{t+τ}}) = Tr(...
https://arxiv.org/abs/2505.21640v1
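The step from a Wasserstein (mean-error) bound to a KL bound rests on a standard identity: for two Gaussians with the same isotropic covariance, the trace and log-determinant terms of the Gaussian KL formula cancel and only the mean-shift term survives. A minimal sketch (our own helper names, not the paper's code):

```python
import numpy as np

def kl_gaussians_shared_cov(mu1, mu2, sigma2):
    """KL divergence between N(mu1, sigma2*I) and N(mu2, sigma2*I).
    With equal covariances, the general Gaussian KL formula reduces to
    the mean-shift term ||mu1 - mu2||^2 / (2 * sigma2)."""
    diff = np.asarray(mu1, dtype=float) - np.asarray(mu2, dtype=float)
    return float(diff @ diff) / (2.0 * sigma2)

# a mean error of eps with step-size variance delta gives KL = eps^2 / (2*delta)
eps, delta = 1e-3, 1e-2
kl = kl_gaussians_shared_cov([eps, 0.0], [0.0, 0.0], delta)
```

This is why a Wasserstein bound of order ε on the means, combined with step size ˆ∆, translates into a KL (and hence TV) bound of order ε²/ˆ∆ per step.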
on U(n), for varying dimensions d = n². We also separately analyze per-iteration runtime to study scaling across dimensions, requiring only a single training step and limited computational resources. For the torus T^d, following several works [12, 37], we train diffusion models on data sampled from wrapped Gaussians on tor...
https://arxiv.org/abs/2505.21640v1
SO(n) and U(n) with n = 3. For n > 3, the cost of their expansion grows exponentially with n, making it infeasible for higher dimensions. Metrics. For the torus, as in [12, 37], we evaluate the quality of generated samples by computing their log-likelihood. For SO(n) and U(n), we use the Classifier Two-Sample Test (C2ST) metric ...
https://arxiv.org/abs/2505.21640v1
at least for n ≤ 50 (corresponding to a manifold dimension of d ≤ 1225), nearly closing the gap with the per-iteration runtime of the Euclidean model. Moreover, we find that (except in very low dimensions) our model is capable of improving on the quality of samples generated by previous diffusion models, when trained on different ...
https://arxiv.org/abs/2505.21640v1
this choice of coupling, we have that Y_t = X_{T−t} = φ(Z_{T−t}) = φ(H_t) for all t ∈ [0,T]. In the special case when there is only one datapoint x₀, the SDE for the reverse diffusion Y_t on M can be obtained by applying Itô's lemma (Lemma 3.1) to Y_t = φ(H_t): dY_t[i] = ∇φ_i(H_t)^⊤ dH_t + (1/2) (dH_t)^⊤ (∇²φ_i(H_t)) dH_t ∀ i ∈ [d]. (21) In the following, to simpli...
https://arxiv.org/abs/2505.21640v1
solution to the following optimization problem: min_g E_{t∼Unif([0,1])} E_{b∼π} [ ∥J_φ(H_t)^⊤ J_φ(H_t) − (g(Y_t,t))²∥²_F | H_T = b ], where ∥·∥_F is the Frobenius norm. Since H_t | {H...
https://arxiv.org/abs/2505.21640v1
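The Frobenius-norm regression above can be illustrated concretely. The sketch below (our own helper names, not the paper's training loop) computes the target J_φ(z)^⊤ J_φ(z) for the sphere projection φ(z) = z/∥z∥ and fits the best isotropic approximation g² I, whose closed-form minimizer under the Frobenius loss is g² = tr(A)/d:

```python
import numpy as np

def sphere_jacobian(z):
    """Jacobian of phi(z) = z/||z||, namely (I - z z^T / ||z||^2) / ||z||."""
    r = np.linalg.norm(z)
    d = z.size
    return (np.eye(d) - np.outer(z, z) / r**2) / r

def best_isotropic_fit(A):
    """Closed-form minimizer of ||A - c*I||_F^2 over scalars c:
    the orthogonal projection of A onto multiples of the identity,
    c = tr(A) / d."""
    return np.trace(A) / A.shape[0]

z = np.array([2.0, 0.0, 0.0])
J = sphere_jacobian(z)
A = J.T @ J                     # the regression target from the text
g2 = best_isotropic_fit(A)      # best scalar g^2 at this point
```

In general A is not exactly a multiple of the identity, which is why the paper's models learn a state-dependent covariance rather than a single constant.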
in R^d, one can easily observe that e.g. ∥∇φ(z)∥ ≤ O(1) for any z outside a ball of radius r ≥ Ω(1) centered at the origin. As the volume of a ball of radius α is α^d times the volume of the unit ball, one can use standard Gaussian concentration inequalities to show that the Ornstein–Uhlenbeck process Z_t, which is a Gaussian pr...
https://arxiv.org/abs/2505.21640v1
the r.h.s. of (43) we have, c_U × (d/dU) ∫_{Λ∈A} [(∇_U φ(z(U,Λ)))^⊤ ∇log q_{T−t}(z(U,Λ)) + (1/2) tr(∇²φ(z(U,Λ)))] q_{T−t}(z(U,Λ)) 1_Ω(Λ) dΛ = c_U × ∫_{Λ∈A} ((d/dU)[(∇φ(z(U,Λ)))^⊤ ∇log q_{T−t}(z(U,Λ)) + (1/2) tr(∇²φ(z(U,Λ)))]) × q_{T−t}(z(U,Λ)) ...
https://arxiv.org/abs/2505.21640v1
transition kernel ˜p_{t+τ+ˆ∆ | t+τ}(·|H_{t+τ}) of the reverse diffusion H_t in R^d is close to a Gaussian in KL distance over short time steps ˆ∆: D_KL(N(H_{t+τ} + ˆ∆ ∇log ˜p_{T−t−τ}(H_{t+τ}), ˆ∆ I_d) ∥ ˜p_{t+τ+ˆ∆ | t+τ}(·|H_{t+τ})) ≤ ατ/T. One can do this using Girsanov's theorem, since, unlike the diffusion Y_t on the manifold, the reverse diffusion in Euclidean ...
https://arxiv.org/abs/2505.21640v1
(γ_i(t) − γ_j(t))⁻¹ u_j(t) − (1/2) Σ_{j≠i} dt (γ_i(t) − γ_j(t))⁻² u_i(t) ∀ i ∈ [n]. (56) From (54), one can see that over the "bad" time intervals [a_i, b_i], each eigenvalue γ_i(t) has at most one neighboring eigenvalue, say γ_{i+1}(t), with small gap γ_i(t) − γ_{i+1}(t) ≤ O(1/√d) w.h.p. Roughly speaking, this implies that only the interactions i...
https://arxiv.org/abs/2505.21640v1
on sampling accuracy—improving upon prior works that lacked such guarantees—tightening this dependence remains an important challenge for future research. Acknowledgments. OM was supported in part by a Google Research Scholar award. NV was supported in part by NSF CCF-2112665. References [1] Brian DO Anderson. Revers...
https://arxiv.org/abs/2505.21640v1
systems, 35:24240–24253, 2022. [20] Jaehyeong Jo and Sung Ju Hwang. Generative modeling on manifolds through mixture of Riemannian diffusion processes. In International Conference on Machine Learning, 2024. [21] Adam Leach, Sebastian M Schmon, Matteo T Degiacomi, and Chris G Willcocks. Denoising diffusion probabilis...
https://arxiv.org/abs/2505.21640v1
exp_{m_i}(Z), where exp_x(·) denotes the exponential map at any point x ∈ M. Datasets on the Torus T^d. The synthetic dataset is sampled from a single wrapped Gaussian distribution, with mean at the origin (0,...,0)^⊤ and covariance matrix 0.2 I_d. A total of 30,000 points were sampled as the training dataset, and 10,000 were samp...
https://arxiv.org/abs/2505.21640v1
Hardware. Simulations evaluating sample quality on the torus were run on an Apple M1 chip with 10 cores. Simulations on the special orthogonal group and unitary group were run on a single RTX 3070, as were all simulations evaluating per-iteration training runtime. A.3 Evaluation metrics. In...
https://arxiv.org/abs/2505.21640v1
to generate points that visually resemble those of the target distribution more closely than the points generated by the Euclidean diffusion model or the RSGM model. C2ST score and visual results on the special orthogonal group SO(n). We train our model, a Euclidean diffusion model, RSGM, and TDM on a dataset sampled fr...
https://arxiv.org/abs/2505.21640v1
2.85±.09 | 7.63±.12 — Ours: 0.13±.01 | 0.13±.01 | 0.14±.00 | 0.14±.01 | 0.20±.01. Runtime on the special orthogonal group SO(n). Table 6 shows the per-iteration runtime of the Euclidean model, our model, RSGM and TDM, on the special orthogonal group SO(n), for n ∈ {3, 5, 10, 30, 50}. We observe that our model's per-iteration training runtime remains with...
https://arxiv.org/abs/2505.21640v1
previous diffusion models in Euclidean space. 2. Torus T^d. For the torus, the forward and reverse diffusion of our model are the same as in previous diffusion models on the torus [12, 23]. The forward diffusion is given by the SDE dX_t = −(1/2) X_t dt + dB_t on the torus, initialized at the target distribution π. T...
https://arxiv.org/abs/2505.21640v1
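The torus forward diffusion above is easy to simulate: run Euler–Maruyama for the OU drift and wrap each coordinate back into the fundamental domain [−π, π) after every step. A minimal sketch under that assumption (the wrapping convention and function names are ours, not the paper's):

```python
import numpy as np

def wrap(x):
    """Wrap coordinates into the fundamental domain [-pi, pi) of T^d."""
    return (x + np.pi) % (2 * np.pi) - np.pi

def forward_torus(x0, T=5.0, n_steps=500, seed=0):
    """Euler-Maruyama for dX_t = -X_t/2 dt + dB_t, wrapped onto the torus.
    The drift acts on the representative in [-pi, pi)^d."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = wrap(x - 0.5 * x * dt + np.sqrt(dt) * rng.standard_normal(x.shape))
    return x

x_T = forward_torus(np.zeros(2))   # endpoint of the forward diffusion on T^2
```

For large T the wrapped endpoint distribution approaches the wrapped standard Gaussian, which is the stationary law used to initialize the reverse process.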
for this covariance term would require at least d² arithmetic operations. However, as a result of the symmetries of the sphere, the covariance matrix has additional structure: it is a multiple α(X_t, t) of the d×d identity matrix. Thus, to learn this covariance term, it is sufficient to train a model ˆα(X_t, t) for α(X_t, t). This can be ac...
https://arxiv.org/abs/2505.21640v1
Σ_{j∈[n], j≠i} 1/(λ_i − λ_j) for each i ∈ [n]. Here, ˆz = b e^{−(T−t)/2} + √(1 − e^{−(T−t)}) G, where G is a Gaussian random matrix with i.i.d. N(0,1) entries and UΛU∗ denotes the spectral decomposition of ˆz + ˆz∗. To learn the SDE of the reverse diffusion, we must also train a model for the covariance term, which is given by a d×d = n²×n² covar...
https://arxiv.org/abs/2505.21640v1
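The sampling step described above can be sketched directly: draw ˆz = b e^{−(T−t)/2} + √(1 − e^{−(T−t)}) G, form the Hermitian part ˆz + ˆz∗, take its spectral decomposition, and evaluate the eigenvalue-gap sums Σ_{j≠i} 1/(λ_i − λ_j). A real-valued illustration with our own function names (the paper works more generally over U(n)):

```python
import numpy as np

def sample_z_hat(b, t, T, seed=0):
    """Sample z_hat = b * exp(-(T-t)/2) + sqrt(1 - exp(-(T-t))) * G,
    where G has i.i.d. N(0,1) entries."""
    rng = np.random.default_rng(seed)
    n = b.shape[0]
    G = rng.standard_normal((n, n))
    return b * np.exp(-(T - t) / 2) + np.sqrt(1 - np.exp(-(T - t))) * G

n, T, t = 4, 2.0, 0.5
b = np.eye(n)                              # a single (real) datapoint
z_hat = sample_z_hat(b, t, T)

# spectral decomposition of the symmetric part z_hat + z_hat^T
lam, U = np.linalg.eigh(z_hat + z_hat.T)

# eigenvalue-gap sums sum_{j != i} 1/(lam_i - lam_j), one per eigenvalue
gaps = np.array([sum(1.0 / (lam[i] - lam[j]) for j in range(n) if j != i)
                 for i in range(n)])
```

The gaps λ_i − λ_j are distinct with probability one for Gaussian G, so the sums are well defined almost surely.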
boundary is composed of piecewise-flat (d−1)-dimensional faces. Geodesics restricted to a single face are linear and computable efficiently, satisfying the spirit of property (1). For property (2), one can define a projection φ: R^d → M that maps any x ∈ R^d to the point where the ray emanating from p and passing through x interse...
https://arxiv.org/abs/2505.21640v1
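The radial projection onto a polytope boundary described above has a simple closed form. For a bounded polytope {y : Ay ≤ b} with interior point p, the ray p + t(x − p) first exits through the facet attaining t* = min over rows i with a_i·(x−p) > 0 of (b_i − a_i·p)/(a_i·(x−p)). A sketch with our own helper names:

```python
import numpy as np

def radial_projection(x, p, A, b):
    """Map x to the point where the ray from the interior point p through x
    meets the boundary of the polytope {y : A y <= b}.
    Assumes the polytope is bounded and x != p."""
    d = x - p
    num = b - A @ p          # slack of each constraint at p (positive)
    den = A @ d              # rate at which each constraint tightens
    active = den > 1e-12     # only constraints the ray moves toward
    t_star = np.min(num[active] / den[active])
    return p + t_star * d

# unit box [-1, 1]^2 written as A y <= b
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
p = np.zeros(2)
y = radial_projection(np.array([3.0, 0.0]), p, A, b)   # hits the facet y1 = 1
```

Boundedness guarantees at least one positive denominator, so t* always exists.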
∥A∥_{2→2} := sup_{v₁∈V₁\{0},...,v_k∈V_k\{0}} ∥A(v₁,...,v_k)∥₂ / (∥v₁∥₂···∥v_k∥₂). 13. Partial derivative d/dU. In parameterizations of the form x = x(U,Λ), we write (d/dU) x(U,Λ) for the derivative with respect to U ∈ M. For example, if M = SO(n) and x(U,Λ) = UΛU^⊤, then this derivative corresponds to projecting UΛ + ΛU^⊤ onto the tangent space of SO(...
https://arxiv.org/abs/2505.21640v1
the sectional curvature K(u,v) is defined as the Gaussian curvature of the 2-dimensional surface in M obtained by exponentiating the plane spanned by u and v at x. Formally, the sectional curvature of the plane Π = span{u,v} ⊆ T_xM is given by: K(u,v) := ⟨R(u,v)v, u⟩ / (∥u∥²∥v∥² − ⟨u,v⟩²), where R is the Riemann curvature tensor, and ⟨·,·⟩...
https://arxiv.org/abs/2505.21640v1
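The sectional-curvature formula above can be checked numerically on a constant-curvature space, where the curvature tensor has the known form R(u,v)w = K(⟨v,w⟩u − ⟨u,w⟩v) (e.g. K = 1 for the unit sphere). A minimal sketch, with our own function names:

```python
import numpy as np

def sectional_curvature(u, v, R):
    """K(u,v) = <R(u,v)v, u> / (|u|^2 |v|^2 - <u,v>^2),
    for linearly independent u, v."""
    num = R(u, v, v) @ u
    den = (u @ u) * (v @ v) - (u @ v) ** 2
    return num / den

def constant_curvature_R(K):
    """Curvature tensor of a constant-curvature space:
    R(u,v)w = K * (<v,w> u - <u,w> v)."""
    def R(u, v, w):
        return K * ((v @ w) * u - (u @ w) * v)
    return R

u = np.array([1.0, 0.5, 0.0])
v = np.array([0.2, 1.0, 0.3])
K = sectional_curvature(u, v, constant_curvature_R(1.0))   # recovers 1.0
```

Plugging the constant-curvature tensor into the formula returns K for every plane, confirming that the normalization by ∥u∥²∥v∥² − ⟨u,v⟩² makes K(u,v) depend only on the plane, not on the chosen basis.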
arXiv:2505.21652v1 [cs.RO] 27 May 2025. PartInstruct: Part-level Instruction Following for Fine-grained Robot Manipulation. Yifan Yin∗¹, Zhengtao Han∗², Shivam Aarya¹, Jianxin Wang¹, Shuhang Xu¹, Jiawei Peng¹, Angtian Wang¹, Alan Yuille¹, Tianmin Shu¹. ¹Johns Hopkins University, ²ShanghaiTech University. https://partinstruct.github.io Pick ...
https://arxiv.org/abs/2505.21652v1
object but also understand and interact with specific parts of that object to perform the intended task as instructed. This involves reasoning about the relationship between the Table I: Comparison of PartInstruct with existing tabletop robot manipulation benchmarks based on: the number of distinctive part-level instru...
https://arxiv.org/abs/2505.21652v1
to a different type of generalization test. Together, these tests assess how well a learned policy performs in unseen scenarios, including new states, objects, and tasks. We compare PartInstruct with several existing tabletop manipulation benchmarks in Table I. We evaluated multiple state-of-the-art vision-language pol...
https://arxiv.org/abs/2505.21652v1
semantic part-level instructions. However, it does not support policy learning; instead, it outputs final goal positions and orientations, relying on an oracle planner to plan for intermediate actions. There have been recent approaches supporting part-level manipulation, such as Composable Part-based Manipulation (CP...
https://arxiv.org/abs/2505.21652v1
POS_INIT_OBJ+VEC(dir)). Phase 2: GRIPPER_OPEN, MIN_DISTANCE(gripper, obj). D. Rotate the part of the object to face the opposite direction: FACING(part, ∼DIR_INIT(part)), ON(obj, table). Table III: Definitions of base skills. grasp_obj(obj, part): Robot grasps obj at part. move_gripper(dir, dis=UNIT, grasping...
https://arxiv.org/abs/2505.21652v1
To “push” the bucket’s left part, the robot must first touch the left side of the bucket by executing touch_part(bucket, left), then move the end effector to the right via move_gripper(right). Following the “push” action, the robot executes release_gripper() to complete the task. We hypothesize that structuring fin...
https://arxiv.org/abs/2505.21652v1
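The "push" decomposition above is a chain of base skills executed in order. The toy executor below illustrates the idea; the skill names (touch_part, move_gripper, release_gripper) follow the text, but the logging environment and run_chain helper are our own illustration, not the benchmark's API:

```python
class ToyEnv:
    """A stand-in environment that records which skills were invoked."""

    def __init__(self):
        self.log = []

    def touch_part(self, obj, part):
        self.log.append(f"touch_part({obj}, {part})")

    def move_gripper(self, direction):
        self.log.append(f"move_gripper({direction})")

    def release_gripper(self):
        self.log.append("release_gripper()")

def run_chain(env, chain):
    """Execute a skill chain given as (skill_name, args) tuples, in order."""
    for name, args in chain:
        getattr(env, name)(*args)
    return env.log

# the "push the bucket's left part" example from the text
env = ToyEnv()
log = run_chain(env, [
    ("touch_part", ("bucket", "left")),
    ("move_gripper", ("right",)),
    ("release_gripper", ()),
])
```

Structuring tasks this way lets a high-level planner emit one skill at a time while a low-level policy handles each primitive.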
Figure 4: Annotated parts grouped by object categories. The horizontal axis stands for different part names, and the vertical axis gives different object categories. The value in the hea...
https://arxiv.org/abs/2505.21652v1
such that “[part] is facing [direction].” To perform these tasks, the model needs not only to know the location of the part but also to infer its final state. The agent must manipulate some part of the object to achieve that state, even when the part being dir... (Figure 6: Representative object assets from PartInstruct.)
https://arxiv.org/abs/2505.21652v1
have been two common types of approaches: (1) end-to-end policy learning that directly maps observation and instruction to actions (e.g., [48, 8, 23, 32, 41, 11, 13, 5, 49]) and (2) bi-level planning that first generates high-level plans (typically subgoals), then computes and executes the low-level action plans to achi...
https://arxiv.org/abs/2505.21652v1
scratch and fine-tuned the pretrained baseline Octo on our training data. Our hypothesis is that fine-tuning Octo will improve its performance on our benchmark by leveraging its large-scale pretraining on Open X-Embodiment [28]. The implementation details can be found in Appendix D. 2) Results: To evaluate each learned...
https://arxiv.org/abs/2505.21652v1
learning and bi-level planning. Standard errors are reported alongside each value. The best-performing results are highlighted in bold.
Baselines | Test 1 (OS) | Test 2 (OI) | Test 3 (TP) | Test 4 (TC) | Test 5 (OC) | All
End-to-End Learning — Octo | 1.82±1.3 | 0.0 | 0.91±0.1 | 0.0 | 3.33±3.2 | 1.11±1.5
Act3D | 6.25±1.8 | 5.68±1.7 | 4.55±1.6 | 0.0 | 2.08±...
https://arxiv.org/abs/2505.21652v1
an additional vision input. Since there is no general-purpose object part segmentation model for 3D point clouds [39, 36], we obtain the 3D part segmentation using a lift-to-3D method. In detail, we first apply the same method as in DP to obtain a 2D segmentation mask tracked using SAM2. We then lift the 2D mask int...
https://arxiv.org/abs/2505.21652v1
likely to accumulate. C. Ablation Studies. In Section IV-B, we demonstrate that bi-level planning models with low-level action policies informed by part segmentation perform significantly better than state-of-the-art end-to-end policies. To evaluate the effect of each component of the high-level planning models, we co...
https://arxiv.org/abs/2505.21652v1
in the capacity of state-of-the-art segmentation methods to accurately segment object parts. V. DISCUSSION. How well can current vision-language policies perform in our part-level manipulation tasks? The experimental results on our benchmark systematically reveal the performance of current vision-language policies i...
https://arxiv.org/abs/2505.21652v1
more fine-grained vision grounding than object-level tasks, since the part-level information is much more detailed and changes dynamically over time (e.g., the front of a mug at the current step may no longer be the front in future steps after rotation). By decomposing the task into part-level tasks, we reduce the burd...
https://arxiv.org/abs/2505.21652v1
current VLMs perform in the planning for fine-grained manipulation tasks? Our experiments show that bi-level planning baselines significantly outperform end-to-end policy learning approaches, as indicated in Table V. This suggests that current VLMs possess certain capabilities in understanding and reasoning about part...
https://arxiv.org/abs/2505.21652v1
Song, Hao Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015. [5] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. Diffusion policy: Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137, 20...
https://arxiv.org/abs/2505.21652v1
Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 9493–9500. IEEE, 2023. [22] Weiyu Liu, Jiayuan Mao, Joy Hsu, Tucker Hermans, Animesh Garg, and Jiajun...
https://arxiv.org/abs/2505.21652v1
Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, and Christoph Feichtenhofer. SAM 2: Segment anything in images and videos, 2024. URL https://arxiv.org/abs/2408.00714. [35] Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, Zhaoyang Zeng, Hao Zhan...
https://arxiv.org/abs/2505.21652v1
Nahavandi. A survey of imitation learning: Algorithms, recent developments, and challenges. IEEE Transactions on Cybernetics, 2024. [49] Yanjie Ze, Gu Zhang, Kangning Zhang, Chenyuan Hu, Muhan Wang, and Huazhe Xu. 3D diffusion policy. arXiv preprint arXiv:2403.03954, 2024. [50] Shengqiang Zhang, Philipp Wicke, L¨u...
https://arxiv.org/abs/2505.21652v1
air, then rotate part to point towards direction 2 — GRASPING(obj), AT_POSITION(obj, POS_INIT_OBJ+VEC(UP)+VEC(dir1)), FACING(part, dir2). Table XII: Unseen Task Instructions and Goal States. Unseen (6) — Order | Example Task Instruction | Goal States. 11 | Rotate part in the air so it points towards direction, then put it down | Phase...
https://arxiv.org/abs/2505.21652v1
skill instruction. 3) Visualization of Test Splits: We provide the visualization of all 5 test sets in this section. Figure 10: Left: Training set. Right: Test 1 (OS). Figure 11: Left: Training set. Right: Test 2 (OI). Figure 12: Above: Training set. Below: Test 3 (TP). Figure 13: Above: Training set. Below: Test 4 (TC). F...
https://arxiv.org/abs/2505.21652v1
of 8. The input RGB images are cropped to a size of 76×76. For language instructions, we use a pre-trained T5-small language encoder to obtain a language embedding of 512 dimensions. This language embedding is then concatenated with other features to form the final feature representation. 3D Diffusion Policy (DP3): The...
https://arxiv.org/abs/2505.21652v1
task planner features a skill inference mechanism that leverages comprehensive contextual information, including user task instructions, previously executed skill chains, and real-time state data such as vision and pose information, to determine the next appropriate action. Recall that the high-level task planner upda...
https://arxiv.org/abs/2505.21652v1
an expert at planning manipulation tasks. You will be given one task instruction for each manipulation task. Each task instruction can be divided into a chain of skill instructions. Your job is to infer the next skill instruction (you only need to output one immediate next skill instruction each time, even if the entir...
https://arxiv.org/abs/2505.21652v1
front, back -Pliers: base body, leg, outlier, left, right, top, bottom, front, back -Bottle: mouth, lid, body, neck, left, right, top, bottom, front, back -Knife: base body, translation blade, rotation blade, left, right, top, bottom, front, back -Stapler: base body, lid, body, left, right, top, bottom, front, back -Ke...
https://arxiv.org/abs/2505.21652v1
arXiv:2505.21657v1 [cs.CL] 27 May 2025. EXPLAINABILITY OF LARGE LANGUAGE MODELS USING SMILE: STATISTICAL MODEL-AGNOSTIC INTERPRETABILITY WITH LOCAL EXPLANATIONS. Zeinab Dehghani, University of Hull, United Kingdom, z.dehghani-2023@hull.ac.uk. Koorosh Aslansefat, University of Hull, United Kingdom, k.aslansefat@hull.ac.uk. Adil Kh...
https://arxiv.org/abs/2505.21657v1
training to decide which input elements are the most important [ 6]. This process is more than just identifying individual keywords; it is about picking up sophisticated connections and context that shape the model’s responses [ 10]. Understanding how this works is crucial: it explains why these models excel in some sc...
https://arxiv.org/abs/2505.21657v1
to create explainable large language models. We discuss the key challenges involved and the techniques being developed to improve how we interpret and trust these powerful models. 2.1 Large Language Models Large language models (LLMs) have become increasingly common thanks to significant strides in deep learning, bette...
https://arxiv.org/abs/2505.21657v1
36] — Customizable models for research applications. Retrieval-Augmented: Command R+ [30], GPT-4 Bing [26], Claude 3 RAG [28], Gemini Search [29] — Enhance information retrieval accuracy, reduce misinformation. Long-Text Processing: Claude 3 [28], Gemini 1.5 [29], GPT-4o [27], Grok-1.5 [37] — Specialized in processing long-fo...
https://arxiv.org/abs/2505.21657v1
to weigh data points. For instance, SMILE uses statistical techniques to generate more consistent and reliable explanations. However, while these improvements enhance reliability, they also come with increased computational complexity. Another approach to improving LIME focuses on upgrading its simple linear models wit...
https://arxiv.org/abs/2505.21657v1
controlling model behaviour. Additionally, the SEER framework [ 81] introduces self-explainability mechanisms to enhance the interpretability of internal representations in LLMs. The CELL framework [ 82] further advances mechanistic interpretability by integrating concept-based explanations directly into the training a...
https://arxiv.org/abs/2505.21657v1
3.2 Input-Level Distance. To measure how different each perturbed input ˆx_j is from the original prompt x, we compute the semantic distance using Word Mover's Distance (WMD) [103]: δ_{x_j} := WMD(x, ˆx_j). (1) Equation 1 quantifies the semantic dissimilarity between x and ˆx_j, ensuring that closer perturbations are treated as mo...
https://arxiv.org/abs/2505.21657v1
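WMD itself solves an optimal-transport problem between word embeddings, but Kusner et al. also give a cheap nearest-neighbor relaxation that lower-bounds it: each word travels to its closest counterpart, and the two directions are combined with a max. A toy sketch under that relaxation (the 2-d embeddings below are made-up illustrative values, not real word vectors):

```python
import numpy as np

# Toy word embeddings (hypothetical 2-d vectors, for illustration only).
EMB = {
    "meaning": np.array([1.0, 0.0]),
    "life":    np.array([0.9, 0.3]),
    "of":      np.array([0.0, 0.1]),
    "what":    np.array([0.1, 0.0]),
    "purpose": np.array([0.95, 0.1]),
}

def relaxed_wmd(doc_a, doc_b):
    """Nearest-neighbor relaxation of Word Mover's Distance: each word in
    one document moves to its closest word in the other; the max of the
    two one-way costs is a lower bound on the full WMD."""
    def one_way(src, dst):
        return np.mean([min(np.linalg.norm(EMB[w] - EMB[u]) for u in dst)
                        for w in src])
    return max(one_way(doc_a, doc_b), one_way(doc_b, doc_a))

d_close = relaxed_wmd(["meaning", "of", "life"], ["purpose", "of", "life"])
d_far   = relaxed_wmd(["meaning", "of", "life"], ["what", "what", "what"])
```

A semantically close paraphrase yields a much smaller distance than an unrelated perturbation, which is exactly the property the weighting in Eq. 1 relies on.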
the generated outputs. Understanding the interaction between input prompts and generated text improves transparency and predictability in the model’s behaviour [8, 102]. In the context of classification tasks, Fig. 4 illustrates SMILE, a tool designed to explain model predictions by isolating critical features that dri...
https://arxiv.org/abs/2505.21657v1
original input and the perturbed texts [108]. While the text generation model faces challenges in tasks such as counting objects and spatial reasoning, these issues can be mitigated by carefully selecting perturbation texts and restricting the input domain for perturbations [ 109]. The text generation models enable the...
https://arxiv.org/abs/2505.21657v1
(13) A PREPRINT - May 29, 2025. In the above relation, the δ_WMD function represents the embedding extraction model applied to the texts before calculating the distance [103]. In Eq. 13, n represents the number of features, p denotes the norm order, and W(t′_org, t′ⁱ_pert) denotes the Wasserstein distance between th...
https://arxiv.org/abs/2505.21657v1
used to measure the degree of similarity between each perturbation text and the original, and this value is employed as a sample weight in the interpretable model. Eq. 15 calculates this similarity, and a Gaussian kernel is applied to normalise it into the range [0,1] [103]. WMD(p_org, p^i_pert...
https://arxiv.org/abs/2505.21657v1
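The weighting-plus-surrogate pipeline described above can be sketched in a few lines: map distances through a Gaussian kernel so near perturbations dominate, then fit a weighted least-squares surrogate whose coefficients serve as the local explanation. The kernel bandwidth, mask encoding, and scores below are our own illustrative choices, not values from the paper:

```python
import numpy as np

def gaussian_kernel_weights(distances, sigma=0.5):
    """Map WMD-style distances to sample weights in (0, 1]:
    w_i = exp(-d_i^2 / sigma^2), so nearer perturbations weigh more."""
    d = np.asarray(distances, dtype=float)
    return np.exp(-(d ** 2) / sigma ** 2)

def weighted_linear_fit(X, y, w):
    """Weighted least squares: solves (X^T W X) beta = X^T W y,
    the interpretable surrogate fit on perturbed samples."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# binary presence masks over 3 prompt words, black-box scores y (made-up)
X = np.array([[1, 1, 1], [0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
y = np.array([1.0, 0.2, 0.9, 0.4])
w = gaussian_kernel_weights([0.0, 0.6, 0.3, 0.5])
coef = weighted_linear_fit(X, y, w)   # per-word attribution weights
```

Each coefficient estimates how much removing the corresponding word shifts the black-box score, with faraway perturbations down-weighted by the kernel.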
Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) [121]. AUC reflects the model's ability to rank relevant elements (from the ground truth) higher than irrelevant ones: • AUC ∼ 1: The model effectively distinguishes relevant elements, closely matching human-identified ground truth. • AUC ∼ 0.5: The mod...
https://arxiv.org/abs/2505.21657v1
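The two reference points above (AUC near 1 vs. near 0.5) follow directly from AUC's rank interpretation: the probability that a randomly chosen relevant element is scored above a randomly chosen irrelevant one, with ties counting half. A small self-contained sketch (illustrative scores, our own helper name):

```python
def auc_score(relevant, scores):
    """AUC as the probability that a randomly chosen relevant element is
    scored above a randomly chosen irrelevant one (ties count 1/2)."""
    pos = [s for s, r in zip(scores, relevant) if r]
    neg = [s for s, r in zip(scores, relevant) if not r]
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

# ground-truth relevance vs. attribution weights (illustrative values)
relevant = [True, True, False, False]
perfect  = auc_score(relevant, [0.9, 0.8, 0.2, 0.1])   # all pairs ordered
random_y = auc_score(relevant, [0.5, 0.5, 0.5, 0.5])   # all pairs tied
```

A perfect ranking gives 1.0, while constant scores (no discrimination) give exactly 0.5, matching the bullet points above.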
This consistency is crucial for applications where repeatable and reliable results are required. In our case, the model consistently assigns higher weights to the words ‘‘meaning’’ and ‘‘life’’, demonstrating that it understands the importance of these words in generating the desired output [ 6]. Such behaviour undersc...
https://arxiv.org/abs/2505.21657v1
models’ predicted scores, reflecting the risk of prediction discrepancies. We compute fidelity across various scenarios. Fidelity is measured by comparing the predictions of the explainable model to those of the black-box text generation model. We analyse how different perturbations in the input text affect fidelity sc...
https://arxiv.org/abs/2505.21657v1
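Fidelity as described above compares the surrogate's predictions against the black-box scores. One natural instantiation, consistent with the WMSE column reported later, is a weighted mean squared error; the helper name and numbers below are our own illustration:

```python
import numpy as np

def weighted_mse(y_blackbox, y_surrogate, w):
    """Fidelity as a weighted mean squared error between black-box scores
    and the surrogate's predictions; lower values mean the local
    explanation tracks the model more faithfully."""
    w = np.asarray(w, dtype=float)
    err = np.asarray(y_blackbox, dtype=float) - np.asarray(y_surrogate, dtype=float)
    return float(np.sum(w * err ** 2) / np.sum(w))

# a faithful surrogate vs. an unfaithful one on the same perturbations
fid_good = weighted_mse([1.0, 0.2, 0.9], [0.95, 0.25, 0.88], [1.0, 0.8, 0.9])
fid_bad  = weighted_mse([1.0, 0.2, 0.9], [0.20, 0.90, 0.10], [1.0, 0.8, 0.9])
```

Comparing fidelity across perturbation schemes (as in the tables) then amounts to comparing such scores computed under each distance/weighting choice.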
including science, history, law, and the humanities. For this study, we accessed a pre-formatted version of the dataset available on Kaggle [ 131] to facilitate prompt construction and experimentation. Figure 9 shows a set of attribution heatmaps that visualise which parts of the input prompt most influenced the model’...
https://arxiv.org/abs/2505.21657v1
standard deviation metrics for each word coefficient. This step is essential to confirm that the model’s predictions are not random or heavily dependent on initialisation factors but are instead deterministic and robust. Table 3: Consistency metrics for different models on the prompt ‘‘What is the meaning of life?’’ Mo...
https://arxiv.org/abs/2505.21657v1
Fidelity Metrics (T vs T / T vs ˆT):
| WMSE | R²_ω | WMAE | mean-L1 | mean-L2 | R²_ˆω
Cosine / Cosine | 0.0172 | 0.3151 | 0.0659 | 0.1277 | 0.0412 | 0.1508
Cosine / WD | 0.0216 | 0.4197 | 0.0899 | 0.1332 | 0.0385 | 0.2805
WD / WD | 0.0388 | 0.7104 | 0.1731 | 0.2035 | 0.0609 | 0.6409
WD / Cosine | 0.0048 | 0.4026 | 0.0329 | 0.0871 | 0.0296 | 0.2593
WD+C / WD+C | 0.0349 | 0.5349 | 0.1589 | 0.3050 | 0.1468...
https://arxiv.org/abs/2505.21657v1
this problem.’’ As shown in Figure 10, the most influential words identified were ‘‘arithmetic’’, ‘‘approach’’, and ‘‘solution’’, confirming their importance in shaping reasoning-oriented outputs. We then extended the analysis to GPT-4 using the instruction: ‘‘Let’s combine our numerical command and clear thinking...
https://arxiv.org/abs/2505.21657v1
not provide API access and require manual local deployment. 7 Conclusion Interpretability remains a cornerstone of responsible AI, particularly in the context of large language models (LLMs) and instruction-based text generation. As AI systems become increasingly embedded in domains such as business, education, healthc...
https://arxiv.org/abs/2505.21657v1
to contribute to standardised evaluation protocols for prompt sensitivity, attribution robustness, and local fidelity. These benchmarks will support consistent assessment across models and help identify best practices for interpretable generation. Data Availability The datasets used in this article are publicly availab...
https://arxiv.org/abs/2505.21657v1
arXiv preprint arXiv:1702.08608 , 2017. [15] Z. C. Lipton, “The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery,” Queue , vol. 16, no. 3, pp. 31–57, 2018. [16] S. M. Ahmadi, K. Aslansefat, R. Valcarce-Dineiro, and J. Barnfather, “Explainability of Po...
https://arxiv.org/abs/2505.21657v1
[43] Xiang Lisa Li et al. XGLM: An Extra Large Cross-lingual Language Model , arXiv preprint arXiv:2112.10668, 2021. [44] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” In Proceedings of the 2019 Conference of the North American C...
https://arxiv.org/abs/2505.21657v1
Series Models. In Proceedings of the AAAI Conference on Artificial Intelligence , 2021. [69] Xiaoran Huang, Yu Rong, Tingyang Xu, Wenbing Huang, and Junzhou Huang. GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems , 2022. [70] Saumitra...
https://arxiv.org/abs/2505.21657v1
Transport: With Applications to Data Science. Foundations and Trends in Machine Learning, 11(5-6), 2019, pp. 355–607. [102] Molnar, C. Interpretable Machine Learning. Lulu.com, 2020. https://christophm.github.io/interpretable-ml-book/ [103] Kusner, M., Sun, Y., Kolkin, N., and Weinberger, K. From Word Embeddings to ...
https://arxiv.org/abs/2505.21657v1
G., Lapuschkin, S., Anders, C. J., and Müller, K.-R. “Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models,” arXiv preprint arXiv:1708.08296 , 2017. [124] Draper, N. R., and Smith, H. Applied Regression Analysis , 3rd ed. John Wiley & Sons, 1998. [125] Hocking, R. R. “Th...
https://arxiv.org/abs/2505.21657v1
Expert Survey: AI Reliability & Security Research Priorities. May 2025. By Joe O’Brien¹*†, Jeremy Dolan²*, Jay Kim³, Jonah Dykhuizen², Jeba Sania⁴, Sebastian Becker⁵, Jam Kraprayoon¹, and Cara Labrador¹. * Equal contribution. † Corresponding author: joe@iaps.ai. ¹ Institute for AI Policy and Strategy (IAPS). ² Independent Re...
https://arxiv.org/abs/2505.21664v1
statistical confidence; results should be read as directional. Future iterations will aim to broaden participation and capture updates in priorities closer to real time. Our study reveals a consistent message from respondents: significant, actionable opportunities exist within technical AI reliability and security resear...
https://arxiv.org/abs/2505.21664v1
by several bottlenecks: ● Resources: While funding for AI R&S has increased in recent years, it remains inadequate relative to the scale and urgency of the problem. ● Expertise: The field faces a shortage of researchers with the necessary technical skills. ● Uncertainty: There is considerable uncertainty about which res...
https://arxiv.org/abs/2505.21664v1
inspiration from the Centre for the Governance of AI’s AGI Safety and Governance Survey (Schuett et al. 2023), which similarly aggregated expert judgments to identify high-priority policy interventions. Scope of Results. The central result of this survey i...
https://arxiv.org/abs/2505.21664v1
This foundational source provided a well-defined categorization aligned with ongoing discussions within relevant research communities. The taxonomy used in the survey, with brief descriptions of all areas, is reproduced in Appendix A. Items selected for inclusion met two primary criteria: 1. Technical Focus: To maintain...
https://arxiv.org/abs/2505.21664v1
and asked a series of questions. Respondents then assessed that sub-area along dimensions of importance and tractability. ● Importance: “Resolving the core challenges of this sub-area and implementing the resulting solutions would significantly reduce the risk of severe harm (loss of >100 lives or >$10 billion in econom...
https://arxiv.org/abs/2505.21664v1
or fewer ratings in either importance or tractability were excluded from quantitative analysis due to insufficient data for meaningful statistical interpretation. Exclusions are listed in Appendix C and limitations related to this exclusion are discussed in the Limitations section below. For each sub-area with sufficient r...
https://arxiv.org/abs/2505.21664v1
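The aggregation described above — excluding sub-areas with too few ratings, then summarizing the rest — can be sketched as follows. The exclusion threshold, the rating values, and the simple averaging used for the "promise" combination are our own illustrative assumptions, not the survey's exact methodology:

```python
def aggregate(ratings, min_n=3):
    """Summarize expert ratings per sub-area.

    ratings: dict mapping sub-area name to a list of
             (importance, tractability) pairs from individual experts.
    Sub-areas with fewer than min_n ratings are excluded, mirroring the
    exclusion of thinly rated sub-areas described in the text."""
    out = {}
    for area, rs in ratings.items():
        if len(rs) < min_n:
            continue  # insufficient data for meaningful statistics
        imp = sum(r[0] for r in rs) / len(rs)
        tra = sum(r[1] for r in rs) / len(rs)
        out[area] = {"importance": imp, "tractability": tra,
                     "promise": (imp + tra) / 2}  # illustrative combination
    return out

# illustrative ratings on a 1-5 scale (made-up values)
ratings = {
    "forecasting capabilities": [(5, 4), (5, 4.5), (5, 4.25)],
    "supply chain security": [(5, 2)],   # excluded: only one rating
}
summary = aggregate(ratings)
```

Sorting the surviving sub-areas by the combined score reproduces the kind of promise ranking the survey reports.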
Strategic Long-Term Opportunities. Several research areas emerged with strong consensus on their importance despite lower perceived tractability. These critical domains—primarily in security implementation and applied systems—represent strategic priorities that may require longer timelines and substantial resource ...
https://arxiv.org/abs/2505.21664v1
shorthand throughout the paper. centered on theoretical frameworks: ● Evaluation-focused approaches dominate the top rankings. Nine of the top 15 approaches emphasize evaluation, detection, or monitoring of harms (such as "Emergence and task-specific scal...
https://arxiv.org/abs/2505.21664v1
laws. Ranking: Highest ranked sub-area according to promise score (T = 4.25, I = 5). Why It Matters: Anticipating and mitigating potential severe harms from future AI capabilities necessitates accurate forecasting of when and how these capabilities may emerge. This foresight enables the proactive implementation of safe...
https://arxiv.org/abs/2505.21664v1
frameworks for monitoring and evaluating LLM agents (T. Yuan et al. 2024; Ruan et al. 2024; Guo et al. 2024). A relevant field, which in our taxonomy belongs to the sub-area “Control mechanisms for untrusted models,” but has important overlap, is “AI control.”
https://arxiv.org/abs/2505.21664v1
of opportunities for making “measurable progress” in evaluation methodology and metrics. Several highlighted the potential for importing methodological best practices from experimental psychology, such as validity testing and control of confounding variables. Others pointed to a large and urgent need for better automat...
https://arxiv.org/abs/2505.21664v1
we received responses from a total of 53 experts—below our initial goal. This sample size limits the survey’s reliability as a gauge for the perspectives of the broader AI R&S expert community. Accordingly, we encourage readers to consider this survey as one piece of evidence among many, rather than as ground truth. Fo...
https://arxiv.org/abs/2505.21664v1
sums to underfunded research areas that score high on importance but low on tractability (e.g., supply chain security, access control and interface hardening). The AI R&D ecosystem may be unable to address these gaps without large, long-term investments usually provided by government. Incentivizing investment Short of ...
https://arxiv.org/abs/2505.21664v1
subsequent research should aim for: ● Broader and More Systematic Sampling: Employing methods to achieve higher response rates and potentially more representative samples of relevant expertise. Incorporating modest incentives, as suggested by Grace (2024...
https://arxiv.org/abs/2505.21664v1
to better targeting critical gaps. We must acknowledge the limitations of this study. First, this study represents a snapshot in a rapidly evolving field. Second, our sample size and response distribution limited our ability to glean insights across all research areas. Future iterations should pursue broader sampling, a...
https://arxiv.org/abs/2505.21664v1
verification methods to embedded agency, decision theory, incentive structures aligned with causal reasoning, and control theory. a. Building verifiable and robust AI architectures: Constructing AI systems with architectures that support formal verification and robustness guarantees, such as world models that enable safe ...
https://arxiv.org/abs/2505.21664v1
work includes (Miyato et al. 2019), (Lightman et al. 2023), and (Casper et al. 2024). c. Scalable techniques for targeted modifications of LLM behavior (including unlearning): Creating scalable methods for precisely adjusting model outputs, such as removing unwanted content or refining responses to adhere to alignment co...
https://arxiv.org/abs/2505.21664v1
behaviors. Example work includes (Leike et al. 2018) and (Jeff Wu et al. 2021). 4. Understanding in-context learning, reasoning, and scaling behavior: Methods to gain a comprehensive understanding of how large language models learn, reason, and scale, such as by examining in-context learning (ICL) mechanisms, the influen...
https://arxiv.org/abs/2505.21664v1