Table 1: Number of gradient evaluations and arithmetic operations per training iteration, as a function of the dimension d and target accuracy δ ("same" indicates the same cost as in the preceding column).

| Method | Grad. evals | SO(n), U(n) | Sphere | Torus |
|---|---|---|---|---|
| RSGM (heat ker.) | 1 | 2^d + poly(d, 1/δ) | same | same |
| RSGM (ISM) | d | poly(d, 1/δ) | same | same |
| TDM (heat ker.) | 1 | — | — | d log(1/δ) |
| TDM (ISM) | d | poly(d, 1/δ) | poly(d, 1/δ) | d log(1/δ) |
| RDM | d | poly(d, 1/δ) | same | same |
| SCRD | 1 | 2^d + poly(d, 1/δ) | poly(d, 1/δ) | d log(1/δ) |
| This paper | 1 | d^{ω/2} log(1/δ) | d log(1/δ) | d log(1/δ) |

O(n^ω log(1/δ)) = O(d^{ω/2} log(1/δ)) arithmetic operations in the case of the special orthogonal group SO(n) or unitary group U(n), using the singular value decomposition of an n×n matrix, or in O(d log(1/δ)) operations for the sphere or torus. See Section 3 and Appendix C for details. This significantly improves the per-iteration runtime of training diffusion models on symmetric manifolds (Table 1). For instance, it achieves exponential-in-d savings in arithmetic operations compared to Riemannian Score-based Generative Models (RSGM) [12], as RSGM requires summing Ω(2^d) terms in truncated heat kernel expansions for manifolds such as the torus, sphere, and orthogonal or unitary groups, while our approach avoids this cost entirely. If RSGM is instead trained with an implicit score matching (ISM) objective, which includes a Riemannian divergence term requiring O(d) gradient evaluations, our model achieves a factor-of-d improvement in the number of gradient evaluations. Similarly, we obtain a factor-of-d improvement in gradient evaluations over Trivialized Momentum Diffusion Models (TDM) [37], which also rely on ISM objectives, on manifolds such as the orthogonal or unitary group. We also improve upon Scaling Riemannian Diffusion (SCRD) [23], whose heat kernel computations for the orthogonal or unitary group involve expansions with Ω(2^d) terms (SCRD does not provide an ISM objective). Additionally, while RSGM and Riemannian Diffusion Models (RDM) [18] use deterministic heat kernel approximations, these are asymptotically biased with fixed error bounds, and stochastic approximations to the implicit objective introduce dimension-dependent noise [23]. Our method improves the accuracy dependence from polynomial to logarithmic in 1/δ.
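The log(1/δ) dependence is possible because the forward noising process has a closed-form Gaussian transition followed by a single projection onto the manifold, so no iterative SDE solver is needed. A minimal sketch for the sphere case (function names are ours, for illustration):

```python
import numpy as np

def ou_forward_sample(z0, t, rng):
    # Exact sample of the Ornstein-Uhlenbeck process dZ = -Z/2 dt + dB at time t:
    # Z_t | Z_0 = z0 is N(z0 * e^{-t/2}, (1 - e^{-t}) I), so one Gaussian draw
    # replaces the poly(1/δ)-step SDE solvers used by prior methods.
    mean = z0 * np.exp(-t / 2)
    std = np.sqrt(1 - np.exp(-t))
    return mean + std * rng.standard_normal(z0.shape)

def phi_sphere(z):
    # Projection onto the unit sphere; the manifold-valued forward diffusion
    # is X_t = φ(Z_t).
    return z / np.linalg.norm(z)

rng = np.random.default_rng(0)
z0 = np.ones(4)
x_t = phi_sphere(ou_forward_sample(z0, t=1.0, rng=rng))  # a point on S^3
```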
Unlike solvers for SDEs or ODEs, which require polynomially many iterations in 1/δ, our forward diffusion adds a Gaussian vector and projects onto the manifold, achieving high accuracy at a cost only logarithmic in 1/δ.

2.3 Sampling algorithm and theoretical guarantees

Next, we present our sampling algorithm, which uses the trained models to simulate the reverse diffusion process.

Sampling procedure. Our training algorithm (Algorithm 1) outputs trained models f(x,t) and g(x,t) for the drift and covariance terms of our reverse diffusion. We then use these models to generate samples. First, we sample a point z from the stationary distribution of the Ornstein-Uhlenbeck process Z_t on R^d, which is Gaussian. Next, we project z onto the manifold to obtain a point y = φ(z), and solve the SDE dY_t = f(Y_t,t)dt + g(Y_t,t)dB_t, given by our trained models for the reverse diffusion's drift and covariance, over the time interval [0,T], starting at the initial point y. To simulate this SDE, we can use any off-the-shelf numerical SDE solver which takes as input the trained models f and g and the exponential map on M. We give one such solver in Algorithm 2, and prove guarantees on the accuracy of the samples it generates, and on its runtime, in Theorem 2.2. Our guarantees assume that the trained models f(x,t) and g(x,t) handed to this solver minimize our training objective to within some error ε > 0.

Symmetry and forward diffusion structure. Our theoretical guarantees hold when M satisfies a symmetry property and φ satisfies an "average-case" Lipschitz condition (Assumption 2.1). The symmetry property requires that each point z ∈ R^d can be parametrized
https://arxiv.org/abs/2505.21640v1
as z ≡ z(U,Λ), where U = φ(z) ∈ M and Λ ≡ Λ(z) ∈ A, for some A ⊆ R^{d−dim(M)}, is an additional parameter. For instance, on the sphere, U = z/∥z∥ is the projection onto the sphere and Λ = ∥z∥ is the distance to the origin. For SO(n) or U(n), the parametrization comes from the spectral decomposition z = UΛU*, where U ∈ M and Λ is a diagonal matrix. On the torus, U = φ(z) is the projection onto the torus and Λ ∈ 2πZ^d. Here Z_t, t ≥ 0, is the Ornstein-Uhlenbeck process on R^d, X_t := φ(Z_t) is our forward diffusion process on M, and Y_t := X_{T−t} is its time-reversal (see Section 3). This structure ensures that the reverse diffusion inherits well-behaved properties from the Euclidean process.

Average-case Lipschitzness.

Assumption 2.1 (Average-case Lipschitzness). For every t ∈ [0,T] there exists Ω_t ⊆ R^d, whose indicator function 1_{Ω_t}(x) depends only on Λ ≡ Λ(x), for which P(Z_t ∈ Ω_t ∀t ∈ [0,T]) ≥ 1 − α. For every x ∈ Ω_t we have ∥∇φ(x)∥_{2→2} ≤ L_1, ∥(d/dU)∇φ(x)∥_{2→2} ≤ L_1, ∥∇²φ(x)∥_{2→2} ≤ L_2, and ∥(d/dU)∇²φ(x)∥_{2→2} ≤ L_2. Moreover, ∥(d/dU)x∥_{2→2} ≤ ∥x∥_2. Here ∥·∥_{2→2} denotes the operator norm, and (d/dU)x denotes the derivative of the parametrization x = x(U,Λ) with respect to U ∈ M.

This assumption allows us to show that the projected reverse diffusion is well-posed and numerically stable, enabling sample-quality guarantees.

Verifying the assumption on common manifolds. We choose projection maps φ that satisfy Assumption 2.1 with small Lipschitz constants. For example, for T^d, the map φ(x)[i] = x[i] mod 2π, i ∈ [d], is 1-Lipschitz on all of R^d, trivially satisfying the assumption. For the sphere, φ(x) = x/∥x∥ is 2-Lipschitz outside a ball of radius 1/2 around the origin, where the forward diffusion remains with high probability 1 − O(2^{−d}). For U(n) (or SO(n)), the map φ(X), which computes the spectral decomposition U*ΛU of X + X*, has derivatives with magnitude bounded by the inverse eigenvalue gaps 1/(λ_i − λ_j). While singularities occur at points with duplicate eigenvalues, random matrix theory shows that eigengaps are w.h.p. bounded below by 1/poly(d), ensuring φ satisfies the average-case Lipschitz assumption.
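These projection maps are all cheap to evaluate. A sketch for the torus and unitary cases (function names are ours; NumPy's Hermitian eigendecomposition stands in for the SVD-based routine described above):

```python
import numpy as np

def phi_torus(z):
    # T^d: reduce each coordinate mod 2π; 1-Lipschitz on all of R^d.
    return np.mod(z, 2 * np.pi)

def phi_unitary(Z):
    # U(n): spectral decomposition Z + Z* = U Λ U*. The eigenvector factor U is
    # the projection onto the manifold; the eigenvalues Λ are the extra
    # parameter. Derivatives of φ scale like the inverse eigengaps
    # 1/(λ_i - λ_j), hence the average-case (not worst-case) Lipschitz bound.
    lam, U = np.linalg.eigh(Z + Z.conj().T)
    return U, lam

rng = np.random.default_rng(0)
n = 5
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, lam = phi_unitary(Z)
# U is unitary, and U diag(Λ) U* reconstructs the Hermitian part Z + Z*.
```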
For the unitary group, we show that Assumption 2.1 holds for L_1 = O(d^{1.5} √T α^{−1/3}) and L_2 = O(d² T α^{−2/3}) (Lemma 6.4). For the sphere, it holds for L_1 = L_2 = O(α^{−1/d}). For the torus, it holds for L_1 = L_2 = 1. These bounds, derived in Appendix C, imply the assumption holds with high probability under standard random matrix models.

Theoretical guarantees. We denote by ψ(M) := {ψ(x) : x ∈ M} ⊆ R^d the pushforward of M w.r.t. ψ.

Theorem 2.2 (Accuracy and runtime of sampling algorithm). Let ε > 0, and suppose that φ : R^d → M satisfies Assumption 2.1 for some L_1, L_2 ≤ poly(d) and α ≤ ε, and that ψ(M) is bounded by a ball of radius poly(d). Suppose that f̂ and ĝ are outputs of Algorithm 1, and that f̂ and ĝ minimize our training objective for the target distribution π with objective function value < ε. Then Algorithm 2, with inputs f̂ and ĝ, outputs a generated sample whose probability distribution ν satisfies

∥ν − π∥_TV < O( ε · (d³L_1 + d²L_2) log(d/ε) ) = Õ(ε · poly(d)).

Moreover, Algorithm 2 takes

O( (d⁴L_1 + d²L_2) log(d/ε) ) = poly(d) · log(d/ε)

iterations. Here, each iteration requires one evaluation of f̂ and ĝ, one evaluation of the exponential map on M, plus O(d) arithmetic operations.

Plugging our bounds on the average-case Lipschitz constants for the torus, sphere, SO(n), and U(n) (Lemma 6.4) into Theorem 2.2, we obtain the following guarantees for the accuracy and runtime of our sampling algorithm on these symmetric manifolds:

Corollary 2.3. Suppose that M is T^d, S^d, SO(n), or U(n) with n = √d. Suppose that φ and ψ are chosen as specified above for these manifolds. Suppose that f̂ and ĝ are outputs of Algorithm 1, and that f̂ and ĝ minimize our training objective for the target distribution π with objective function value < ε.
Then Algorithm 2, with inputs f̂ and ĝ, outputs a generated sample whose distribution ν satisfies

∥ν − π∥_TV ≤ O( ε · d⁶ log(d/ε) ) for the torus and sphere, and ∥ν − π∥_TV ≤ O( ε · d⁹ log(d/ε) ) for SO(n) and U(n).

Moreover, Algorithm 2 takes O( d⁴ log(d/ε) ) iterations for the torus
and sphere, and O( d^{5.5} log(d/ε) ) iterations for SO(n) and U(n). Each iteration requires one evaluation of f̂, one evaluation of ĝ, one evaluation of the exponential map on M, and O(d) arithmetic operations.

Comparison with prior work. Theorem 2.2 improves on the accuracy and runtime guarantees for sampling of [12] when M is one of the aforementioned symmetric manifolds, since their accuracy and runtime bounds for sampling are not polynomial in the dimension d (for instance, the "constant" term C ≡ C(M,d) in [12] has an unspecified dependence on the manifold and its dimension). Finally, we note that [23, 18] do not provide guarantees on the accuracy and runtime of their sampling algorithms, and that the runtime bounds for the sampling algorithm in [37] are not polynomial in the dimension. Improving the dependence on dimension remains an open question for future work.

Extension beyond symmetric manifolds. While our theoretical guarantees focus on symmetric manifolds, the algorithm itself applies more broadly. In Appendix D, we describe how projection maps φ and exponential maps can be constructed for certain non-smooth or non-symmetric spaces, such as convex polytopes. Although proving Lipschitz properties in these settings is more subtle due to curvature or boundary singularities, our framework may still apply empirically. Extending our theoretical guarantees to such general manifolds remains a promising direction for future work.

An overview of the proof is given in Section 4; the full proof appears in Section 6.

3 Derivation of training and sampling algorithm

Given a standard Brownian motion B_t in R^d, a µ : R^d → R^d, and an R : R^d → R^{d×d}, a stochastic process X_t satisfies the SDE dX_t = µ(X_t)dt + R(X_t)dB_t with initial condition x ∈ R^d if X_t = x + ∫₀ᵗ µ(X_s)ds + ∫₀ᵗ R(X_s)dB_s.

Lemma 3.1 (Itô's Lemma). Let ψ : R^d → R^k be a second-order differentiable function, and let X_t ∈ R^d be an Itô diffusion.
Then for all t ≥ 0 and all i ∈ [k], we have dψ(X_t)[i] = ∇ψ(X_t)[i]^⊤ dX_t + ½ dX_t^⊤ ∇²ψ(X_t)[i] dX_t.

The transition kernel p_{t|τ}(y|x) is the probability density that X takes the value y at time t conditional on X taking the value x at time τ. Given an initial distribution π, the probability density at time t is p_t(x) = ∫_M p_{t|0}(x|z) π(z) dz. For any diffusion X_t, t ∈ [0,T], its time-reversal Y_t is the stochastic process such that Y_t = X_{T−t} for t ∈ [0,T]. Y_t is also a diffusion, governed by an SDE. In the special case where X_t has identity covariance, dX_t = b(X_t)dt + dB_t, the reverse diffusion satisfies [1]

dY_t = −b(Y_t)dt + ∇log p_{T−t}(Y_t)dt + dB_t.    (1)

One can also define diffusions on Riemannian manifolds, in which case dB_t is the derivative of Brownian motion on the tangent space (see [17]). Below we show the key steps in deriving our diffusion model, training algorithm (Algorithm 1), and sampling algorithm (Algorithm 2).

Forward diffusion. Let Z_t be a diffusion on R^d initialized at q_0 = ψ(π). We choose Z_t to be the Ornstein-Uhlenbeck process, dZ_t = −½Z_t dt + dB_t, whose stationary distribution is N(0, I_d). Z_t is easy to sample, as it has a closed-form Gaussian transition kernel q_{t|τ}. Let X_t := φ(Z_t), the projection of Z_t onto M. X_t is our model's forward diffusion.

Reverse diffusion SDE. Let Y_t := X_{T−t} denote the time-reversal of X_t. Y_t is a diffusion on M, and its distribution at time T equals the target distribution π. It follows the SDE

dY_t = f⋆(Y_t, t)dt + g⋆(Y_t, t)dB_t,    (2)

for some functions f⋆(x,t) : M × [0,T] → T_xM and g⋆(x,t) : M × [0,T] → T_xM × T_xM. Here dB_t is the derivative of standard Brownian motion on M's tangent space. We write dB_t ≡ dB_t^x when x ∈ M is clear from context. We cannot directly apply (1) to derive a tractable SDE for the reverse diffusion Y_t on M, as the transition kernel p_{t|τ} of the
forward diffusion X_t on M lacks a closed-form expression. Instead, we first use (1) to obtain an SDE for the reverse diffusion of Z_t ∈ R^d, namely dH_t = (H_t/2 + ∇log q_{T−t}(H_t))dt + dB_t. We then use Itô's Lemma to project this SDE onto M, giving an SDE for Y_t (see Section 6.1):

dY_t = E[ ∇φ(H_t)^⊤ dH_t + ½ dH_t^⊤ ∇²φ(H_t) dH_t | φ(H_t) = Y_t ].    (3)

Training algorithm's objective function. From (3), we show that one can train models f, g for f⋆, g⋆ by solving an optimization problem (Lemma 6.2). Here f, g ∈ C(R^d, R^d) are continuous functions from R^d to R^d, t ∼ Unif[0,T], and J_φ denotes the Jacobian of φ:

min_f E_t E_{b∼π}[ ∥ (∇φ(Z_{T−t}))^⊤ (Z_{T−t} − ψ(b)e^{−(T−t)/2})/(e^{−(T−t)} − 1) + ½ tr(∇²φ(Z_{T−t})) − f(φ(Z_{T−t}), t) ∥² | Z_0 = ψ(b) ]

and

min_g E_t E_{b∼π}[ ∥ J_φ(Z_{T−t})^⊤ J_φ(Z_{T−t}) − g(φ(Z_{T−t}), t)² ∥²_F | Z_0 = ψ(b) ].    (4)

Sublinear computation of training objective. For manifolds with non-zero curvature, such as the sphere, SO(n), and U(n), our forward and reverse diffusions differ from prior works and incorporate a spatially varying covariance term to account for curvature. This allows the forward diffusion to be computed as a projection φ of the Ornstein-Uhlenbeck process in R^d ≡ R^{n×n} (or C^{n×n}) onto the manifold. For SO(n) or U(n), φ is computed by one singular value decomposition U*ΛU of the Gaussian matrix Z_{T−t} + Z*_{T−t}, requiring O(n^ω) = O(d^{ω/2}) arithmetic operations, where d = Θ(n²) is the manifold dimension. This enables computation of the drift term's gradient in (4) in O(d^{ω/2}) arithmetic operations and one gradient evaluation of f.

To train the reverse diffusion's SDE, we also need to model the covariance term in (4), a d×d = n²×n² matrix. To achieve a per-iteration runtime sublinear in its d² = n⁴ matrix entries, we leverage the special structure of the covariance matrix, which arises from the manifold's symmetries.
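The cost accounting above can be made concrete: a single eigendecomposition of the Hermitian part of the noised sample yields both the projected point U = φ(Z) and the eigenvalues whose gaps govern the covariance structure. A sketch (names are illustrative):

```python
import numpy as np

def spectral_data(Z):
    # One O(n^ω) eigendecomposition of the Hermitian part Z + Z* gives:
    #   U    -- the projection φ(Z) onto U(n),
    #   lam  -- the eigenvalues Λ,
    #   gaps -- neighboring eigengaps, which determine the 1/(λ_i - λ_j)
    #           terms in the reverse diffusion's covariance.
    lam, U = np.linalg.eigh(Z + Z.conj().T)
    return U, lam, np.diff(lam)

rng = np.random.default_rng(1)
n = 6
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, lam, gaps = spectral_data(Z)
```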
For example, the forward diffusion U(t) ∈ SO(n) (or U(t) ∈ U(n)) is governed by the following system of SDEs:

du_i(t) = Σ_{j∈[n]∖{i}} ( α_ij(t) dB_ij u_j(t) − ½ β_ij(t) u_i(t) dt ),    (5)

where α_ij(t) := E[ 1/(λ_i(t) − λ_j(t)) | φ(Z_t) = U(t) ] and β_ij(t) := E[ 1/(λ_i(t) − λ_j(t))² | φ(Z_t) = U(t) ] for all i, j ∈ [n]. To train a model for this covariance term with sublinear runtime, we exploit the symmetries of the underlying group. These symmetries ensure the covariance term in (5) is fully determined by the n² scalar terms α_ij(t), i, j ∈ [n], and the n×n matrix U. Thus, it suffices to train a model A(U,t) ∈ R^{n×n} for these n² terms by minimizing the objective ∥A(U,t) − A∥²_F, where A is the n×n matrix with entries A_ij = 1/(λ_i(t) − λ_j(t)), and λ_i(t) is the i-th diagonal entry of Λ ≡ Λ(t). A similar method applies to efficiently train the covariance term for the sphere (see Appendix C).

Sampling algorithm. To (approximately) sample from π, we approximate the drift and covariance terms of the reverse diffusion (2) via trained models f̂, ĝ obtained by solving (4) (in practice, f̂, ĝ are neural networks f̂_θ, ĝ_ϕ, with θ, ϕ outputs of Algorithm 1). We initialize this SDE at φ(N(0, I_d)), the pushforward of N(0, I_d) onto M with respect to φ:

dŶ_t = f̂(Ŷ_t, t)dt + ĝ(Ŷ_t, t)dB_t,  Ŷ_0 ∼ φ(N(0, I_d)).    (6)

To generate samples, we numerically simulate the SDE (6) for Ŷ_T by discretizing it with a small time step ∆ > 0:

ŷ_{i+1} = exp( ŷ_i, f̂(ŷ_i, t)∆ + ĝ(ŷ_i, t)√∆ ξ_i ),  i ∈ {0, ..., T/∆},    (7)

initialized at ŷ_0 ∼ φ(N(0, I_d)).

Algorithm 1: Training algorithm

Require: Oracle for a "projection" φ : R^d → M, and its gradient
Require: Oracle for a map ψ : M → R^d s.t. φ(ψ(x)) = x for all x ∈ M
Require: Dataset D = {x¹_0, ..., x^m_0} ⊆ M; hyperparameter T > 0
Require: Models f_θ̂ : M × [0,T] → TM and g_ϕ̂ : M × [0,T] → TM × TM, where θ̂ ∈ R^{a1}, ϕ̂ ∈ R^{a2} denote trainable parameters
Require: Initial parameters θ_0 ∈ R^{a1}, ϕ_0 ∈ R^{a2}
Require: Hyperparameters: number of stochastic gradient descent iterations r ∈ N.
Step size η > 0, batch size b
1: Define, for all θ̂ ∈ R^{a1}, ẑ ∈ R^d, b ∈ M, x̂ ∈ M, t̂ ∈ [0,T], the objective function F(θ̂; b, ẑ, x̂, t̂) := ∥ (∇φ(ẑ))^⊤ (ẑ − ψ(b)e^{−(T−t̂)/2})/(e^{−(T−t̂)} − 1) + ½ tr(∇²φ(ẑ)) − f_θ̂(x̂, t̂) ∥²
2: Define, for all ϕ̂ ∈ R^{a2}, ẑ ∈ R^d, b ∈ M, x̂ ∈ M, t̂ ∈ [0,T], the objective G(ϕ̂; b, ẑ, x̂, t̂) := ∥ J_φ(ẑ)^⊤ J_φ(ẑ) − (g_ϕ̂(x̂, t̂))² ∥²_F
3: Set θ ← θ_0, ϕ ← ϕ_0
4: for i = 1, ..., r do
5:   Sample a random batch S ⊆ [m] of size b
6:   Sample t ∼ Unif([0,T])
7:   for j ∈ S do
8:     Sample ξ ∼ N(0, I_d)
9:     Set z_j ← ψ(x^j_0) e^{−(T−t)/2} + √(1 − e^{−(T−t)}) ξ
10:    Set x_j ← φ(z_j)
11:  end for
12:  Compute Γ ← (1/b) Σ_{j∈S} ∇_θ F(θ; x^j_0, z_j, x_j, t)
13:  θ ← θ − ηΓ
14:  Compute Υ ← (1/b) Σ_{j∈S} ∇_ϕ G(ϕ; x^j_0, z_j, x_j, t)
15:  ϕ ← ϕ − ηΥ
16: end for
17: output: Parameters θ, ϕ for the models f_θ and g_ϕ

Algorithm 2: Sampling algorithm

Require:
An oracle which returns the value of the exponential map exp(x, v) on the manifold M, for any x ∈ M, v ∈ T_xM
Require: An oracle for the "projection" map φ : R^d → M
Require: Models f_θ̂ : M × [0,T] → TM and g_ϕ̂ : M × [0,T] → TM × TM, where θ̂ ∈ R^{a1}, ϕ̂ ∈ R^{a2} denote trainable parameters
Require: Parameters θ, ϕ (from the output of Algorithm 1)
Require: T > 0, N ∈ N, ∆ > 0 such that T/∆ ∈ N
1: Sample z_0 ∼ N(0, I_d), and set ŷ_0 ← φ(z_0)
2: for i = 0, 1, ..., T/∆ − 1 do
3:   Sample ξ_i ∼ N(0, I_d)
4:   Set ŷ_{i+1} ← exp( ŷ_i, f̂(ŷ_i, i∆)∆ + ĝ(ŷ_i, i∆)√∆ ξ_i )
5: end for
6: output: ŷ_{T/∆}

4 Overview of proof of sampling guarantees (Theorem 2.2)

In this section, we first present the main highlights and novel techniques used to establish our sampling guarantees. We then provide a detailed outline of the proof, explaining how these techniques come together to yield our sampling guarantees (Section 4.1).

Girsanov transformations, used in prior works to bound accuracy, do not apply to our diffusion due to its spatially varying covariance. We instead adopt an optimal transport approach, selecting an optimal coupling between the "ideal" diffusion Y_t, governed by the SDE dY_t = f⋆(Y_t,t)dt + g⋆(Y_t,t)dB_t, and the diffusion Ŷ_t in which f⋆, g⋆ are replaced by our trained models f̂, ĝ, which approximate them within an error ε. We first construct a simple coupling between Y_t and Ŷ_t by setting the underlying Brownian motion B_t in their SDEs equal. Applying comparison theorems for manifolds of non-negative curvature to the coupled SDEs, we prove a generalization of Gronwall's inequality to SDEs on manifolds (Lemma 6.3):

W_2(Ŷ_t, Y_t) ≤ (ρ²(Ŷ_0, Y_0) + ε)e^{ct},    (8)

where ρ is the geodesic distance on M and W_2 the Wasserstein distance. (8) holds if f⋆, g⋆ are c-Lipschitz on all of M.

Showing "average-case" Lipschitzness. φ is not in general Lipschitz. For example, on SO(n) and U(n), φ(Z) has singularities at points where the eigengaps of Z + Z* vanish. Using tools from random matrix theory, we instead show that φ satisfies an "average-case" Lipschitzness on a set Ω_t in which the diffusion Z_t ∈ R^d remains w.h.p. (Lemma 6.4).
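For the matrix groups, membership in the "good set" Ω_t reduces to a check on eigenvalue gaps. A sketch (the threshold is illustrative, standing in for the α- and t-dependent cutoff of Lemma 6.4):

```python
import numpy as np

def in_good_set(Z, gap_threshold):
    # Average-case Lipschitz region for U(n)/SO(n): every neighboring eigengap
    # of the Hermitian part must exceed the threshold, so the derivatives of φ
    # (which blow up like inverse eigengaps) stay polynomially bounded.
    lam = np.linalg.eigvalsh(Z + Z.conj().T)
    return bool(np.min(np.diff(lam)) > gap_threshold)

rng = np.random.default_rng(2)
n = 8
Z = rng.standard_normal((n, n))
ok = in_good_set(Z, gap_threshold=1e-8)  # holds w.h.p. for Gaussian matrices
```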
Next, we show that for f⋆, g⋆ to be c-Lipschitz everywhere on M, it suffices for φ to satisfy this average-case Lipschitzness. To do this, we express f⋆ (and g⋆) as an integral over the eigenvalues Λ of Z_t + Z*_t = UΛU*:

f⋆(U, t) ∝ ∫ [ ∇φ(Z)^⊤ ∇log q_{T−t|0}(Z) + ··· ] 1_{Ω_t}(Z) dΛ.

Due to the manifold's symmetries, Ω_t (and the entire integrand) depends only on Λ, not on the eigenvectors U ∈ U(n). This allows us to show that the integral "smooths out" the singularities of φ, and that f⋆(U,t) (and g⋆) are poly(d)-Lipschitz at every U ∈ U(n) on the manifold (Lemma 6.6).

Improved coupling to obtain poly(d) bounds. While we have shown our model's SDE is c = poly(d)-Lipschitz on M, after times τ > 1/c = 1/poly(d), our Wasserstein bound (8) grows exponentially with d. To overcome this, we define a new coupling between Y_t and Ŷ_t which we "reset" after time intervals of length τ = 1/c by converting our Wasserstein bound into a total variation (TV) bound after each interval. Key to converting the bound is to show that, w.h.p., the projection φ has a poly(d)-Lipschitz Jacobian everywhere in a ball of radius 1/poly(d) around our diffusion. By alternating between Wasserstein and TV bounds, we get error bounds which grow proportionally to T/τ = poly(d) (Lemma 6.7).

Handling instability on SO(n). These proof ideas extend easily to the torus and sphere. For SO(n), an additional challenge arises: w.h.p., gaps between neighboring eigenvalues become exponentially small in d over short time intervals, due to the weaker "electrical repulsion" between eigenvalues of real-valued random matrices. During these intervals, the diffusion moves at exp(d) velocity. Despite this, we show that a step size of 1/poly(d, 1/δ) suffices to simulate a random solution to the SDE with a distribution δ-close to the correct one. During these intervals, interactions between eigenvectors nearly separate into slow-moving eigenvectors and pairs of fast-moving eigenvectors, with
a simple transition kernel given by the invariant measure on SO(2) (see Section 6.7).

4.1 Proof outline of Theorem 2.2

In the following, for any random variable X we denote its probability distribution by L_X. As mentioned above, previous works use Girsanov's theorem to bound the accuracy of diffusion methods. However, Girsanov transformations do not exist for our diffusion, as it has a non-constant covariance term which varies with the position x. Thus, we depart from previous works and instead use an optimal transport approach based on a carefully chosen coupling between the "ideal" diffusion Y_t and the algorithm's process ŷ_t. Specifically, denoting by µ_t the distribution of Y_t and by ν_t the distribution of Ŷ_t, the goal is to bound the Wasserstein optimal transport distance W_2(µ_t, ν_t) := inf_{κ∈K(µ_t,ν_t)} E_{(Y_t,Ŷ_t)∼κ}[ρ²(Ŷ_t, Y_t)], where K(µ, ν) is the collection of all couplings of the distributions µ and ν, and ρ is the geodesic distance on M. Towards this end, we would like to find a coupling κ which (approximately) minimizes E[ρ²(Ŷ_t, Y_t)] at any given time t. As a first attempt, we consider the simple coupling between the "ideal" reverse diffusion Y_t,

dY_t = f⋆(Y_t, t)dt + g⋆(Y_t, t)dB_t,    (9)

and the reverse diffusion Ŷ_t given by our trained models f̂, ĝ,

dŶ_t = f̂(Ŷ_t, t)dt + ĝ(Ŷ_t, t)dB_t.    (10)

To couple these two diffusions, we set their Brownian motion terms dB_t equal to each other at every time t. In a similar manner, we can also couple Ŷ_t and the discrete-time algorithm ŷ_i by setting the Gaussian term ξ_i in the stochastic finite-difference equation (7) equal to ξ_i = (1/√∆) ∫_{∆i}^{∆(i+1)} dB_t for every i.

Step 1: Bounding the Wasserstein distance for everywhere-Lipschitz SDEs. To bound the Wasserstein distance W_2(Y_t, ŷ_t) ≤ W_2(Y_t, Ŷ_t) + W_2(Ŷ_t, ŷ_t), we first prove a generalization of Gronwall's inequality to stochastic differential equations on manifolds (Lemma 6.3).
Gronwall's inequality [15] says that if R : [0,T] → R satisfies the differential inequality (d/dt)R(t) ≤ β(t)R(t) for all t > 0, where the coefficient β : [0,T] → R may also be a function of t, then R(t) ≤ R(0) e^{∫₀ᵗ β(s)ds}. Towards this end, we first couple Y_t and Ŷ_t by setting their Brownian motion terms dB_t equal to each other, and then derive an SDE for the squared geodesic distance ρ²(Ŷ_t, Y_t) using Itô's lemma. Taking the expectation of this SDE gives an ODE for E[ρ²(X̂_t, X_t)]:

dE[ρ²(X̂_t, X_t)] = E[ ∇ρ²(X̂_t, X_t)^⊤ ( f⋆(X_t,t) ; f̂(X̂_t,t) ) + ½ Tr( ( g⋆(X_t,t), 0 ; ĝ(X̂_t,t), 0 )^⊤ ∇²ρ²(X̂_t, X_t) ( g⋆(X_t,t), 0 ; ĝ(X̂_t,t), 0 ) ) ] dt,    (11)

where ( a ; b ) denotes vertical stacking of blocks. To bound each term on the r.h.s., we first observe that, roughly speaking, due to the non-negative curvature of the manifold, by the Rauch comparison theorem [30] each derivative on the r.h.s. is no larger than in the Euclidean case M = R^d, where ρ²(X̂_t, X_t) = ∥X̂_t − X_t∥²₂. Hence we have

| ∇ρ²(X̂_t, X_t)^⊤ ( f⋆(X_t,t) ; f̂(X̂_t,t) ) | ≤ 2∥X̂_t − X_t∥ · ∥f⋆(X_t,t) − f̂(X̂_t,t)∥ ≤ 2∥X̂_t − X_t∥ ( c∥X̂_t − X_t∥ + ε ),

as long as we can show that f⋆ is c-Lipschitz for some c > 0 (see Step 2 below). Bounding the covariance term in a similar manner, and applying Gronwall's lemma to the resulting differential inequality, we get

W_2(Ŷ_t, Y_t) ≤ E[ρ²(Ŷ_t, Y_t)] ≤ (ρ²(Ŷ_0, Y_0) + ε)e^{ct}.    (12)

Step 2: Showing that our diffusion satisfies an "average-case" Lipschitz condition. To apply (12), we must first show that the drift and diffusion terms f⋆ and g⋆ are Lipschitz on M. Towards this end, we would ideally like to apply bounds on the derivatives of the projection map φ : R^d → M which defines our diffusion Y_t. Unfortunately, in general, φ may not be differentiable at
every point. This is the case for the sphere, where the map φ(z) = z/∥z∥ has a singularity at z = 0. The same issue arises for the unitary and orthogonal groups, since the derivative of the spectral decomposition φ(z) = U*ΛU has singularities at any matrix z with a vanishing eigenvalue gap λ_i − λ_{i+1} = 0.

To tackle this challenge, we show that, for the aforementioned symmetric manifolds, the forward diffusion Z_t in R^d remains in some set Ω_t ⊆ R^d with high probability 1 − α, on which the map φ(Z_t) has derivatives bounded by poly(d) (Assumption 2.1 and Lemma 6.4). We then show how to "remove" the rare outcomes of our diffusion that do not fall inside Ω_t. As our forward diffusion X_t (and thus the reverse diffusion Y_t = X_{T−t}) remains inside Ω_t at every t with probability ≥ 1 − α, removing these "bad" outcomes adds a cost of only α to the total variation error.

Showing that φ has poly(d) derivatives w.h.p. (showing that Assumption 2.1 holds). We first consider the sphere, which is the simplest case aside from the trivial case of the torus, where the derivatives of φ are all O(1) at every point. When the data lies on the sphere, which we embed as the unit sphere in R^d, one easily observes that, e.g., ∥∇φ(z)∥ ≤ O(1) for any z outside a ball of radius r ≥ Ω(1) centered at the origin. As the volume of a ball of radius r scales as r^d, one can use standard Gaussian concentration inequalities to show that the diffusion Z_t remains outside this ball up to time T with probability roughly 1 − O(r^d T).

We next show that the Lipschitz property holds for the unitary group U(n). We first recall results from random matrix theory which bound the eigenvalue gaps of a matrix with Gaussian entries. Roughly speaking, these results say that if X_0 is any matrix and X_t = X_0 + B(t), where B(t) is a symmetric matrix with i.i.d. N(0,t) entries undergoing Brownian motion, then the eigenvalues γ_1(t) ≥ ··· ≥ γ_n(t) of X_t satisfy, for all s ≥ 0 (see e.g.
[2, 24, 25]):

P( ⋂_{t∈[t_0,T]} { γ_{i+1}(t) − γ_i(t) ≤ s · (1/poly(n)) √t } ) ≤ O(√s).    (13)

Thus, if we define Ω_t to be the set of outcomes such that γ_{i+1}(t) − γ_i(t) ≥ α² (1/poly(n)) √t, we have P(X_t ∈ Ω_t ∀t ∈ [t_0,T]) ≥ 1 − α.

Our high-probability bound on Ω_t allows us to show that φ satisfies a Lipschitz property at "most" points, namely those in Ω_t. However, to apply (12) we need the drift term f⋆ and covariance term g⋆ of our diffusion to satisfy a Lipschitz property at every point in R^d. Towards this end, we first make a small modification to the objective function which allows us to exclude outcomes {X_t}_{t∈[0,T]} of the forward diffusion such that X_t ∉ Ω_t for some t ∈ [0,T]. Specifically, we multiply the objective function (4) by the indicator function 1_{Ω_t}(z). As determining whether a point z ∈ Ω_t requires only checking the eigenvalue gaps (when M is the unitary or orthogonal group), 1_{Ω_t}(z) can be computed efficiently using the singular value decomposition.

Bounding the Lipschitz constant of f⋆ and g⋆. Recall that (when, e.g., M is one of the aforementioned symmetric manifolds) we may decompose any z ∈ R^d as z ≡ z(U,Λ), where U ∈ M. Note that 1_{Ω_t}(z) is not a continuous function of z. However, we will show that, as 1_{Ω_t}(z(U,Λ)) depends only on Λ, multiplying our objective function by 1_{Ω_t} does not make f⋆ and g⋆ discontinuous (and thus does not prevent them from being Lipschitz).
This is because f⋆ and g⋆ are given by conditional expectations conditioned on U, and can thus be decomposed as integrals over Λ. Towards this end, we express f⋆ as an integral over the parameter Λ:

f⋆(U, t) = c_U ∫_{Λ∈A} [ ∇φ(z(U,Λ))^⊤ ∇log q_{T−t|0}(z(U,Λ)) + ½ tr ∇²φ(z(U,Λ)) ] q_{T−t}(z(U,Λ)) 1_{Ω_t}(Λ) dΛ,

where c_U is a normalizing constant. Differentiating with respect to U,

(d/dU) f⋆(U, t) = E_{z(U,Λ)∼q_{T−t}}[ (d/dU)( ∇φ(z(U,Λ))^⊤ ∇_U log q_{T−t|0}(z(U,Λ)) + ½ tr(∇²φ(z(U,Λ))) ) 1_{Ω_t}(Λ) ] + ···,    (14)

where "···" comprises three other similar terms. To bound the terms on the r.h.s. of (14), we apply Assumption 2.1, which says that the operator norms of ∇φ, ∇²φ, (d/dU)∇φ, and (d/dU)∇²φ are all bounded above by poly(d) whenever z ∈ Ω_t. To bound the term ∇_U log q_{T−t|0}(z(U,Λ)), we note that ∇log q_{T−t|0}(z(U,Λ)) is the drift term of the reverse diffusion in Euclidean space. This term was previously shown to be dC²-Lipschitz for all t ≥ Ω(1/d) whenever the support of the data distribution in R^d lies in a ball of radius C (see, e.g., Proposition 20 of [8]). Plugging these bounds into (14), we get ∥(d/dU)f⋆(U,t)∥_{2→2} ≤ poly(d). A similar calculation shows that ∥(d/dU)g⋆(U,t)∥_{2→2} ≤ poly(d). This immediately implies that f⋆(U,t) and g⋆(U,t) are poly(d)-Lipschitz at every U ∈ M.

Step 3: Improving the coupling to obtain polynomial-time bounds. Now that we have shown that f⋆ and g⋆ are poly(d)-Lipschitz, we can apply (12) to bound the Wasserstein distance: W_2(Ŷ_{t+τ}, Y_{t+τ}) ≤ (ρ²(Ŷ_t, Y_t) + ε)e^{cτ} for all τ ≥ 0, where c ≤ poly(d). Moreover, with a slight abuse of notation, we may define ŷ_{t+τ} to be a continuous-time interpolation of the discrete process ŷ. Applying (12) to this process, we get, roughly, W_2(Ŷ_{t+τ}, ŷ_{t+τ}) ≤ (ρ²(ŷ_t, Y_t) + ε + ∆)e^{cτ} for τ ≥ 0.
Thus, we get a bound on the Wasserstein error,

W_2(Y_{t+τ}, ŷ_{t+τ}) ≤ W_2(Ŷ_{t+τ}, Y_{t+τ}) + W_2(Ŷ_{t+τ}, ŷ_{t+τ}) ≤ (ρ²(ŷ_t, Y_t) + ε + ∆)e^{cτ},  τ ≥ 0.    (15)

Unfortunately, after times τ > 1/c = 1/poly(d), this bound grows exponentially with the dimension d. To overcome this challenge, we define a new coupling between Y_t and Ŷ_t which we "reset" after time intervals of length τ = 1/c by converting our Wasserstein bound into a total variation bound after each time interval. Towards this end, we use the fact that if at any time t the total variation distance satisfies ∥L_{Y_t} − L_{ŷ_t}∥_TV ≤ α, then there exists a coupling such that Y_t = ŷ_t with probability at least 1 − α. In other words, w.p. ≥ 1 − α we have ρ(ŷ_{t+τ}, Y_{t+τ}) = 0, and we can apply inequality (15) over the next time interval of length τ without incurring exponential growth in time. Repeating this process T/τ times, we get ∥L_{Y_T} − L_{ŷ_T}∥ ≤ α × T/τ, where the TV error grows only linearly with T.

Converting Wasserstein bounds on the manifold to TV bounds. To complete the proof, we still need to show how to convert the Wasserstein bound into a TV bound (Lemma 6.7). Towards this end, we begin by showing that the transition kernel p̃_{t+τ+∆̂|t+τ}(·|H_{t+τ}) of the reverse diffusion H_t in R^d is close in KL divergence to a Gaussian:

D_KL( N( H_{t+τ} + ∆̂ ∇log p̃_{T−t−τ}(H_{t+τ}), ∆̂ I_d ) ∥ p̃_{t+τ+∆̂|t+τ}(·|H_{t+τ}) ) ≤ ατ/T.

One can show this via Girsanov's theorem since, unlike the diffusion Y_t on the manifold, the reverse diffusion H_t in Euclidean space does have a constant diffusion term (see e.g. Theorem 9 of [8]). Next, we use the fact that, with probability at least 1 − ατ/T, the map φ has a c-Lipschitz Jacobian in a ball of radius 1/poly(d) about the point H_{t+τ}, where c = poly(d), and that the inverse of the exponential map exp(·) has an O(1)-Lipschitz Jacobian, to show that the transition kernel p_t of Y_t = φ(H_t) satisfies

D_KL( ν_1 ∥ p_{t+τ+∆̂|t+τ}(·|Y_{t+τ}) ) ≤ (1 + ∆̂c)^d ατ/T ≤ 2ατ/T

if we choose ∆̂ ≤ O(1/(cd)), where ν_1 := exp_{Y_{t+τ}}( N( Y_{t+τ} + ∆̂ f⋆(Y_{t+τ}, t+τ), ∆̂ g⋆²(Y_{t+τ}, t+τ) I_d ) ). Next,
we plug our Wasserstein bound W(Y_{t+τ}, ŷ_{t+τ}) ≤ O(ε) into the formula for the KL divergence between two Gaussians to bound ∥L_{Y_{t+τ+∆̂}} − L_{ŷ_{t+τ+∆̂}}∥_TV. Specifically, noting that L_{ŷ_{t+τ+∆̂}|ŷ_{t+τ}} = exp_{ŷ_{t+τ}}( N( ŷ_{t+τ} + ∆̂ f(ŷ_{t+τ}, t+τ), ∆̂ g²(ŷ_{t+τ}, t+τ) I_d ) ), we have

D_KL( ν_1 ∥ L_{ŷ_{t+τ+∆̂}|ŷ_{t+τ}} ) = Tr( (g⋆²(Y_{t+τ}, t+τ))^{−1} g²(ŷ_{t+τ}, t+τ) ) − d + log[ det g⋆²(Y_{t+τ}, t+τ) / det g²(ŷ_{t+τ}, t+τ) ] + w^⊤ ( ∆̂ g⋆²(Y_{t+τ}, t) )^{−1} w,

where w := Y_{t+τ} − ŷ_{t+τ} + ∆̂( f⋆(Y_{t+τ}, t+τ) − f(ŷ_{t+τ}, t+τ) ). Since with probability ≥ 1 − ατ/T we have g⋆(Y_{t+τ}) ⪰ (1/poly(d)) I_d, plugging in the error bounds ∥f⋆(Y_{t+τ}, t) − f(Y_{t+τ}, t)∥ ≤ ε and ∥g⋆(Y_{t+τ}, t) − g(Y_{t+τ}, t)∥_F ≤ ε, together with the c-Lipschitz bounds on f⋆ and g⋆, where c = poly(d) (Assumption 2.1), we get D_KL(ν_1 ∥ L_{ŷ_{t+τ+∆̂}}) ≤ O(ε²c²). Thus, by Pinsker's inequality,

∥L_{Y_{t+τ+∆̂}} − L_{ŷ_{t+τ+∆̂}}∥_TV − ∥L_{Y_t} − L_{ŷ_t}∥_TV ≤ √( D_KL(ν_1 ∥ p_{t+τ+∆̂|t+τ}(·|Y_{t+τ})) ) + √( D_KL(ν_1 ∥ L_{ŷ_{t+τ+∆̂}|ŷ_t}) ) ≤ O(εc).    (16)

Step 4: Bounding the accuracy. Recall that q_t is the distribution of the forward diffusion Z_t in Euclidean space, an Ornstein-Uhlenbeck process, at time t. Standard mixing bounds for the Ornstein-Uhlenbeck process imply that ∥q_t − N(0, I_d)∥_TV ≤ O(Ce^{−t}) for all t > 0 (see e.g. [3]), where C ≤ poly(d) is the diameter of the support of ψ(π). Thus, it suffices to choose T = log(C/ε) to ensure ∥L_{Y_T} − π∥_TV = ∥q_T − N(0, I_d)∥_TV ≤ O(ε). As (16) holds for all t ∈ τN, the distribution ν = L_{ŷ_T} of our sampling algorithm's output satisfies, since τ = 1/c,

∥π − ν∥_TV ≤ ∥L_{Y_T} − π∥_TV + ∥L_{Y_T} − ν∥_TV ≤ O( ε + εcT/τ ) = O( εc² log(dC/ε) ) = Õ(ε × poly(d)).

Step 5: Bounding the runtime. Since our accuracy bound requires T = log(dC/ε) and a time-step size of ∆ = 1/(cd) ≥ 1/poly(d), the number of iterations is bounded by T/∆ = cdT ≤ O( poly(d) × log(dC/ε) ).

Step 6: Extension of sampling guarantees to the special orthogonal group. Similar techniques can be used in the case of the special orthogonal group.
However, in the case of the special orthogonal group we encounter the additional challenge that, with probability $\Omega(1)$, the gaps between neighboring eigenvalues $\gamma_{i+1}(t) - \gamma_i(t)$ may become exponentially small in $d$ over very short time intervals of length $O(\frac{1}{e^d})$. Over these intervals our diffusion moves at $\exp(d)$ velocity. Despite this, we show that a $1/\mathrm{poly}(d, \frac{1}{\delta})$ step size is sufficient to simulate a random solution to its SDE with probability distribution $\delta$-close to the correct distribution. Specifically, from the matrix calculus formula for $\varphi$ one can show that the eigenvalues and eigenvectors of the forward diffusion satisfy the following evolution equations (discovered by Dyson, and referred to as Dyson Brownian motion [13]):
$$d\gamma_i(t) = dB_{ii}(t) + \sum_{j \neq i} \frac{dt}{\gamma_i(t) - \gamma_j(t)}, \tag{17}$$
$$du_i(t) = \sum_{j \neq i} \frac{dB_{ij}(t)}{\gamma_i(t) - \gamma_j(t)} u_j(t) - \frac{1}{2} \sum_{j \neq i} \frac{dt}{(\gamma_i(t) - \gamma_j(t))^2} u_i(t), \quad \forall i \in [n]. \tag{18}$$
Roughly speaking, this implies that only the interactions in (18) between eigenvectors whose neighboring eigenvalue gaps fall below $O(\frac{1}{n^{10}})$ are significant, while interactions between eigenvectors with larger eigenvalue gaps are negligible over these short time intervals. Thus, one can analyze the evolution of the eigenvectors over these short time intervals as a collection of separable two-body problems, consisting of interactions between pairs of eigenvectors with closed-form transition kernel given by the invariant measure on SO(2). For a detailed sketch of how one can extend our proof to the case of the special orthogonal group, see Section 6.7.

5 Empirical results

We provide proof-of-concept simulations to compare the quality and efficiency of our algorithms to key prior works.

Datasets. We evaluate the quality of samples generated by our model on synthetic datasets from the torus $T^d$, the special orthogonal group SO(n), and the unitary group U(n).
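The eigenvalue dynamics in equation (17) above can be simulated directly with an Euler-Maruyama step. This is a minimal sketch (the function name is ours); it also illustrates the difficulty discussed above: the step size must be small relative to the smallest eigenvalue gap.

```python
import numpy as np

def dyson_step(gamma, dt, rng):
    """One Euler-Maruyama step of the eigenvalue SDE (17):
    d gamma_i = dB_ii + sum_{j != i} dt / (gamma_i - gamma_j).
    Assumes gamma has distinct entries and dt is small relative
    to the smallest gap."""
    diffs = gamma[:, None] - gamma[None, :]  # gamma_i - gamma_j
    np.fill_diagonal(diffs, np.inf)          # exclude the j == i terms
    drift = (dt / diffs).sum(axis=1)
    noise = rng.normal(0.0, np.sqrt(dt), size=gamma.shape)
    return gamma + drift + noise
```

The drift is repulsive: each eigenvalue is pushed away from its neighbors, so with well-separated initial eigenvalues and a small step size the ordering is typically preserved.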
The torus provides a simple, zero-curvature geometry for initial validation, while SO(n) and U(n) test the model on more complex geometries. Datasets include unimodal wrapped Gaussians on $T^d$, multimodal Gaussian mixtures on SO(n), and time-evolution operators of a quantum oscillator with random potentials
on U(n), for varying dimensions $d = n^2$. We also separately analyze per-iteration runtime to study scaling across dimensions, requiring only a single training step and limited computational resources. For the torus $T^d$, following several works [12, 37], we train diffusion models on data sampled from wrapped Gaussians on tori of different dimensions $d \in \{2, 10, 50, 100, 1000\}$, with mean $0$ and covariance $0.2 I_d$ (see Appendix A.1 for the definition of a wrapped Gaussian). For U(n), following [37], we use a dataset on U(n) of unitary matrices representing time-evolution operators $e^{itH}$ of a quantum oscillator. In our simulations, we consider a wider range of $n \in \{3, 5, 9, 12, 15\}$ (corresponding to manifold dimensions $d = \frac{n(n-1)}{2} \in \{3, 10, 36, 66, 105\}$) when evaluating sample quality, and $n \in \{3, 5, 10, 30, 50\}$ ($d \in \{3, 10, 45, 435, 1225\}$) when evaluating runtime. Here $t$ is time, $H = \frac{\hbar}{2m}\Delta - V$ is a Hamiltonian, and $\Delta$ is the Laplacian. $V$ is a random potential $V(x) = \frac{\omega^2}{2}\|x - x_0\|^2$ with frequency $\omega$ sampled uniformly on $[2, 3]$ and $x_0 \sim N(0, 1)$. As $\Delta, V$ are infinite-dimensional operators, matrices in U(n) are obtained by retaining the (discretized) top $n$ eigenvectors of $\Delta, V$. For SO(n), following [37], we use datasets sampled from a mixture of a small number $k$ of wrapped Gaussians on SO(n). We set $k = 2$ and use the same values of $n$ as on U(n).

Algorithm. We use Algorithm 1 to train our model, and Algorithm 2 to generate samples. Each algorithm takes as input a projection map $\varphi$ and restricted inverse $\psi$, with choices for $T^d$, SO(n), and U(n) detailed in Section 2. For SO(n) and U(n), we use the projection map $\hat{\varphi}$ defined in Section 2, as it suffices to generate high-quality samples in our simulations. In both algorithms, the drift function $\hat{f}(\cdot,\cdot)$ for the reverse diffusion is parameterized by a neural network.
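The torus dataset above (wrapped Gaussian, mean 0, covariance $0.2 I_d$) can be generated in a few lines; this is our own sketch, and the formal definition of a wrapped Gaussian is in Appendix A.1 of the paper.

```python
import numpy as np

def sample_wrapped_gaussian(n_samples, d, mean=0.0, var=0.2, seed=0):
    """Wrapped Gaussian on the torus T^d: sample N(mean, var * I_d)
    in R^d, then wrap each coordinate into [0, 2*pi)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(mean, np.sqrt(var), size=(n_samples, d))
    return np.mod(z, 2.0 * np.pi)
```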
Details on the network architecture, training iterations, batch size, and hardware are provided in Appendix A.2. (Our code can be found at github.com/mangoubi/Efficient-Diffusion-Models-for-Symmetric-Manifolds.) The function $\hat{g}(\cdot,\cdot)$, which models the covariance in our model's reverse diffusion SDE, vanishes on $T^d$. On SO(n) and U(n), $\hat{g}$ is an $n^2 \times n^2$ matrix. This matrix has a special structure (see Section 3), allowing it to be parameterized by $d = n^2$ numbers. We may thus parametrize $\hat{g}(x,t)$ by a neural net with inputs $x$ of dimension $d$, $t$ of dimension 1, and output dimension $d$. The network architecture is the same as for $\hat{f}(\cdot,\cdot)$.

Benchmarks. We compare samples generated by our model to those from RSGM [12], TDM [37], and a "vanilla" Euclidean diffusion model. RSGM and TDM are included as they have demonstrated improved sample quality and runtime over previous manifold generative models, such as Moser Flow [32] on $T^d$ and SO(3). TDM also outperforms RSGM and Flow Matching [7] on higher-dimensional torus datasets and shows better sample quality on SO(n) and U(n), where RSGM and RDM [18] lack experiments for $n > 3$. All models are trained with the same iterations, batch size, and architecture (see Appendix A.2). For the Euclidean diffusion model, samples are constrained to the manifold $M$ by preprocessing via $\tilde{\varphi}$ and postprocessing via $\tilde{\psi}$. For $M = T^d$, $\tilde{\varphi}(x)[i] = x[i] \bmod 2\pi$, and $\tilde{\psi}$ is its inverse on $[0, 2\pi)^d$. For SO(n) and U(n), $\tilde{\varphi}(X)$ computes the $U$ from the singular value decomposition $X = U\Sigma V^*$, and $\tilde{\psi}$ is the usual embedding. RSGM and TDM are trained using their divergence-based ISM objective, which is prohibitively slow for large dimensions. To fully train these models and evaluate their sample quality, we follow their implementation and use a stochastic estimator for the ISM divergence. We do not compare to TDM on the torus, as their specialized heat kernel objective does not generalize beyond the torus or Euclidean space. We do not compare to SCRD [23], as their implementation is limited to efficient heat kernel expansion for
SO(n) and U(n) with $n = 3$. For $n > 3$, the cost of their expansion grows exponentially with $n$, making it infeasible for higher dimensions.

Metrics. For the torus, as in [12, 37], we evaluate the quality of generated samples by computing their log-likelihood. For SO(n) and U(n), we use the Classifier Two-Sample Test (C2ST) metric [22]. The C2ST metric was previously used in e.g. [21] to evaluate sample quality of diffusion models on SO(n) for $n = 3$. It measures the ability of a classifier neural network to differentiate between generated samples and samples in a test dataset. This metric allows one to evaluate sample quality in settings where computing the likelihood may be intractable, as may be the case for diffusion models on SO(n) and U(n) for larger $n$. We use the C2ST metric on SO(n) and U(n) instead of log-likelihood, as we found that computing log-likelihood for $n > 3$ poses additional computational challenges, while C2ST can be computed efficiently. See Appendix A.3 for definitions of log-likelihood and C2ST, and additional details.

Results for generated sample quality. For the torus, we compare our model, a Euclidean diffusion model, and RSGM on a dataset sampled from a wrapped Gaussian on the $d$-dimensional torus for different values of $d$. Our model has the lowest negative log-likelihood (NLL) for $d \geq 10$, and its NLL degrades with the dimension at a much slower rate than the Euclidean model and RSGM (Table 2; lower NLL indicates better-quality sample generation). For U(n), we train our model, a Euclidean diffusion model, RSGM, and TDM on a dataset on U(n) comprised of discretized quantum oscillator time-evolution operators, for $n \in \{3, 5, 9, 12, 15\}$. For $n \geq 9$, our model achieves the lowest C2ST score; a lower C2ST score indicates higher-quality sample generation (Figure 1, top). Visually, we observe that our model's generated samples more closely resemble the target distribution than the benchmark models' for $n \geq 9$ (see Figure 1, bottom, for $n = 15$, and Appendix A.4 for other $n$).
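The C2ST metric described above can be sketched in a few lines. The paper uses a neural-network classifier; the logistic-regression classifier below is our own simplification for illustration.

```python
import numpy as np

def c2st(real, fake, epochs=300, lr=0.5, seed=0):
    """Minimal Classifier Two-Sample Test: fit a logistic-regression
    classifier to separate the two sample sets and report held-out
    accuracy. Accuracy near 0.5 means the sets are indistinguishable
    (good generated samples); near 1.0 means they are easy to tell
    apart (poor generated samples)."""
    rng = np.random.default_rng(seed)
    X = np.vstack([real, fake])
    y = np.concatenate([np.zeros(len(real)), np.ones(len(fake))])
    order = rng.permutation(len(X))
    X, y = X[order], y[order]
    n_tr = len(X) // 2
    Xtr, ytr, Xte, yte = X[:n_tr], y[:n_tr], X[n_tr:], y[n_tr:]
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):                 # gradient descent on logistic loss
        p = 1.0 / (1.0 + np.exp(-(Xtr @ w + b)))
        grad = p - ytr
        w -= lr * Xtr.T @ grad / len(ytr)
        b -= lr * grad.mean()
    return np.mean(((Xte @ w + b) > 0) == (yte == 1))
```

Two samples from the same distribution should score near 0.5, while well-separated distributions score near 1.0, matching how the scores in Figure 1 are read.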
Improvements on SO(n) were similar to those on U(n) (see Appendix A.4).

Table 2: Negative log-likelihood (NLL) when training on a wrapped Gaussian dataset on the torus of different dimensions $d$. Lower NLL indicates better-quality sample generation. Our model's NLL increases more slowly with $d$, and is lower for $d \geq 10$, than the Euclidean and RSGM models.

Method      d=2        d=10       d=50       d=100      d=1000
Euclidean   0.61±.02   0.61±.02   0.66±.04   8.18±.42   8.7±.37
RSGM        0.42±.03   0.49±.05   0.62±.06   1.45±.21   2.51±.17
Ours        0.49±.04   0.47±.03   0.50±.05   0.52±.09   0.97±.19

Results for runtime. We evaluate the per-iteration training runtime on U(n) on a wider range of $n = 3, 5, 10, 30, 50$ (corresponding to manifold dimensions of $d = \frac{n(n-1)}{2} \in \{3, 10, 45, 435, 1225\}$). Our model's per-iteration runtime remains within a factor of 3 of the Euclidean model's for all $n$. Per-iteration runtimes of TDM and RSGM, with the ISM objective, increase more rapidly with dimension and are, respectively, 45 and 57 times greater than the Euclidean model's for $n = 50$ (Table 3). Similar runtime improvements were observed on SO(n) (Table 6 in Appendix A.2).

Summary. We find that, as predicted by our theoretical training runtime bounds in Table 1, the per-iteration training runtime of our model is significantly faster, and grows more slowly with dimension, than previous manifold diffusion models on U(n) (similar improvements were observed for SO(n)). Our algorithm's runtime remains within a small constant factor of the per-iteration runtime of the Euclidean diffusion model,
at least for $n \leq 50$ (corresponding to a manifold dimension of $d \leq 1225$), nearly closing the gap with the per-iteration runtime of the Euclidean model. Moreover, we find that (except in very low dimensions) our model is capable of improving on the quality of samples generated by previous diffusion models, when trained on different synthetic datasets on the torus, SO(n), and U(n). The magnitude of the improvement increases with dimension.

Method      n=3        n=5        n=9        n=12       n=15
Euclidean   .69±.03    .75±.04    .87±.04    .97±.02    1.00±.01
RSGM        .79±.04    .92±.04    .97±.03    1.00±.02   1.00±.01
TDM         .73±.04    .91±.02    .99±.02    1.00±.01   1.00±.01
Ours        .75±.05    .80±.04    .84±.04    .88±.05    .90±.04

Figure 1: C2ST scores when training on datasets of quantum evolution operators on U(n) (top). Lower scores indicate better-quality generated samples (range is [0.5, 1]). For $n \geq 9$, our model has the best C2ST scores. Generated samples are plotted for $n = 15$ (bottom); axes are two matrix entries.

Table 3: Per-iteration training runtime in seconds on U(n). The fastest manifold-constrained diffusion model is in bold; the Euclidean model is in gray for comparison. Our model's runtime remains within a factor of 3 of the Euclidean model's for all $n$. Runtimes of TDM and RSGM increase more rapidly with dimension $d = \Theta(n^2)$ and are, respectively, 45 and 57 times greater than the Euclidean model's for $n = 50$ ($d = 1225$).

Method      n=3        n=5        n=10       n=30       n=50
Euclidean   0.19±.01   0.19±.01   0.20±.01   0.20±.01   0.21±.01
RSGM        1.03±.02   1.22±.08   1.51±.03   3.98±.19   11.55±.31
TDM         0.91±.08   1.07±.06   2.46±.09   3.77±.17   9.43±.23
Ours        0.36±.00   0.36±.00   0.36±.00   0.46±.01   0.60±.01

6 Full proof of Theorem 2.2

In the following, we denote by $\rho(x, y)$ the geodesic distance between $x, y \in M$, and by $\Gamma_{x\to y}(v)$ the parallel transport of a vector $v \in T_xM$ from $x$ to $y$. For convenience, we denote $\varphi_i(\cdot) := \varphi(\cdot)[i]$. Recall that we have assumed that $\psi(M)$ is contained in a ball of radius $C = \mathrm{poly}(d)$. We will prove our results under the more general assumption (Assumption 6.1($\psi, \pi, C$)), which is satisfied whenever $\psi(M)$ is contained in a ball of radius $C$.
Assumption 6.1 (Bounded Support ($\psi, \pi, C$)). The pushforward $\psi(\pi)$ of $\pi$ with respect to the map $\psi: M \to \mathbb{R}^d$ has support on a ball of radius $C$ centered at $0$.

6.1 Correctness of the training objective functions

Lemma 6.2. $f^\star$ and $g^\star$ are solutions to the following optimization problems:
$$\min_{f \in C(\mathbb{R}^d, \mathbb{R}^d)} \mathbb{E}_{t \sim \mathrm{Unif}([0,1])} \mathbb{E}_{b \sim \pi}\left[\left\| (\nabla\varphi(Z_{T-t}))^\top \frac{Z_{T-t} - \psi(b)e^{-\frac{1}{2}(T-t)}}{e^{-(T-t)} - 1} + \frac{1}{2}\mathrm{tr}(\nabla^2\varphi(Z_{T-t})) - f(\varphi(Z_{T-t}), t)\right\|^2 \,\middle|\, Z_0 = \psi(b)\right],$$
$$\min_{g \in C(\mathbb{R}^d, \mathbb{R}^{d \times d})} \mathbb{E}_{t \sim \mathrm{Unif}([0,1])} \mathbb{E}_{b \sim \pi}\left[\left\| (J\varphi(Z_{T-t}))^\top J\varphi(Z_{T-t}) - (g(\varphi(Z_{T-t}), t))^2 \right\|_F^2 \,\middle|\, Z_0 = \psi(b)\right],$$
where $J\varphi$ denotes the Jacobian of $\varphi$.

Proof. Step 1: Obtaining an expression for the reverse diffusion SDE in $\mathbb{R}^d$. We cannot in general directly apply (1) to obtain a tractable expression for the SDE of the reverse diffusion $Y_t$ in $M$, since we do not have a tractable formula for the transition kernel $p_t$ of the forward diffusion $X_t$ on $M$. Instead, we will first obtain an SDE for the reverse diffusion of $Z_t$ in $\mathbb{R}^d$, and then "project" this SDE onto $M$. Let $H_t := Z_{T-t}$ denote the time-reversed diffusion of $Z_t$; $H_t$ is a diffusion in $\mathbb{R}^d$. From (1), we have that the SDE for the reverse diffusion $H_t$ on $\mathbb{R}^d$ is given by the following formula:
$$dH_t = \left(\frac{1}{2}H_t + 2\nabla \log q_{T-t}(H_t)\right)dt + dW_t, \tag{19}$$
where $W_t$ is a standard Brownian motion on $\mathbb{R}^d$. Equation (19) can be re-written as
$$dH_t = \left(\frac{1}{2}H_t + 2\,\mathbb{E}_{b \sim q_{0|t}(\cdot|H_t)}[\nabla \log q_{T-t|0}(H_t|b)]\right)dt + dW_t. \tag{20}$$
The r.h.s. of (20) is tractable since we have a tractable expression for the transition kernel $q_{T-t|0}$ (it is just a time re-scaling of a Gaussian kernel, the transition kernel of Brownian motion).
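For intuition, on the torus the objective in Lemma 6.2 simplifies: there $\nabla\varphi = I_d$ and $\mathrm{tr}(\nabla^2\varphi) = 0$, so the regression target for $f$ reduces to the first fraction, which is the conditional score of the Ornstein-Uhlenbeck transition kernel. This sketch is our own specialization, not the paper's code.

```python
import numpy as np

def ou_forward(psi_b, s, rng):
    """Ornstein-Uhlenbeck forward noising in R^d at time s, started
    at psi_b: Z_s ~ N(psi_b * e^{-s/2}, (1 - e^{-s}) * I_d)."""
    mean = psi_b * np.exp(-0.5 * s)
    std = np.sqrt(1.0 - np.exp(-s))
    return mean + std * rng.normal(size=psi_b.shape)

def drift_target_torus(z, psi_b, s):
    """Regression target for f on the torus (grad phi = I, tr hess phi = 0):
    (Z_s - psi(b) e^{-s/2}) / (e^{-s} - 1), with s = T - t.  This equals the
    conditional score grad log q_{s|0}(z | b) of the OU kernel."""
    return (z - psi_b * np.exp(-0.5 * s)) / (np.exp(-s) - 1.0)
```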
Step 2: Obtaining an expression for the reverse diffusion SDE in $M$. Note that there exists a coupling between $Z_t$ and $H_t$ such that $H_t = Z_{T-t}$ and $Y_t = X_{T-t}$ for all $t \in [0, T]$. Thus, under
this choice of coupling, we have that $Y_t = X_{T-t} = \varphi(Z_{T-t}) = \varphi(H_t)$ for all $t \in [0, T]$. In the special case when there is only one datapoint $x_0$, the SDE for the reverse diffusion $Y_t$ on $M$ can be obtained by applying Itô's lemma (Lemma 3.1) to $Y_t = \varphi(H_t)$:
$$dY_t[i] = \nabla\varphi_i(H_t)^\top dH_t + \frac{1}{2}(dH_t)^\top (\nabla^2 \varphi_i(H_t))\, dH_t \quad \forall i \in [d]. \tag{21}$$
In the following, to simplify notation, we drop the "$i$" index from the notation $\varphi_i$ and $dY_t[i]$. Unfortunately, the r.h.s. of (21) is not a (deterministic) function of $Y_t = \varphi(H_t)$, since $\varphi$ is not an invertible map. To solve this problem, we can take the conditional expectation of (21) with respect to $Y_t = \varphi(H_t)$:
$$dY_t = \mathbb{E}[dY_t \,|\, Y_t] = \mathbb{E}[dY_t \,|\, \varphi(H_t)] = \mathbb{E}\left[\nabla\varphi(H_t)^\top dH_t + \frac{1}{2}(dH_t)^\top (\nabla^2\varphi(H_t))\, dH_t \,\middle|\, \varphi(H_t)\right]. \tag{22}$$
The drift term on the r.h.s. of (22) is a deterministic function of $Y_t$. Denote this function by $f^\star: M \times [0, T] \to TM$, with, for any input $x \in M$, output in the tangent space $T_xM$ of $M$ at $x$. Moreover, by (1), the covariance term on the r.h.s. of (22) must be the same as the covariance term of the forward diffusion on $M$. This covariance term can be obtained from the covariance term $dW_t$ on $\mathbb{R}^d$ via Itô's lemma, implying that the covariance term is $\mathbb{E}[\nabla\varphi(H_t)[i]^\top dW_t \,|\, \varphi(H_t)]$, $i \in [d]$. Using the notation for the Jacobian, this covariance term can be written more concisely as $\mathbb{E}[J\varphi(H_t)^\top dW_t \,|\, \varphi(H_t)]$. Thus, the diffusion term is also a deterministic function $g^\star$ of $Y_t = \varphi(H_t)$, where $g^\star(Y_t)$ is a symmetric $d \times d$ matrix:
$$\mathbb{E}[J\varphi(H_t)\, dW_t \,|\, \varphi(H_t)] = g^\star(Y_t, t)\, d\tilde{W}_t, \tag{23}$$
where $\tilde{W}_t$ is a standard Brownian motion on $M$. Since $dW_t$ is the derivative of a standard Brownian motion in $\mathbb{R}^d$, and $d\tilde{W}_t$ is the derivative of a standard Brownian motion on the tangent space of $M$, we have that
$$\mathbb{E}[J\varphi(H_t)^\top J\varphi(H_t) \,|\, \varphi(H_t)] = (g^\star(Y_t, t))^2.$$
(24)

Thus, (22) can be expressed as:
$$dY_t = \mathbb{E}\left[\nabla\varphi(H_t)^\top dH_t + \frac{1}{2}(dH_t)^\top(\nabla^2\varphi(H_t))\, dH_t \,\middle|\, \varphi(H_t)\right] = f^\star(Y_t,t)\,dt + g^\star(Y_t,t)\,d\tilde{W}_t. \tag{25}$$
In the more general setting when there is more than one datapoint, (25) generalizes to:
$$dY_t = \mathbb{E}_{b\sim\pi}\left[\mathbb{E}\left[\nabla\varphi(H_t)^\top dH_t + \frac{1}{2}(dH_t)^\top(\nabla^2\varphi(H_t))\, dH_t \,\middle|\, \varphi(H_t), H_T = b\right]\right] \tag{26}$$
$$= f^\star(Y_t,t)\,dt + g^\star(Y_t,t)\,d\tilde{W}_t. \tag{27}$$
Since $Y_t = \varphi(H_t)$, we can bring $f^\star(Y_t,t)\,dt$ and $g^\star(Y_t,t)\,d\tilde{W}_t$ inside the conditional expectation:
$$\mathbb{E}_{b\sim\pi}\left[\mathbb{E}\left[\nabla\varphi(H_t)^\top dH_t + \frac{1}{2}(dH_t)^\top(\nabla^2\varphi(H_t))\, dH_t - f^\star(Y_t,t)\,dt \,\middle|\, \varphi(H_t), H_T=b\right]\right] = g^\star(Y_t,t)\,d\tilde{W}_t.$$
We can re-write this as
$$\mathbb{E}_{b\sim\pi}\left[\mathbb{E}_{\varphi(H_t)}\left[\mathbb{E}_{H_t|\varphi(H_t)}\left[\nabla\varphi(H_t)^\top dH_t + \frac{1}{2}(dH_t)^\top(\nabla^2\varphi(H_t))\, dH_t - f^\star(Y_t,t)\,dt \,\middle|\, H_t, H_T=b\right]\right]\right] = g^\star(Y_t,t)\,d\tilde{W}_t.$$
This simplifies to
$$\mathbb{E}_{b\sim\pi}\left[\nabla\varphi(H_t)^\top dH_t + \frac{1}{2}(dH_t)^\top(\nabla^2\varphi(H_t))\, dH_t - f^\star(Y_t,t)\,dt \,\middle|\, H_T=b\right] = g^\star(Y_t,t)\,d\tilde{W}_t, \tag{28}$$
where the expectation is taken over the outcomes of $H_t$. Plugging (20) into (28), and separating the drift and the diffusion terms on both sides of the equation (and noting that the higher-order differentials $(dt)^2$ and $dW_t\,dt$ vanish), we get that the drift terms satisfy
$$\mathbb{E}_{b\sim\pi}\left[(\nabla\varphi(H_t))^\top\left(\tfrac{1}{2}H_t + 2\nabla\log q_{T-t|0}(H_t|b)\right)dt + \frac{1}{2}(dW_t)^\top(\nabla^2\varphi(H_t))\, dW_t - f^\star(Y_t,t)\,dt \,\middle|\, H_T=b\right] = 0. \tag{29}$$
Noting that $(dW_t[i])^2 = dt$ and $dW_t[i]\,dW_t[j] = 0$ for all $i \neq j$, we get
$$\mathbb{E}_{b\sim\pi}\left[(\nabla\varphi(H_t))^\top\left(\tfrac{1}{2}H_t + 2\nabla\log q_{T-t|0}(H_t|b)\right)dt + \frac{1}{2}\mathrm{tr}(\nabla^2\varphi(H_t))\,dt - f^\star(Y_t,t)\,dt \,\middle|\, H_T=b\right] = 0.$$
(30)

Dividing both sides by $dt$, we get an expression for the drift term $f^\star$:
$$\mathbb{E}_{b\sim\pi}\left[(\nabla\varphi(H_t))^\top\left(\tfrac{1}{2}H_t + 2\nabla\log q_{T-t|0}(H_t|b)\right) + \frac{1}{2}\mathrm{tr}(\nabla^2\varphi(H_t)) - f^\star(Y_t,t) \,\middle|\, H_T=b\right] = 0. \tag{31}$$
Finally, from (24), we have that the diffusion term $g^\star$ satisfies
$$\mathbb{E}_{b\sim\pi}\left[\mathbb{E}\left[J\varphi(H_t)^\top J\varphi(H_t) - (g^\star(Y_t,t))^2 \,\middle|\, \varphi(H_t)\right] \,\middle|\, H_T=b\right] = 0. \tag{32}$$

Step 3: Training the drift term. From (31), we have that the function $f^\star$ is the solution to the following optimization problem:
$$\min_f \mathbb{E}_{t\sim\mathrm{Unif}([0,1])}\mathbb{E}_{b\sim\pi}\left[\left\|(\nabla\varphi(H_t))^\top\left(\tfrac{1}{2}H_t + 2\nabla\log q_{T-t|0}(H_t|b)\right) + \frac{1}{2}\mathrm{tr}(\nabla^2\varphi(H_t)) - f(Y_t,t)\right\|^2 \,\middle|\, H_T=b\right], \tag{33}$$
where the inner expectation is taken over the outcomes of $H_t$ at time $t$ conditioned on $H_T=b$ (note that $Y_t = \varphi(H_t)$ is a deterministic function of $H_t$). Now, $H_t|\{H_T=b\}$ has the same probability distribution as $Z_{T-t}|\{Z_0=b\}$ (and $Y_t|\{H_T=b\}$ has the same probability distribution as $X_{T-t}|\{Z_0=b\}$). Thus, we can re-write (33) as
$$\min_f \mathbb{E}_{t\sim\mathrm{Unif}([0,1])}\mathbb{E}_{b\sim\pi}\left[\left\|(\nabla\varphi(Z_{T-t}))^\top\left(\tfrac{1}{2}Z_{T-t} + 2\nabla\log q_{T-t|0}(Z_{T-t}|b)\right) + \frac{1}{2}\mathrm{tr}(\nabla^2\varphi(Z_{T-t})) - f(X_{T-t},t)\right\|^2 \,\middle|\, Z_0=b\right]. \tag{34}$$

Step 4: Training the diffusion term. From (32) we have that $g^\star$ is the
solution to the following optimization problem:
$$\min_g \mathbb{E}_{t\sim\mathrm{Unif}([0,1])}\mathbb{E}_{b\sim\pi}\left[\left\|J\varphi(H_t)^\top J\varphi(H_t) - (g(Y_t,t))^2\right\|_F^2 \,\middle|\, H_T=b\right],$$
where $\|\cdot\|_F$ is the Frobenius norm. Since $H_t|\{H_T=b\}$ has the same probability distribution as $Z_{T-t}|\{Z_0=b\}$ (and $Y_t|\{H_T=b\}$ has the same probability distribution as $X_{T-t}|\{Z_0=b\}$), we can re-write this as
$$\min_g \mathbb{E}_{t\sim\mathrm{Unif}([0,1])}\mathbb{E}_{b\sim\pi}\left[\left\|J\varphi(Z_{T-t})^\top J\varphi(Z_{T-t}) - (g(X_{T-t},t))^2\right\|_F^2 \,\middle|\, Z_0=b\right].$$

6.2 Proof of Lemma 6.3

In the proof of Theorem 2.2 we will use the following lemma.

Lemma 6.3 (Gronwall-like inequality for SDEs on a manifold of non-negative curvature). Suppose that $M$ is a Riemannian manifold with non-negative curvature, and let $\rho(x,y)$ denote the geodesic distance between any $x,y \in M$. Suppose also that $X_t$ and $\hat{X}_t$ are two diffusions on $M$ such that
$$dX_t = b(X_t,t)\,dt + \sigma(X_t,t)\,dW_t, \qquad d\hat{X}_t = \hat{b}(\hat{X}_t,t)\,dt + \hat{\sigma}(\hat{X}_t,t)\,dW_t,$$
where $b$ is $C_1(t)$-Lipschitz and $\sigma$ is $C_2(t)$-Lipschitz at every time $t \in [0,T]$. Moreover, assume that
$$\|b(x,t) - \hat{b}(x,t)\| \leq \varepsilon \quad \text{and} \quad \|\sigma(x,t) - \hat{\sigma}(x,t)\|_F \leq \varepsilon$$
for all $x \in M$, $t \in [0,T]$. Then there exists a coupling between $X_t$ and $\hat{X}_t$ such that, for all $t \geq 0$,
$$\mathbb{E}[\rho^2(\hat{X}_t, X_t)] \leq \left(\mathbb{E}[\rho^2(\hat{X}_0, X_0)] + \inf_{s\in[0,t]} \frac{5\varepsilon^2}{2C_1(s) + 3C_2(s)^2 + 2}\right) e^{\int_0^t (2C_1(s) + 3C_2(s)^2 + 2)\,ds}.$$

Proof of Lemma 6.3. We first couple $X_t$ and $\hat{X}_t$ by setting their underlying Brownian motion terms $dW_t$ to be equal to each other. Next, we compute the distance $\rho^2(\hat{X}_t, X_t)$ using Itô's Lemma.
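The coupling used at the start of this proof (both diffusions driven by the same Brownian increments) can be illustrated in the Euclidean setting; the names and the one-dimensional Euclidean simplification are ours.

```python
import numpy as np

def synchronous_paths(b, b_hat, sig, sig_hat, x0, xh0, dt, n_steps, seed=0):
    """Synchronous coupling: drive both SDEs with the SAME Brownian
    increments dW, so the distance between the paths is controlled only
    by the drift/diffusion mismatch (cf. Lemma 6.3)."""
    rng = np.random.default_rng(seed)
    x, xh = np.array(x0, dtype=float), np.array(xh0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + b(x) * dt + sig(x) * dW       # Euler-Maruyama, shared noise
        xh = xh + b_hat(xh) * dt + sig_hat(xh) * dW
    return x, xh
```

With identical coefficients the two paths coincide exactly, and a drift perturbation of size $\varepsilon$ produces a path gap of order $\varepsilon$, which is the qualitative content of the lemma.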
Letting $h(x,y) := \rho^2(x,y)$, by Itô's Lemma we have that
$$d\rho^2(\hat X_t, X_t) = dh(\hat X_t, X_t) = \nabla h(\hat X_t, X_t)^\top \begin{pmatrix} b(X_t,t) \\ \hat b(\hat X_t,t)\end{pmatrix}dt + \frac{1}{2}\mathrm{Tr}\left[\begin{pmatrix}\sigma(X_t,t) & 0\\ \hat\sigma(\hat X_t,t) & 0\end{pmatrix}^{\!\top} [\nabla^2 h(\hat X_t,X_t)]\begin{pmatrix}\sigma(X_t,t) & 0\\ \hat\sigma(\hat X_t,t) & 0\end{pmatrix}\right]dt + \nabla h(\hat X_t,X_t)^\top\begin{pmatrix}\sigma(X_t,t) & 0\\ \hat\sigma(\hat X_t,t) & 0\end{pmatrix} d\begin{pmatrix}W_t\\ \hat W_t\end{pmatrix}.$$
Therefore,
$$d\,\mathbb{E}[\rho^2(\hat X_t, X_t)] = \mathbb{E}\left[\nabla h(\hat X_t,X_t)^\top\begin{pmatrix}b(X_t,t)\\ \hat b(\hat X_t,t)\end{pmatrix}\right]dt + \frac{1}{2}\mathbb{E}\,\mathrm{Tr}\left[\begin{pmatrix}\sigma(X_t,t)&0\\ \hat\sigma(\hat X_t,t)&0\end{pmatrix}^{\!\top}[\nabla^2 h(\hat X_t,X_t)]\begin{pmatrix}\sigma(X_t,t)&0\\ \hat\sigma(\hat X_t,t)&0\end{pmatrix}\right]dt + 0. \tag{35}$$
Now, since $M$ has non-negative curvature, by the Rauch comparison theorem we have
$$\left|\nabla h(\hat X_t,X_t)^\top\begin{pmatrix}b(X_t,t)\\ \hat b(\hat X_t,t)\end{pmatrix}\right| \leq 2\rho(\hat X_t,X_t)\times\|\hat b(\hat X_t,t) - \Gamma_{X_t\to\hat X_t}(b(X_t,t))\| \leq 2\rho(\hat X_t,X_t)\times\left(\|b(\hat X_t,t)-\Gamma_{X_t\to\hat X_t}(b(X_t,t))\| + \|b(\hat X_t,t)-\hat b(\hat X_t,t)\|\right) \leq 2\rho(\hat X_t,X_t)\times\big(C_1(t)\rho(\hat X_t,X_t)+\varepsilon\big), \tag{36}$$
where the last inequality holds since $b$ is $C_1(t)$-Lipschitz. Moreover, since $M$ has non-negative curvature, by the Rauch comparison theorem we also have that
$$\frac{1}{2}\mathrm{Tr}\left[\begin{pmatrix}\sigma(X_t,t)&0\\ \hat\sigma(\hat X_t,t)&0\end{pmatrix}^{\!\top}[\nabla^2 h(\hat X_t,X_t)]\begin{pmatrix}\sigma(X_t,t)&0\\ \hat\sigma(\hat X_t,t)&0\end{pmatrix}\right] \leq \left\|\hat\sigma(\hat X_t,t)-\Gamma_{X_t\to\hat X_t}(\sigma(X_t,t))\right\|_F^2 \leq \left(\left\|\sigma(\hat X_t,t)-\Gamma_{X_t\to\hat X_t}(\sigma(X_t,t))\right\|_F + \left\|\hat\sigma(\hat X_t,t)-\sigma(\hat X_t,t)\right\|_F\right)^2 \leq 3\left\|\sigma(\hat X_t,t)-\Gamma_{X_t\to\hat X_t}(\sigma(X_t,t))\right\|_F^2 + 3\left\|\hat\sigma(\hat X_t,t)-\sigma(\hat X_t,t)\right\|_F^2 \leq 3C_2(t)^2\rho^2(\hat X_t,X_t) + 3\varepsilon^2.$$
(37)

Plugging (36) and (37) into (35), we have
$$\frac{d}{dt}\mathbb{E}[\rho^2(\hat X_t,X_t)] \leq 2\mathbb{E}[C_1(t)\rho^2(\hat X_t,X_t) + \varepsilon\rho(\hat X_t,X_t)] + 3C_2(t)^2\mathbb{E}[\rho^2(\hat X_t,X_t)] + 3\varepsilon^2 \quad \forall t\geq 0. \tag{38}$$
Hence, using $2\varepsilon\rho \leq 2\rho^2 + 2\varepsilon^2$,
$$\frac{d}{dt}\mathbb{E}[\rho^2(\hat X_t,X_t)] \leq 2\mathbb{E}[C_1(t)\rho^2(\hat X_t,X_t) + \rho^2(\hat X_t,X_t)] + 3C_2(t)^2\mathbb{E}[\rho^2(\hat X_t,X_t)] + 5\varepsilon^2 = (2C_1(t) + 3C_2(t)^2 + 2)\,\mathbb{E}[\rho^2(\hat X_t,X_t)] + 5\varepsilon^2.$$
Let $\tau\in[0,T]$ be some number, and define $R(t) := \mathbb{E}[\rho^2(\hat X_t,X_t)] + \inf_{s\in[0,\tau]}\frac{5\varepsilon^2}{2C_1(s)+3C_2(s)^2+2}$ for all $t \in [0,\tau]$. Then we have
$$\frac{d}{dt}R(t) \leq (2C_1(t) + 3C_2(t)^2 + 2)R(t), \quad \forall t \geq 0. \tag{39}$$
Thus, plugging (39) into Gronwall's lemma, we have, for all $t \geq 0$,
$$R(t) \leq R(0)\, e^{\int_0^t (2C_1(s)+3C_2(s)^2+2)\,ds} = \left(\mathbb{E}[\rho^2(\hat X_0,X_0)] + \inf_{s\in[0,\tau]}\frac{5\varepsilon^2}{2C_1(s)+3C_2(s)^2+2}\right) e^{\int_0^t (2C_1(s)+3C_2(s)^2+2)\,ds}.$$
Hence, for all $t \geq 0$,
$$\mathbb{E}[\rho^2(\hat X_t,X_t)] \leq \left(\mathbb{E}[\rho^2(\hat X_0,X_0)] + \inf_{s\in[0,\tau]}\frac{5\varepsilon^2}{2C_1(s)+3C_2(s)^2+2}\right) e^{\int_0^t (2C_1(s)+3C_2(s)^2+2)\,ds}.$$
Plugging in $\tau = t$ in the above equation, we have, for all $t \geq 0$,
$$\mathbb{E}[\rho^2(\hat X_t,X_t)] \leq \left(\mathbb{E}[\rho^2(\hat X_0,X_0)] + \inf_{s\in[0,t]}\frac{5\varepsilon^2}{2C_1(s)+3C_2(s)^2+2}\right) e^{\int_0^t (2C_1(s)+3C_2(s)^2+2)\,ds}.$$

6.3 Proof that average-case Lipschitzness holds on symmetric manifolds of interest (Lemma 6.4)

Lemma 6.4 (Average-case Lipschitzness). For the unitary group, Assumption 2.1 holds with $L_1 = O(d^{1.5}\sqrt{T}\,\alpha^{-1/3})$ and $L_2 = O(d^2 T\alpha^{-2/3})$. For the sphere, it holds for $L_1 = L_2 = O(\alpha^{-1/d})$. For the torus it holds for $L_1 = L_2 = 1$.

Proof. For the torus, the map $\varphi(x)$ has $\nabla\varphi(x) = I_d$ at every $x \in \mathbb{R}^d$, which implies that Assumption 2.1 is satisfied for $L_1 = L_2 = 1$.

Sphere. In the case of the sphere, which we embed via the map $\psi$ as a unit sphere
in $\mathbb{R}^d$, one can easily observe that e.g. $\|\nabla\varphi(z)\| \leq O(1)$ for any $z$ outside a ball of radius $r \geq \Omega(1)$ centered at the origin. As the volume of a ball of radius $r = \alpha$ is $r^d$ times the volume of the unit ball, one can use standard Gaussian concentration inequalities to show that the Ornstein-Uhlenbeck process $Z_t$, which is a Gaussian process, will remain outside this ball for time $T$ with probability at least $1 - 4r^dT$. Moreover, by standard Gaussian concentration inequalities [33], we have that $\|Z_t\| \leq 2\sqrt{Td\log(\frac{1}{\alpha})}$ with probability at least $1 - 2\alpha$ for all $t \in [0,T]$. This motivates defining the set
$$\Omega_t := \left\{z \in \mathbb{R}^d : \left(\frac{\alpha}{4T}\right)^{\frac{1}{d}} \leq \|z\| \leq 2\sqrt{Td\log\left(\frac{1}{\alpha}\right)}\right\},$$
as we then have $P(Z_t \in \Omega_t \ \forall t \in [0,T]) \geq 1 - \alpha$. Since $\|z\| \geq (\frac{\alpha}{4T})^{\frac{1}{d}}$ for any $z \in \Omega_t$ and any $t \in [0,T]$, we must have that
$$\|\nabla\varphi(z(U,\Lambda))\|_{2\to2} \leq 3\left(\frac{4T}{\alpha}\right)^{\frac{2}{d}} = L_1, \qquad \left\|\frac{d}{dU}\nabla\varphi(z(U,\Lambda))\right\|_{2\to2} \leq 3\left(\frac{4T}{\alpha}\right)^{\frac{2}{d}} = L_1,$$
$$\|\nabla^2\varphi(z(U,\Lambda))\|_{2\to2} \leq 3\left(\frac{4T}{\alpha}\right)^{\frac{3}{d}} = L_2, \qquad \left\|\frac{d}{dU}\nabla^2\varphi(z(U,\Lambda))\right\|_{2\to2} \leq 3\left(\frac{4T}{\alpha}\right)^{\frac{3}{d}} = L_2,$$
$$\left\|\frac{d}{dU}z(U,\Lambda)\right\|_{2\to2} \leq \|z\|.$$

Unitary group. We next show that the Lipschitz property holds for the unitary group U(n). Similar techniques can be used for the case of the special orthogonal group, and we omit those details. We first recall results from random matrix theory which allow us to bound the eigenvalue gaps of a matrix with Gaussian entries. Specifically, these results say that, roughly speaking, if $Z_0 \in \mathbb{C}^{n\times n}$ is any matrix and $Z_t = Z_0 + B(t)$, where $B(t)$ is a matrix with (complex) i.i.d. $N(0,t)$ entries undergoing Brownian motion, then the eigenvalues $\gamma_1(t) \geq \cdots \geq \gamma_n(t)$ of $Z_t + Z_t^*$ satisfy (see e.g. [2, 24, 25])
$$P\left(\inf_{s\in[t_0,T]}\big(\gamma_{i+1}(t)-\gamma_i(t)\big) \leq s\,\frac{1}{\mathrm{poly}(d)\sqrt{t}}\right) \leq O\big(s^{\frac{1}{2}}\big) \quad \forall s\geq 0.$$
(40)

Thus, if we define $\Omega_t$ to be the set of outcomes of $Z_t$ such that $\gamma_{i+1}(t) - \gamma_i(t) \geq \alpha^2 \frac{1}{\mathrm{poly}(n)\sqrt{t}}$, we have that $P(Z_t \in \Omega_t \ \forall t \in [t_0,T]) \geq 1-\alpha$. From the matrix calculus formulas for $\nabla\varphi(U^\top\Lambda U)$, $\frac{d}{dU}\nabla\varphi(U^\top\Lambda U)$, $\nabla^2\varphi(U^\top\Lambda U)$, and $\frac{d}{dU}\nabla^2\varphi(U^\top\Lambda U)$, we have that, for all $z(U,\Lambda) = U\Lambda U^\top \in \Omega_t$,
$$\|\nabla\varphi(z(U,\Lambda))\|_{2\to2} \leq \sum_{i=1}^{d}\frac{1}{\lambda_{i+1}-\lambda_i} \leq d^{1.5}\sqrt{t}\,\alpha^{-\frac{1}{3}} = L_1,$$
$$\left\|\frac{d}{dU}\nabla\varphi(z(U,\Lambda))\right\|_{2\to2} \leq \|\Lambda\|_{2\to2}\sum_{i=1}^{d}\frac{1}{\lambda_{i+1}-\lambda_i} \leq \left(C + \sqrt{Td\log\left(\frac{1}{\alpha}\right)}\right)\times\sum_{i=1}^{d}\frac{1}{\lambda_{i+1}-\lambda_i} \leq d^{1.5}\sqrt{t}\,\alpha^{-\frac{1}{3}} = L_1,$$
$$\left\|\nabla^2\varphi(z(U,\Lambda))\right\|_{2\to2} \leq \sum_{i=1}^{d}\frac{1}{(\lambda_{i+1}-\lambda_i)^2} \leq d^2 t\,\alpha^{-\frac{2}{3}} = L_2,$$
$$\left\|\frac{d}{dU}\nabla^2\varphi(z(U,\Lambda))\right\|_{2\to2} \leq \|\Lambda\|_{2\to2}\sum_{i=1}^{d}\frac{1}{(\lambda_{i+1}-\lambda_i)^2} \leq \left(C + \sqrt{Td\log\left(\frac{1}{\alpha}\right)}\right)\times\sum_{i=1}^{d}\frac{1}{(\lambda_{i+1}-\lambda_i)^2} \leq d^2 t\,\alpha^{-\frac{2}{3}} = L_2,$$
$$\left\|\frac{d}{dU}z(U,\Lambda)\right\|_{2\to2} \leq \|\Lambda\|_{2\to2},$$
since $\lambda_{i+1} - \lambda_i \geq \alpha^{\frac{1}{3}}\frac{1}{\sqrt{d}\sqrt{t}}$ for all $i \in [d]$ and $\|\Lambda\|_{2\to2} \leq 2\sqrt{Td\log(\frac{1}{\alpha})}$ whenever $z(U,\Lambda) \in \Omega_t$.

6.4 Proof of Lipschitzness of $f^\star$ and $g^\star$ on all of $M$ (Lemma 6.6)

Recall that we denote by $q_{t|\tau}(y|x)$ the transition kernel of the Ornstein-Uhlenbeck process $Z_t$ at any $x,y \in \mathbb{R}^d$, and by $q_t(x) = \int_M q_{t|0}(x|z)\pi(z)\,dz$ the distribution of $Z_t$ at any time $t \geq 0$. We will use the following Proposition of [8]:

Proposition 6.5 (Proposition 20 of [8]). Suppose that $\psi(\pi)$ has support on a ball of radius $C > 0$. For any $\alpha > 0$, define the "early stopping time" $t_0 := \min(\frac{\alpha}{C}, \frac{\alpha^2}{d})$. Then the drift term $\nabla\log q_t(\cdot)$ of the reverse diffusion SDE in Euclidean space is $O(\frac{1}{\alpha^2}dC^2(\min(C,\sqrt{d})^2))$-Lipschitz at every time $t > t_0$. Moreover, $W_2(q_{t_0},\pi) \leq \alpha$.

Recall that we denote by $\Gamma_{x\to y}(v)$ the parallel transport of a vector $v$ from $x$ to $y$.

Lemma 6.6.
Suppose that Assumption 2.1($\varphi, L_1, L_2, \alpha$) and Assumption 6.1($\psi, \pi, C$) both hold. Then for every $t \in [t_0, T]$,
$$\|f^\star(y,t) - \Gamma_{x\to y}(f^\star(x,t))\| \leq \mathcal{C}\times\rho(x,y) \quad \forall x,y \in M \tag{41}$$
and
$$\|g^\star(y,t) - \Gamma_{x\to y}(g^\star(x,t))\|_F \leq \mathcal{C}\times\rho(x,y) \quad \forall x,y \in M, \tag{42}$$
where $\mathcal{C} := (C + \sqrt{Td\log(\frac{1}{\alpha})})^4 \times L_3^2 \times L_1 + (C + \sqrt{Td\log(\frac{1}{\alpha})})^2 \times L_3 \times L_2$, $t_0 := \min(\frac{\alpha}{C}, \frac{\alpha^2}{d})$, and $L_3 = O(\frac{1}{\alpha^2}dC^2(\min(C,\sqrt{d})^2))$.

Proof. Recall that (when, e.g., $M$ is one of the aforementioned symmetric manifolds) we may decompose any $z \in \mathbb{R}^d$ as $z \equiv z(U,\Lambda)$ where $U \in M$. We have the following expression for $f^\star(U,t)$:
$$f^\star(U,t) = c_U \int_{\Lambda\in A}\left[(\nabla\varphi(z(U,\Lambda)))^\top\nabla\log q_{T-t|0}(z(U,\Lambda)) + \frac{1}{2}\mathrm{tr}(\nabla^2\varphi(z(U,\Lambda)))\right]\times q_{T-t}(z(U,\Lambda))\,\mathbb{1}_\Omega(\Lambda)\,d\Lambda,$$
where $c_U = \left(\int_{\Lambda\in A} q_{T-t}(z(U,\Lambda))\,\mathbb{1}_\Omega(\Lambda)\,d\Lambda\right)^{-1}$ is a normalizing constant. Then
$$\frac{d}{dU}f^\star(U,t) = c_U\times\frac{d}{dU}\int_{\Lambda\in A}\left[(\nabla\varphi(z(U,\Lambda)))^\top\nabla\log q_{T-t}(z(U,\Lambda)) + \frac{1}{2}\mathrm{tr}(\nabla^2\varphi(z(U,\Lambda)))\right]\times q_{T-t}(z(U,\Lambda))\,\mathbb{1}_\Omega(\Lambda)\,d\Lambda$$
$$+ \left(\frac{d}{dU}c_U\right)\times\int_{\Lambda\in A}\left[(\nabla\varphi(z(U,\Lambda)))^\top\nabla\log q_{T-t}(z(U,\Lambda)) + \frac{1}{2}\mathrm{tr}(\nabla^2\varphi(z(U,\Lambda)))\right]\times q_{T-t}(z(U,\Lambda))\,\mathbb{1}_\Omega(\Lambda)\,d\Lambda. \tag{43}$$
For the first term on
the r.h.s. of (43) we have, cU×d dU/integraldisplay Λ∈A/bracketleftbigg (∇Uφ(z(U,Λ)))⊤∇logqT−t(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/bracketrightbigg ×qT−t(z(U,Λ)) 1Ω(Λ)dΛ =cU×/integraldisplay Λ∈A/parenleftbiggd dU/bracketleftbigg (∇φ(z(U,Λ)))⊤∇logqT−t(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/bracketrightbigg/parenrightbigg 27 ×qT−t(z(U,Λ)) 1Ω(Λ)dΛ +cU×/integraldisplay Λ∈A/bracketleftbigg (∇φ(z(U,Λ)))⊤∇logqT−t(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/bracketrightbigg ×d dUqT−t(z(U,Λ)) 1Ω(Λ)dΛ =cU×/integraldisplay Λ∈A/parenleftbiggd dU/bracketleftbigg (∇φ(z(U,Λ)))⊤∇logqT−t(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/bracketrightbigg/parenrightbigg ×qT−t(z(U,Λ)) 1Ω(Λ)dΛ +cU×/integraldisplay Λ∈A/bracketleftbigg (∇φ(z(U,Λ)))⊤∇logqT−t(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/bracketrightbigg ×∇UlogqT−t(z(U,Λ))×qT−t(z(U,Λ)) 1Ω(Λ)dΛ =Ez(U,Λ)∼qT−t/bracketleftbiggd dU/parenleftbigg (∇φ(z(U,Λ)))⊤∇UlogqT−t|0(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/parenrightbigg 1Ω(Λ)/vextendsingle/vextendsingle/vextendsingle/vextendsingleV=U/bracketrightbigg +Ez(U,Λ)∼qT−t/bracketleftbigg/parenleftbigg (∇φ(z(U,Λ)))⊤∇UlogqT−t(z(UΛ)) +1 2tr(∇2φ(z(U,Λ)))/parenrightbigg ×∇UlogqT−t(z(U,Λ)) 1Ω(Λ)/vextendsingle/vextendsingle/vextendsingle/vextendsingleV=U/bracketrightbigg . For the second term on the r.h.s. 
of (43) we have, d dUcU=c2 U/integraldisplay Λ∈Ad dU(qT−t(z(U,Λ))) 1Ω(Λ)dΛ =c2 U/integraldisplay Λ∈Ad dU(elogqT−t(z(U,Λ))) 1Ω(Λ)dΛ =c2 U/integraldisplay Λ∈A∇UlogqT−t(z(U,Λ))(elogqT−t(z(U,Λ))) 1Ω(Λ)dΛ =c2 U/integraldisplay Λ∈A∇UlogqT−t(z(U,Λ))×qT−t(z(U,Λ)) 1Ω(Λ)dΛ =cU×Ez(U,Λ)∼qT−t/bracketleftbig∇UlogqT−t(z(U,Λ)) 1Ω(Λ)/vextendsingle/vextendsingleV=U/bracketrightbig and hence, (d dUcU)×/integraldisplay Λ∈A/bracketleftbigg (∇φ(z(U,Λ)))⊤∇logqT−t(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/bracketrightbigg ×qT−t(z(U,Λ)) 1Ω(Λ)dΛ =Ez(U,Λ)∼qT−t/bracketleftbig∇UlogqT−t(z(U,Λ)) 1Ω(Λ)/vextendsingle/vextendsingleV=U/bracketrightbig ×Ez(U,Λ)∼qT−t/bracketleftbigg/parenleftbigg (∇φ(z(U,Λ)))⊤∇logqT−t(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/parenrightbigg 1Ω(Λ)/vextendsingle/vextendsingle/vextendsingle/vextendsingleV=U/bracketrightbigg . 28 Thus d dUf⋆(U,t) =Ez(U,Λ)∼qT−t/bracketleftbiggd dU/parenleftbigg (∇φ(z(U,Λ)))⊤∇UlogqT−t|0(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/parenrightbigg 1Ω(Λ)/vextendsingle/vextendsingle/vextendsingle/vextendsingleV=U/bracketrightbigg , +Ez(U,Λ)∼qT−t/bracketleftbigg/parenleftbigg (∇φ(z(U,Λ)))⊤∇UlogqT−t(z(U,Λ)) +1 2tr(∇2φ(z(U,Λ)))/parenrightbigg ×∇UlogqT−t(z(U,Λ)) 1Ω(Λ)/vextendsingle/vextendsingle/vextendsingle/vextendsingleV=U/bracketrightbigg +Ez(U,Λ)∼qT−t/bracketleftbig∇UlogqT−t(z(U,Λ)) 1Ω(Λ)/vextendsingle/vextendsingleV=U/bracketrightbig ×Ez(U,Λ)∼qT−t/bracketleftbigg/parenleftbigg (∇φ(z(U,Λ)))⊤∇logqT−t(z(U,Λ)) (44) +1 2tr(∇2φ(z(U,Λ)))/parenrightbigg 1Ω(Λ)/vextendsingle/vextendsingle/vextendsingle/vextendsingleV=U/bracketrightbigg . (45) Moreover, by standard Gaussian concentration inequalities and Assumption 6.1, without loss of generality we have that ∥z(U,Λ)∥F≤C+√ Tdlog(1 α)for allz(U,Λ)∈Ωt. 
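The chain-rule step in the computation of $\frac{d}{dU}c_U$ above, $\frac{d}{dU}q = q \cdot \frac{d}{dU}\log q$ (writing $q = e^{\log q}$), can be checked numerically in one dimension; this toy check is ours.

```python
import numpy as np

def gaussian_pdf(x, mu=0.0, s2=1.0):
    """Standard Gaussian density, the 1-d stand-in for q."""
    return np.exp(-(x - mu) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

def score(x, mu=0.0, s2=1.0):
    """d/dx log p(x) for the Gaussian density above."""
    return -(x - mu) / s2

# Finite-difference check of the identity d/dx p(x) = p(x) * d/dx log p(x)
x, h = 0.7, 1e-6
fd = (gaussian_pdf(x + h) - gaussian_pdf(x - h)) / (2 * h)
assert abs(fd - gaussian_pdf(x) * score(x)) < 1e-8
```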
From Proposition 6.5 we have that $\nabla\log p_{T-t|0}(z(U,\Lambda))$ is $L_3$-Lipschitz, where $L_3:=O\big(\frac{1}{\alpha^{2}}\,d\,C^{2}\,(\min(C,\sqrt{d}))^{2}\big)$, and hence that
\begin{align}
\|\nabla_U\log p_{T-t|0}(z(U,\Lambda))\|_{2\to2}
&\le\Big\|\frac{d}{dU}z(U,\Lambda)\Big\|_{2\to2}\times\|\nabla\log p_{T-t|0}(z(U,\Lambda))\|_{2\to2}\nonumber\\
&\le\Big\|\frac{d}{dU}z(U,\Lambda)\Big\|_{2\to2}\times L_3\times\|z(U,\Lambda)\|_F\nonumber\\
&\le L_3\times\|z(U,\Lambda)\|_F^{2}\nonumber\\
&\le L_3\times\Big(C+\sqrt{Td\log(1/\alpha)}\Big)^{2},\tag{46}
\end{align}
where the third inequality holds by Assumption 2.1, and the last inequality holds since $\|z(U,\Lambda)\|_F\le C+\sqrt{Td\log(1/\alpha)}$ for all $z(U,\Lambda)\in\Omega_t$. Thus, plugging Assumption 2.1 and (46) into (44), we have that
$$\Big\|\frac{d}{dU}f^{\star}(U,t)\Big\|_{2\to2}\le\Big(C+\sqrt{Td\log(1/\alpha)}\Big)^{4}\times L_3^{2}\times L_1+\Big(C+\sqrt{Td\log(1/\alpha)}\Big)^{2}\times L_3\times L_2.\tag{47}$$
Replacing $f^{\star}$ with $g^{\star}$ in the above calculation, we also get that
$$\Big\|\frac{d}{dU}g^{\star}(U,t)\Big\|_{2\to2}\le\Big(C+\sqrt{Td\log(1/\alpha)}\Big)^{4}\times L_3^{2}\times L_1+\Big(C+\sqrt{Td\log(1/\alpha)}\Big)^{2}\times L_3\times L_2.\tag{48}$$
Thus, (47) and (48) imply that
$$\|f^{\star}(y,t)-\Gamma_{x\to y}(f^{\star}(x,t))\|\le\mathcal{C}\times\rho(x,y)\qquad\forall x,y\in M\tag{49}$$
and
$$\|g^{\star}(y,t)-\Gamma_{x\to y}(g^{\star}(x,t))\|_F\le\mathcal{C}\times\rho(y,x)\qquad\forall x,y\in M,\tag{50}$$
where $\mathcal{C}:=\big(C+\sqrt{Td\log(1/\alpha)}\big)^{4}\times L_3^{2}\times L_1+\big(C+\sqrt{Td\log(1/\alpha)}\big)^{2}\times L_3\times L_2$ (we write $\mathcal{C}$ to distinguish this constant from the data bound $C$).

6.5 Wasserstein to TV conversion on the manifold (Lemma 6.7)

Lemma 6.7 (Wasserstein to TV conversion on the manifold). There is a number $c\le\mathrm{poly}(d)$ such that for every $t\in[t_0,T]$ and any $\tau\le\frac{1}{c}$ we have
$$\|\mathcal{L}_{Y_{t+\tau+\hat\Delta}}-\mathcal{L}_{\hat y_{t+\tau+\hat\Delta}}\|_{TV}-\|\mathcal{L}_{Y_t}-\mathcal{L}_{\hat y_t}\|_{TV}\le\sqrt{D_{KL}\big(\nu_1\,\big\|\,p_{t+\tau+\hat\Delta|t+\tau}(\cdot|Y_{t+\tau})\big)}+\sqrt{D_{KL}\big(\nu_1\,\big\|\,\mathcal{L}_{\hat y_{t+\tau+\hat\Delta}|\hat y_t}\big)}\le O(\varepsilon c).\tag{51}$$

Proof of Lemma 6.7. Now that we have shown that $f^{\star}$ and $g^{\star}$ are $\mathrm{poly}(d)$-Lipschitz (by Lemmas 6.4 and 6.6), we can apply Lemma 6.3 to bound the Wasserstein distance: $W_2(\hat Y_{t+\tau},Y_{t+\tau})\le(\rho_2(\hat Y_t,Y_t)+\varepsilon)e^{c\tau}$ for all $\tau\ge0$, where $c\le\mathrm{poly}(d)$. Moreover, with slight abuse of notation, we may define $\hat y_{t+\tau}$ to be a continuous-time interpolation of the discrete process $\hat y$. Applying (12) to this process we get that, roughly, $W_2(\hat Y_{t+\tau},\hat y_{t+\tau})\le(\rho_2(\hat y_t,Y_t)+\varepsilon+\Delta)e^{c\tau}$ for $\tau\ge0$.
Thus, we get a bound on the Wasserstein error,
$$W_2(Y_{t+\tau},\hat y_{t+\tau})\le W_2(\hat Y_{t+\tau},Y_{t+\tau})+W_2(\hat Y_{t+\tau},\hat y_{t+\tau})\le(\rho_2(\hat y_t,Y_t)+\varepsilon+\Delta)e^{c\tau},\qquad\tau\ge0.\tag{52}$$
Unfortunately, after times $\tau>\frac{1}{c}=\frac{1}{\mathrm{poly}(d)}$, this bound grows exponentially with the dimension $d$. To overcome this challenge, we define a new coupling between $Y_t$ and $\hat Y_t$ which we "reset" after time intervals of length $\tau=\frac{1}{c}$, by converting our Wasserstein bound into a total variation bound after each time interval. Towards this end, we use the fact that if at any time $t$ the total variation distance satisfies $\|\mathcal{L}_{Y_t}-\mathcal{L}_{\hat y_t}\|_{TV}\le\alpha$, then there exists a coupling such that $Y_t=\hat Y_t$ with probability at least $1-\alpha$. In other words, w.p. $\ge1-\alpha$ we have $\rho(\hat y_{t+\tau},Y_{t+\tau})=0$, and we can apply inequality (52) over the next time interval of length $\tau$ without incurring an exponential growth in time. Repeating this process $\frac{T}{\tau}$ times, we get that $\|\mathcal{L}_{Y_T}-\mathcal{L}_{\hat y_T}\|_{TV}\le\alpha\times\frac{T}{\tau}$, where the TV error grows only linearly with $T$.

Converting Wasserstein bounds on the manifold to TV bounds. To complete the proof, we still need to show how to convert the Wasserstein bound into a TV bound. Towards this end, we begin by showing that the
|
https://arxiv.org/abs/2505.21640v1
|
transition kernel $\tilde p_{t+\tau+\hat\Delta|t+\tau}(\cdot|H_{t+\tau})$ of the reverse diffusion $H_t$ in $\mathbb{R}^d$ is close to a Gaussian in KL distance over short time steps $\hat\Delta$:
$$D_{KL}\Big(N\big(H_{t+\tau}+\hat\Delta\,\nabla\log\tilde p_{T-t-\tau}(H_{t+\tau}),\,\hat\Delta I_d\big)\,\Big\|\,\tilde p_{t+\tau+\hat\Delta|t+\tau}(\cdot|H_{t+\tau})\Big)\le\frac{\alpha\tau}{T}.$$
One can do this using Girsanov's theorem, since, unlike the diffusion $Y_t$ on the manifold, the reverse diffusion $H_t$ in Euclidean space does have a constant diffusion term (see e.g. Theorem 9 of [8]). Next, we use the fact that with probability at least $1-\frac{\alpha\tau}{T}$ the map $\varphi$, in a ball of radius $\frac{1}{\mathrm{poly}(d)}$ about the point $H_{t+\tau}$, has a $c$-Lipschitz Jacobian where $c=\mathrm{poly}(d)$, and that the inverse of the exponential map $\exp(\cdot)$ has an $O(1)$-Lipschitz Jacobian, to show that the transition kernel $p_t$ of $Y_t=\varphi(H_t)$ satisfies
$$D_{KL}\big(\nu_1\,\big\|\,p_{t+\tau+\hat\Delta|t+\tau}(\cdot|Y_{t+\tau})\big)\le(1+\hat\Delta c)^{d}\,\frac{\alpha\tau}{T}\le\frac{2\alpha\tau}{T},$$
if we choose $\hat\Delta\le O(\frac{1}{cd})$, where $\nu_1:=\exp_{Y_{t+\tau}}\big(N(Y_{t+\tau}+\hat\Delta f^{\star}(Y_{t+\tau},t+\tau),\,\hat\Delta\,g^{\star2}(Y_{t+\tau},t+\tau)I_d)\big)$.

Next, we plug our Wasserstein bound $W(Y_{t+\tau},\hat y_{t+\tau})\le O(\varepsilon)$ into the formula for the KL divergence between two Gaussians to bound $\|\mathcal{L}_{Y_{t+\tau+\hat\Delta}}-\mathcal{L}_{\hat y_{t+\tau+\hat\Delta}}\|_{TV}$. Specifically, noting that $\mathcal{L}_{\hat y_{t+\tau+\hat\Delta}|\hat y_t}=\exp_{\hat y_{t+\tau}}\big(N(\hat y_{t+\tau}+\hat\Delta f(\hat y_{t+\tau},t+\tau),\,\hat\Delta\,g^{2}(\hat y_{t+\tau},t+\tau)I_d)\big)$, we have that
$$D_{KL}\big(\nu_1,\mathcal{L}_{\hat y_{t+\tau+\hat\Delta}|\hat y_{t+\tau}}\big)=\mathrm{Tr}\Big(\big(g^{\star2}(Y_{t+\tau},t+\tau)\big)^{-1}g^{2}(\hat y_{t+\tau},t+\tau)\Big)-d+\log\frac{\det g^{\star2}(Y_{t+\tau},t+\tau)}{\det g^{2}(\hat y_{t+\tau},t+\tau)}+w^{\top}\big(\hat\Delta\,g^{\star2}(Y_{t+\tau},t)\big)^{-1}w,$$
where $w:=Y_{t+\tau}-\hat y_{t+\tau}+\hat\Delta\big(f^{\star}(Y_{t+\tau},t+\tau)-f(\hat y_{t+\tau},t+\tau)\big)$. Since with probability $\ge1-\frac{\alpha\tau}{T}$ we have $g^{\star}(Y_{t+\tau})\succeq\frac{1}{\mathrm{poly}(d)}$, plugging in the error bounds $\|f^{\star}(Y_{t+\tau},t)-f(Y_{t+\tau},t)\|\le\varepsilon$ and $\|g^{\star}(Y_{t+\tau},t)-g(Y_{t+\tau},t)\|_F\le\varepsilon$, and the $c$-Lipschitz bounds on $f^{\star}$ and $g^{\star}$ due to Lemmas 6.4 and 6.6, where $c=\mathrm{poly}(d)$, we get that $D_{KL}(\nu_1,\mathcal{L}_{\hat y_{t+\tau+\hat\Delta}})\le O(\varepsilon^{2}c^{2})$. Thus, by Pinsker's inequality, we have
$$\|\mathcal{L}_{Y_{t+\tau+\hat\Delta}}-\mathcal{L}_{\hat y_{t+\tau+\hat\Delta}}\|_{TV}-\|\mathcal{L}_{Y_t}-\mathcal{L}_{\hat y_t}\|_{TV}\le\sqrt{D_{KL}\big(\nu_1\,\big\|\,p_{t+\tau+\hat\Delta|t+\tau}(\cdot|Y_{t+\tau})\big)}+\sqrt{D_{KL}\big(\nu_1\,\big\|\,\mathcal{L}_{\hat y_{t+\tau+\hat\Delta}|\hat y_t}\big)}\le O(\varepsilon c).\tag{53}$$

6.6 Completing the proof of Theorem 2.2

Bounding the accuracy. Recall that $q_t$ is the distribution of the forward diffusion $Z_t$ in Euclidean space after time $t$, which is an Ornstein-Uhlenbeck process.
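As an aside, the closed-form KL divergence between two Gaussians used in the step above can be checked numerically. The sketch below implements the standard formula (which carries a $\frac12$ prefactor that the bound above absorbs into the $O(\cdot)$ notation) and compares it against a direct Monte Carlo estimate; all parameter values are arbitrary illustrative choices, not quantities from the paper.

```python
import numpy as np

def gaussian_kl(mu1, cov1, mu2, cov2):
    """KL(N(mu1,cov1) || N(mu2,cov2)), standard closed form (with the 1/2 factor)."""
    d = mu1.shape[0]
    inv2 = np.linalg.inv(cov2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(inv2 @ cov1) - d
                  + np.log(np.linalg.det(cov2) / np.linalg.det(cov1))
                  + diff @ inv2 @ diff)

rng = np.random.default_rng(0)
d = 3
mu1, mu2 = np.zeros(d), 0.1 * np.ones(d)
cov1, cov2 = np.eye(d), 1.2 * np.eye(d)

kl = gaussian_kl(mu1, cov1, mu2, cov2)

def logpdf(x, mu, cov):
    """Log-density of N(mu, cov), evaluated row-wise on x."""
    d = mu.shape[0]
    diff = x - mu
    inv = np.linalg.inv(cov)
    quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return -0.5 * (quad + d * np.log(2 * np.pi) + np.log(np.linalg.det(cov)))

# Monte Carlo estimate of E_{x ~ N1}[log N1(x) - log N2(x)]
x = rng.multivariate_normal(mu1, cov1, size=200_000)
kl_mc = np.mean(logpdf(x, mu1, cov1) - logpdf(x, mu2, cov2))
```

The trace, log-determinant, and quadratic terms correspond one-to-one to the terms in the displayed formula, with $w$ playing the role of the mean difference.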
Standard mixing bounds for the Ornstein-Uhlenbeck process imply that $\|q_t-N(0,I_d)\|_{TV}\le O(Ce^{-t})$ for all $t>0$ (see e.g. [3]). Thus, it is sufficient to choose $T=\log\big(\frac{1}{C\varepsilon}\big)$ to ensure that
$$\|\mathcal{L}_{Y_T}-\pi\|_{TV}=\|q_T-N(0,I_d)\|_{TV}\le O(\varepsilon).$$
As Lemma 6.7 holds for all $t\in[t_0,T]$, the distribution $\nu=\mathcal{L}_{\hat y_T}$ of our sampling algorithm's output satisfies
$$\|\pi-\nu\|_{TV}\le\|\mathcal{L}_{Y_T}-\pi\|_{TV}+\|\mathcal{L}_{Y_T}-\nu\|_{TV}\le O(\varepsilon)+O\Big(\varepsilon c\times\frac{T}{\tau}\Big)=O(\varepsilon\times\mathrm{poly}(d)).$$

Bounding the runtime of the sampling algorithm. Since our accuracy bound requires $T=\log\big(\frac{d}{\varepsilon C}\big)$, and requires a time-step size of $\Delta\le\frac{1}{\mathrm{poly}(d)}$, the number of iterations is bounded by
$$\frac{T}{\Delta}\le O\Big(\mathrm{poly}(d)\times\log\Big(\frac{d}{\varepsilon C}\Big)\Big).$$

6.7 Proof sketch for extension of sampling guarantees to the special orthogonal group

Similar techniques to those used in the case of the complex unitary group can be used to bound the accuracy and runtime of our sampling algorithm in the case of the real special orthogonal group. However, in the case of the special orthogonal group we encounter the additional challenge that, due to weaker "electrical repulsion" between the eigenvalues of real-valued random matrices, with high probability $\Omega(1)$ the gaps between neighboring eigenvalues $\gamma_{i+1}(t)-\gamma_i(t)$ may become exponentially small in $d$ over very short time intervals of length $O(e^{-d})$. To overcome this challenge, we first note that one can show that the gaps between non-neighboring eigenvalues do satisfy a polynomial lower bound at every time $t$ w.h.p. (see e.g. [2, 24, 25]):
$$\mathbb{P}\Bigg(\bigcap_{t\in[t_0,T]}\Big\{\gamma_{i+2}(t)-\gamma_i(t)\le\frac{s\sqrt{t}}{\sqrt{n}}\Big\}\Bigg)\le O\big(s^{1.5}\big).\tag{54}$$
Moreover, one can also show that, except over at most $O(n^{1.5})$ "bad" time intervals $[\tau_j,\tau_j+\Delta_j]$, each of length e.g. $O(\frac{1}{n^{5}})$, the gaps between all neighboring eigenvalues are at least $\frac{1}{n^{10}}$.
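The weaker eigenvalue repulsion for real symmetric matrices, relative to complex Hermitian ones, can be observed in a quick Monte Carlo experiment. The sketch below is purely illustrative (dimension, sample size, and normalizations are our choices, not values from the paper): it compares the average minimal eigenvalue gap of GOE-type and GUE-type random matrices at the same dimension, where the stronger repulsion of the complex ensemble yields larger typical minimal gaps.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 30, 200

def min_gap_goe():
    """Minimal eigenvalue gap of a normalized real symmetric (GOE-type) matrix."""
    a = rng.normal(size=(n, n))
    h = (a + a.T) / np.sqrt(2 * n)
    return np.min(np.diff(np.linalg.eigvalsh(h)))

def min_gap_gue():
    """Minimal eigenvalue gap of a normalized complex Hermitian (GUE-type) matrix."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / (2 * np.sqrt(n))
    return np.min(np.diff(np.linalg.eigvalsh(h)))

goe = np.mean([min_gap_goe() for _ in range(trials)])
gue = np.mean([min_gap_gue() for _ in range(trials)])
# stronger (quadratic vs. linear) level repulsion pushes the typical
# minimal GUE gap above the minimal GOE gap
```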
From the matrix calculus formula for $\varphi$, one can show that the eigenvalues and eigenvectors of the forward diffusion satisfy the following SDEs (these evolution equations, discovered by Dyson, are referred to as Dyson Brownian motion [13]):
$$\mathrm{d}\gamma_i(t)=\mathrm{d}B_{ii}(t)+\sum_{j\ne i}\frac{\mathrm{d}t}{\gamma_i(t)-\gamma_j(t)},\tag{55}$$
$$\mathrm{d}u_i(t)=\sum_{j\ne i}\frac{\mathrm{d}B_{ij}(t)}{\gamma_i(t)-\gamma_j(t)}\,u_j(t)-\frac12\sum_{j\ne i}\frac{\mathrm{d}t}{(\gamma_i(t)-\gamma_j(t))^{2}}\,u_i(t)\qquad\forall i\in[n].\tag{56}$$
From (54), one can see that over the "bad" time intervals $[\tau_j,\tau_j+\Delta_j]$, each eigenvalue $\gamma_i(t)$ has at most one neighboring eigenvalue, say $\gamma_{i+1}(t)$, with small gap $|\gamma_i(t)-\gamma_{i+1}(t)|\le O(\frac{1}{\sqrt{d}})$ w.h.p. Roughly speaking, this implies that only the interactions in (56) between eigenvectors whose eigenvalue gaps fall below $O(\frac{1}{n^{10}})$ are significant, while interactions between eigenvectors with larger eigenvalue gaps are negligible over these short time intervals. Thus, one can analyze the evolution of the eigenvectors (56) over these short time intervals as a collection of separable two-body problems consisting of interactions between pair(s) of eigenvectors.

More precisely, using (55), one can show that over the bad time intervals $[\tau_j,\tau_j+\Delta_j]$, any eigenvalue gap which falls below $\frac{1}{n^{10}}$ also remains below $\frac{1}{n^{8}}$ over a time sub-interval of length at least $\Omega\big(\frac{1}{(n^{8})^{2}}\big)$ w.h.p. This is because the eigenvalues $\gamma_i(t)$ and $\gamma_{i+1}(t)$ in (55) repel with an "electrical force" proportional to $\frac{1}{\gamma_i(t)-\gamma_{i+1}(t)}$, which implies that the eigenvalue gaps expand at a rate proportional to $\sqrt{t}$ (the stochastic term $\mathrm{d}B_{ii}(t)$ also leads to the same $\sqrt{t}$ expansion rate). Thus, using the evolution equations (56), one can show that, over the short bad intervals, the distribution of $[u_i(\tau_j+\Delta_j),u_{i+1}(\tau_j+\Delta_j)]\,|\,U(\tau_j)$ is $\frac{1}{\mathrm{poly}(d)}$-close in Wasserstein distance to the invariant (Haar) measure with respect to the action of $SO(2)$ on $[u_i(\tau_j),u_{i+1}(\tau_j)]$. This is because (by the Itô isometry) the time-averaged variance of $u_i(t)$ over this time interval is proportional to
$$\frac{1}{\Delta_j}\int_{\tau_j}^{\tau_j+\Delta_j}\frac{1}{(\gamma_i(t)-\gamma_j(t))^{2}}\,\mathrm{d}t\;\approx\;\int_{1/n^{10}}^{1/n^{8}}\frac{1}{(\sqrt{t})^{2}}\,\mathrm{d}t=\log\Big(\frac{1}{n^{8}}\Big)-\log\Big(\frac{1}{n^{10}}\Big)=(10-8)\log(n)=\Omega(\log(n)).$$
But the diameter of the 2-dimensional manifold (which is isomorphic to $SO(2)$) on which $u_i(t)$ and $u_{i+1}(t)$ (approximately) lie is $O(1)$.
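The eigenvalue equation (55) can be integrated numerically with a simple Euler–Maruyama scheme. The sketch below is illustrative only (dimension, step size, and horizon are our choices, not values from the paper); it shows the repulsion drift keeping the eigenvalues distinct and ordered along the simulated path.

```python
import numpy as np

rng = np.random.default_rng(2)

def dyson_eigenvalues(n=8, steps=2000, dt=1e-5):
    """Euler-Maruyama for d gamma_i = dB_ii + sum_{j != i} dt / (gamma_i - gamma_j)."""
    g = np.sort(rng.normal(size=n))        # initial eigenvalues, well separated w.h.p.
    for _ in range(steps):
        diff = g[:, None] - g[None, :]     # pairwise gamma_i - gamma_j
        np.fill_diagonal(diff, np.inf)     # exclude the j == i term from the sum
        drift = np.sum(1.0 / diff, axis=1)
        g = g + drift * dt + np.sqrt(dt) * rng.normal(size=n)
    return g

g = dyson_eigenvalues()
# the "electrical" repulsion term prevents eigenvalue crossings
```

Note that a production simulation would need an adaptive step size near small gaps, since the drift blows up as $\frac{1}{\gamma_i-\gamma_j}$; the fixed step here is adequate only for the well-separated initial condition above.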
Thus, after the time interval $[\tau_j,\tau_j+\Delta_j]$, the two neighboring eigenvectors $[u_i(\tau_j+\Delta_j),u_{i+1}(\tau_j+\Delta_j)]\,|\,U(\tau_j)$ are (approximately) distributed according to the Haar measure with respect to the action of $SO(2)$ on $[u_i(\tau_j),u_{i+1}(\tau_j)]$. Thus, one can show that as long as one uses a numerical step size $\Delta\le O(\frac{1}{\mathrm{poly}(d)})$, the transition kernels of both the continuous-time reverse diffusion $Y_t$ and the numerical simulation $\hat y_t$ over the time interval $[\tau_j,\tau_j+\Delta_j]$ will be very close (within Wasserstein distance $\Delta\cdot\mathrm{poly}(d)$) to the Haar measure on the action of $SO(2)$ on $[u_i(\tau_j),u_{i+1}(\tau_j)]$. As the Lipschitz property does hold outside the bad intervals $[\tau_j,\tau_j+\Delta_j]$, the remainder of the proof follows in the same way as for the case of $U(n)$.

7 Conclusion and future work

We introduce a new diffusion model with a spatially varying covariance structure, enabling efficient training on symmetric manifolds with non-zero curvature. By leveraging manifold symmetries, we ensure the reverse diffusion satisfies an "average-case" Lipschitz condition, which underpins both the accuracy and efficiency of our sampling algorithm.

Our approach improves training runtime and sample quality on symmetric manifolds, significantly narrowing the gap between manifold-based diffusion models and their Euclidean counterparts. Furthermore, the model naturally extends to conditional generation: given a conditioning variable $y$, one can feed $y$ as an additional input to the learned drift and covariance functions, mirroring conditional diffusion models in Euclidean settings.

Several open directions remain. One is to extend our framework to more general manifolds, such as the manifold of positive semi-definite matrices, or other domains admitting a projection oracle satisfying suitable average-case smoothness properties (see Appendix D). Another direction is to handle distributions supported on a union of manifolds with varying dimensions, such as the GEOM-DRUGS dataset [19], which lies on a union of tori.
Finally, while our method yields polynomial-in-$d$ bounds
on sampling accuracy (improving upon prior works that lacked such guarantees), tightening this dependence remains an important challenge for future research.

Acknowledgments

OM was supported in part by a Google Research Scholar award. NV was supported in part by NSF CCF-2112665.

References

[1] Brian D. O. Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313–326, 1982.
[2] Greg W. Anderson, Alice Guionnet, and Ofer Zeitouni. An Introduction to Random Matrices. Number 118. Cambridge University Press, 2010.
[3] Dominique Bakry, Ivan Gentil, Michel Ledoux, et al. Analysis and Geometry of Markov Diffusion Operators, volume 103. Springer, 2014.
[4] Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Maximillian Nickel, Aditya Grover, Ricky T. Q. Chen, and Yaron Lipman. Matching normalizing flows and probability paths on manifolds. In International Conference on Machine Learning, pages 1749–1763. PMLR, 2022.
[5] Joe Benton, Valentin De Bortoli, Arnaud Doucet, and George Deligiannidis. Nearly d-linear convergence bounds for diffusion models via stochastic localization. In The Twelfth International Conference on Learning Representations, 2024.
[6] Hongrui Chen, Holden Lee, and Jianfeng Lu. Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions. In International Conference on Machine Learning, pages 4735–4763. PMLR, 2023.
[7] Ricky T. Q. Chen, Meta FAIR, and Yaron Lipman. Flow matching on general geometries. In ICLR, 2024.
[8] Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. In The Eleventh International Conference on Learning Representations, 2023.
[9] Xiang Cheng, Jingzhao Zhang, and Suvrit Sra. Theory and algorithms for diffusion processes on Riemannian manifolds. arXiv preprint arXiv:2204.13665, 2022.
[10] Yu Cheng, Yongshun Gong, Yuansheng Liu, Bosheng Song, and Quan Zou. Molecular design in drug discovery: a comprehensive review of deep generative models. Briefings in Bioinformatics, 22(6):bbab344, 2021.
[11] Kyle Cranmer, Gurtej Kanwar, Sébastien Racanière, Danilo J. Rezende, and Phiala E. Shanahan. Advances in machine-learning-based sampling motivated by lattice quantum chromodynamics. Nature Reviews Physics, 5(9):526–535, 2023.
[12] Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, and Arnaud Doucet. Riemannian score-based generative modelling. Advances in Neural Information Processing Systems, 35:2406–2422, 2022.
[13] Freeman J. Dyson. A Brownian-motion model for the eigenvalues of a random matrix. Journal of Mathematical Physics, 3(6):1191–1198, 1962.
[14] Wendelin Feiten, Muriel Lang, and Sandra Hirche. Rigid motion estimation using mixtures of projected Gaussians. In Proceedings of the 16th International Conference on Information Fusion, pages 1465–1472. IEEE, 2013.
[15] Thomas Hakon Gronwall. Note on the derivatives with respect to a parameter of the solutions of a system of differential equations. Annals of Mathematics, pages 292–296, 1919.
[16] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
[17] Elton P. Hsu. Stochastic Analysis on Manifolds. Number 38. American Mathematical Society, 2002.
[18] Chin-Wei Huang, Milad Aghajohari, Joey Bose, Prakash Panangaden, and Aaron C. Courville. Riemannian diffusion models. Advances in Neural Information Processing Systems, 35:2750–2761, 2022.
[19] Bowen Jing, Gabriele Corso, Jeffrey Chang, Regina Barzilay, and Tommi Jaakkola. Torsional diffusion for molecular conformer generation. Advances in Neural Information Processing Systems, 35:24240–24253, 2022.
[20] Jaehyeong Jo and Sung Ju Hwang. Generative modeling on manifolds through mixture of Riemannian diffusion processes. In International Conference on Machine Learning, 2024.
[21] Adam Leach, Sebastian M. Schmon, Matteo T. Degiacomi, and Chris G. Willcocks. Denoising diffusion probabilistic models on SO(3) for rotational alignment. In ICLR 2022 Workshop on Geometrical and Topological Representation Learning, 2022.
[22] David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. In ICLR, 2017.
[23] Aaron Lou, Minkai Xu, Adam Farris, and Stefano Ermon. Scaling Riemannian diffusion models. Advances in Neural Information Processing Systems, 36, 2024.
[24] Oren Mangoubi and Nisheeth K. Vishnoi. Private covariance approximation and eigenvalue-gap bounds for complex Gaussian perturbations. In The Thirty Sixth Annual Conference on Learning Theory, pages 1522–1587. PMLR, 2023.
[25] Oren Mangoubi and Nisheeth K. Vishnoi. Private low-rank approximation for covariance matrices, Dyson Brownian motion, and eigenvalue-gap bounds for Gaussian perturbations. Journal of the ACM, 72(2):1–88, 2025.
[26] E. Mathieu and M. Nickel. Riemannian continuous normalizing flows. In Advances in Neural Information Processing Systems, 2020.
[27] John Nash. The imbedding problem for Riemannian manifolds. Annals of Mathematics, 63(1):20–63, 1956.
[28] OpenAI. Video generation models as world simulators, 2023.
[29] Peter Petersen. Riemannian Geometry, volume 171. Springer, 2006.
[30] Harry Ernest Rauch. A contribution to differential geometry in the large. Annals of Mathematics, 54(1):38–55, 1951.
[31] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
[32] Noam Rozen, Aditya Grover, Maximilian Nickel, and Yaron Lipman.
Moser flow: Divergence-based generative modeling on manifolds. Advances in Neural Information Processing Systems, 34:17669–17680, 2021.
[33] Mark Rudelson and Roman Vershynin. Hanson-Wright inequality and sub-Gaussian concentration. Electronic Communications in Probability, 8(82):1–9, 2013.
[34] Nisheeth K. Vishnoi. Geodesic convex optimization: Differentiation on manifolds, geodesics, and convexity. CoRR, abs/1806.06373, 2018.
[35] Joseph L. Watson, David Juergens, Nathaniel R. Bennett, Brian L. Trippe, Jason Yim, Helen E. Eisenach, Woody Ahern, Andrew J. Borst, Robert J. Ragotte, Lukas F. Milles, et al. De novo design of protein structure and function with RFdiffusion. Nature, 620(7976):1089–1100, 2023.
[36] Jason Yim, Brian L. Trippe, Valentin De Bortoli, Emile Mathieu, Arnaud Doucet, Regina Barzilay, and Tommi Jaakkola. SE(3) diffusion model with application to protein backbone generation. In International Conference on Machine Learning, pages 40001–40039. PMLR, 2023.
[37] Yuchen Zhu, Tianrong Chen, Lingkai Kong, Evangelos A. Theodorou, and Molei Tao. Trivialized momentum facilitates diffusion generative modeling on Lie groups. In International Conference on Learning Representations, 2025.

A Additional simulation details

A.1 Datasets

Given a $d$-dimensional Riemannian manifold $M$, a number of mixture components $k\in\mathbb{N}$, points $m_1,\dots,m_k\in M$, and covariance matrices $C_1,\dots,C_k\in\mathbb{R}^{d\times d}$, we say that a random variable is distributed according to a wrapped Gaussian distribution with means $m_1,\dots,m_k$ and covariances $C_1,\dots,C_k$ (with equal weights on each component) if its distribution is equal to that of a random variable $X$ sampled as follows:
1. Sample an index $i$ uniformly at random from $\{1,\dots,k\}$.
2. Sample $Z\sim N(0,C_i)$.
3. Set $X=\exp_{m_i}(Z)$, where $\exp_{x}(\cdot)$ denotes the exponential map at any point $x\in M$.

Datasets on the torus $\mathbb{T}^d$. The synthetic dataset is sampled from a single wrapped Gaussian distribution, with mean at the origin $(0,\dots,0)^{\top}$ and covariance matrix $0.2 I_d$. A total of 30,000 points were sampled as the training dataset, and 10,000 were sampled as a test dataset to compute the log-likelihood of the generative model outputs.

Datasets on the special orthogonal group $SO(n)$. The dataset is constructed by first picking 2 random means $m_1,m_2\in SO(n)$ sampled from the uniform measure on $SO(n)$ (i.e., the invariant measure with respect to actions by the orthogonal group). We then sample 40,000 matrices from the wrapped Gaussian mixture distribution on $SO(n)$ with means $m_1,m_2$ and covariance $0.2 I_d$. 30,000 of these matrices are used for training, and the remaining 10,000 matrices comprise the test dataset used to evaluate the C2ST score of the generative model outputs.

Datasets on the unitary group $U(n)$. We use a dataset on $U(n)$ of unitary matrices representing time-evolution operators $e^{itH}$ of a quantum oscillator. Here $t$ is time, $H=\frac{\hbar}{2m}\Delta-V$ is a Hamiltonian, and $\Delta$ is the Laplacian. $V$ is a random potential $V(x)=\frac{\omega^{2}}{2}\|x-x_0\|^{2}$ with angular momentum $\omega$ sampled uniformly on $[2,3]$ and $x_0\sim N(0,1)$. As $\Delta,V$ are infinite-dimensional operators, matrices in $U(n)$ are obtained by retaining the (discretized) top $n$ eigenvectors of $\Delta,V$.

A.2 Neural network architecture, training hyperparameters, and hardware

Torus. In the case of the torus, the neural network architecture consists of a 4-layer MLP with a hidden dimension of $k$, with a $\sin$ activation function. We set $k=512$ for $d<1000$ and $k=2048$ for $d=1000$. The models were trained with a batch size of 512, with an appropriate variance scheduler. For each model, we trained the neural networks for 50K iterations when $d<1000$, and for 100K iterations when $d=1000$.

Special Orthogonal Group.
In the case of the special orthogonal group, the neural network architecture consists of a 4-layer MLP with a hidden dimension $k=512$, with a $\sin$ activation function. For each model, the neural networks were trained for 100K iterations, with a batch size of 512, and an appropriate variance scheduler.

Unitary Group. Following TDM [37], when training each model on the unitary group, we use a more complicated neural network than on the torus and special orthogonal group, to accommodate the more complicated quantum evolution operator datasets used in our simulations on the unitary group. For both the drift and diffusion terms, let $(X_i,t_i)$ be inputs into the neural network, where $X_i\in U(n)$ and $t_i\in[0,T]$ is time. The output of the neural network is then given by
$$\hat X_i=\mathrm{MLP}\big(N_G(\mathrm{MLP}_S(X_i),\mathrm{Emb}_{\sin}(t_i))\big),$$
where $\mathrm{MLP}$ is a 2-layer multi-layer perceptron of dimension $D$, $\mathrm{MLP}_S$ is $k$ skip-connected MLP layers of dimension $D$, $\mathrm{Emb}_{\sin}$ is a sinusoidal embedding of dimension $D$, and $N_G$ denotes group normalization. In our simulations, we set $k=8$ and $D=512$. For the drift term $\hat f(\cdot,\cdot)$ in each of the models, the final output is given by $X_{\mathrm{drift}}=\mathrm{proj}_{T_{X_i}U(n)}(\hat X)$, where $\mathrm{proj}_{T_{X_i}U(n)}$ is the projection onto the tangent space at $X_i$. For the diffusion term $\hat g(\cdot,\cdot)$ in our model, the neural network outputs a vector of dimension $d$. For each model, the neural networks were trained for 80K iterations, with a batch size of 512, and an appropriate variance scheduler.
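The tangent-space projection applied to the drift output has a simple closed form: for $X\in U(n)$, the tangent space at $X$ is $\{XA : A^{H}=-A\}$, so a matrix $V$ can be projected by taking the anti-Hermitian part of $X^{H}V$. A minimal numpy sketch (the helper name is ours, not the paper's):

```python
import numpy as np

def proj_tangent_unitary(X, V):
    """Project V onto the tangent space of U(n) at X: T_X U(n) = {X A : A^H = -A}."""
    A = X.conj().T @ V
    skew = (A - A.conj().T) / 2          # anti-Hermitian part of X^H V
    return X @ skew

# quick check on a random unitary X obtained from a QR decomposition
rng = np.random.default_rng(0)
n = 4
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
V = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
W = proj_tangent_unitary(Q, V)
# X^H W is anti-Hermitian, and projecting a second time changes nothing
```

Since $X$ is unitary, $X^{H}X=I$, so the map is idempotent, which is the defining property of an orthogonal projection onto the tangent space.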
Hardware. Simulations evaluating sample quality on the torus were run on an Apple M1 chip with 10 cores. Simulations on the special orthogonal group and unitary group were run on a single RTX 3070. All simulations evaluating per-iteration training runtime were run on a single RTX 3070 as well.

A.3 Evaluation metrics

In this section, we define the metrics used in our simulations.

Log-likelihood metric. Let $D=\{x_1,x_2,\dots,x_n\}$ be a synthetic dataset arising from a target distribution with density function $g$. We train the generative model $A$ on the dataset $D$. Next, we generate points $y^{A}_1,\dots,y^{A}_n$ which are outputs of the trained model $A$. Since the points $y^{A}_1,\dots,y^{A}_n$ are generated independently, the likelihood of generating the points $y^{A}_1,\dots,y^{A}_n$ given the target distribution $g$ is
$$\prod_{i=1}^{n}g(y^{A}_i).\tag{57}$$
The (average) log-likelihood of the generated points $y^{A}_1,\dots,y^{A}_n$ with respect to the target density $g$ is therefore
$$\frac{1}{n}\sum_{i=1}^{n}\log g(y^{A}_i).\tag{58}$$

C2ST metric. Suppose we have two distributions $P,Q$ and sampled points $S_P,S_Q$ where $S_P\sim P$, $S_Q\sim Q$. We denote the number of sampled points by $m:=|S_P|=|S_Q|$. One motivation behind the Classifier Two-Sample Test (C2ST) metric [22] is to perform a hypothesis test, where one wishes to decide whether to reject or accept the null hypothesis $P=Q$. By reporting the test statistic from this hypothesis test, the C2ST metric can also be used to evaluate the quality of samples generated by a generative model, as we do in our simulations. To compute the C2ST metric, construct a dataset
$$\mathcal{D}=\{(x_i,0)\}_{i=1}^{m}\cup\{(y_i,1)\}_{i=1}^{m}=:\{(z_i,l_i)\}_{i=1}^{2m},$$
where $x_i\in S_P$ and $y_i\in S_Q$. Partition $\mathcal{D}$ randomly into training and test datasets $\mathcal{D}_{tr}$ and $\mathcal{D}_{te}$, where $m_{te}:=|\mathcal{D}_{te}|$ denotes the number of points in the test dataset.
Suppose $f:\mathcal{D}\to[0,1]$ is a binary classifier trained on $\mathcal{D}_{tr}$ with $f(z_i)=P(l_i=1\,|\,z_i)$; then the value of the C2ST metric is computed as
$$\hat t=\frac{1}{m_{te}}\sum_{(z_i,l_i)\in\mathcal{D}_{te}}\mathbf{1}\!\left[\mathbf{1}\!\left(f(z_i)>\tfrac12\right)=l_i\right],\tag{59}$$
where $\mathbf{1}$ is the indicator function. The null distribution is approximately $N\!\big(\tfrac12,\tfrac{1}{4m_{te}}\big)$. Then if $P=Q$ we would have $\hat t\to0.5$. Whether to reject the null hypothesis can be decided by performing a $p$-value analysis using (59) and the null distribution. For the sake of comparison between model performance, we report the value of $\hat t$ instead. More specifically, as the statistic $\hat t$ can take values greater than or smaller than $0.5$, we report the value
$$\Big|\hat t-\tfrac12\Big|+\tfrac12,\tag{60}$$
where $\hat t$ is computed using (59).

Table 4: C2ST scores when training on a wrapped Gaussian mixture dataset on $SO(n)$. Lower scores indicate better-quality sample generation (range is $[0.5,1]$, and $0.5$ is optimal). For $n\ge9$, our model achieves the best C2ST scores.

Method | n=3 | n=5 | n=9 | n=12 | n=15
Euclidean | .51±.01 | .51±.01 | .62±.02 | .64±.02 | .72±.02
RSGM | .51±.01 | .57±.02 | .74±.01 | .81±.02 | .90±.02
TDM | .52±.01 | .53±.01 | .69±.03 | .73±.02 | .79±.03
Ours | .55±.01 | .56±.02 | .60±.03 | .61±.02 | .67±.03

A.4 Additional results

In this section, we provide additional empirical results.

Visual results on the torus $\mathbb{T}^d$. In Figure 2 we show points generated on the torus by the Euclidean diffusion model, the RSGM, and our model when trained on a dataset sampled from a wrapped Gaussian distribution. For the 2D torus, we plot the result as a 2D scatter plot. For higher-dimensional tori, for any sampled point $x\in\mathbb{T}^d$, the plot shows the first two angle coordinates $(x_0,x_1)$ of each generated point on the torus. We observe that, for dimensions $d\ge100$, our model appears
to generate points that visually resemble those of the target distribution more closely than the points generated by the Euclidean diffusion model or the RSGM model.

C2ST score and visual results on the special orthogonal group $SO(n)$. We train our model, a Euclidean diffusion model, RSGM, and TDM on a dataset sampled from a mixture of two wrapped Gaussian distributions on $SO(n)$ for $n\in\{3,5,9,12,15\}$. For $n\ge9$, our model achieves the lowest C2ST score; a lower C2ST score indicates higher-quality sample generation (Table 4). The visual results for our model, the Euclidean model, the RSGM, and TDM are shown in Figure 3. Here we plot the first and second entries of the first row of each generated matrix in $SO(n)$. Note that the target distribution is bimodal, yet the two modes appear visually as a single mode, as we are only plotting two of the matrix coordinates.

Additional visual results on the unitary group $U(n)$. Visual sample generation results on $U(n)$ were shown in Figure 1 of Section 5 for $n=15$. In Figure 4, we give visual results for additional values of $n$.

Runtime on the torus $\mathbb{T}^d$. Table 5 gives the per-iteration runtime of the Euclidean model, our model, TDM, and RSGM on the torus, for dimensions $d\in\{2,10,50,100,1000\}$. We observe that for each of these dimensions, our method has similar runtime to the Euclidean model, whereas RSGM is roughly 9 times slower than the Euclidean model.

Figure 2: Points generated by different models when training on a dataset sampled from a wrapped Gaussian target distribution on the torus of different dimensions $d\in\{2,10,50,100,1000\}$. (Rows: Target, Ours, Euclidean, RSGM.)

Figure 3: Points generated by different models trained on a Gaussian mixture dataset on $SO(n)$ for different values of $n$. (Rows: Target, Ours, Euclidean, RSGM, TDM; columns: $n=3,5,12,15$.)

Figure 4: Points generated on $U(n)$ for different values of $n$, when training on datasets comprising time-evolution operators of quantum harmonic oscillators with random potentials. For $n=9$ and $n=15$, we observe that our model generates samples resembling the data distribution, while the Euclidean, RSGM, and TDM models generate lower-quality samples.

Table 5: Per-iteration training runtime in seconds on the torus. The manifold-constrained diffusion model with the fastest runtime is in bold; the Euclidean model is in gray for comparison. For each dimension $d$, our model achieves a similar runtime to the Euclidean model, whereas RSGM is roughly 9 times slower.

Method | d=2 | d=10 | d=50 | d=100 | d=1000
Euclidean | 0.16±.00 | 0.17±.00 | 0.16±.00 | 0.17±.00 | 0.18±.01
RSGM | 1.36±.09 | 1.35±.11 | 1.39±.07 | 1.36±.06 | 1.42±.08
Ours | 0.15±.01 | 0.15±.01 | 0.16±.01 | 0.15±.01 | 0.16±.01

Table 6: Per-iteration training runtime in seconds on $SO(n)$. The manifold-constrained diffusion model with the fastest runtime is in bold; the Euclidean model is in gray for comparison. Our model's runtime remains within a factor of 1.3 of the Euclidean model for all $n$. Runtimes of TDM and RSGM increase more rapidly with dimension and are 51 and 66 times greater than the Euclidean model for $n=50$.

Method | n=3 | n=5 | n=10 | n=30 | n=50
Euclidean | 0.13±.01 | 0.12±.01 | 0.13±.00 | 0.12±.01 | 0.15±.00
RSGM | 0.73±.01 | 0.96±.08 | 1.18±.01 | 2.99±.12 | 9.89±.09
TDM | 0.62±.02 | 0.78±.05 | 1.67±.04 | 2.85±.09 | 7.63±.12
Ours | 0.13±.01 | 0.13±.01 | 0.14±.00 | 0.14±.01 | 0.20±.01

Runtime on the special orthogonal group $SO(n)$. Table 6 shows the per-iteration runtime of the Euclidean model, our model, RSGM, and TDM on the special orthogonal group $SO(n)$, for $n\in\{3,5,10,30,50\}$. We observe that our model's per-iteration training runtime remains within a factor of $1.3$ of the Euclidean model for all $n$. However, the per-iteration training runtimes of TDM and RSGM increase more rapidly with dimension and are, respectively, $51$ and $66$ times greater than the Euclidean model for $n=50$.

B Challenges encountered when applying Euclidean diffusion for generating points constrained to non-Euclidean symmetric manifolds

The following examples illustrate why using Euclidean diffusion models to enforce symmetric manifold constraints may be insufficient.

Example 1. Consider the problem of generating points from a distribution $\mu$ on the $d$-dimensional torus $\mathbb{T}^d=S^1\times\cdots\times S^1$, given a dataset $D$ sampled from $\mu$. A naive approach is to map the dataset $D$ from the torus to Euclidean space via the map $\psi$, which maps each point on the torus to its angles in $[0,2\pi)^d\subseteq\mathbb{R}^d$. One can then train a Euclidean diffusion model on the dataset $\psi(D)$. However, the map $\psi$ can greatly distort the geometry of $\mu$. To see why, let $\mu$ be a unimodal distribution on $\mathbb{T}^d$ with mode centered near $(0,\dots,0)$. The pushforward of $\mu$ under $\psi$ consists of a distribution with $2^d$ modes, each near one of the $2^d$ corners of the $d$-cube $[0,2\pi)^d$ (see Figure 5). Thus, a Euclidean diffusion model needs to learn a multimodal distribution, which may be much harder than learning a unimodal distribution.

Example 2. Another example is the problem of generating samples from a distribution on the manifold $SO(3)$ of rotation matrices. There is a natural map $\psi$ from $SO(3)$ to $\mathbb{R}^3$ which maps any $M\in SO(3)$ to its three Euler angles $(a,b,c)\in[-\pi,\pi]\times[-\frac{\pi}{2},\frac{\pi}{2}]\times[-\pi,\pi]\subseteq\mathbb{R}^3$.
However, $\psi$ has a singularity at $b=\frac{\pi}{2}$, which may make it harder to learn distributions with a region of high probability density passing through this singularity, as $\psi$ may separate this region into multiple disconnected regions. Additionally, it has been observed empirically that applying Euclidean diffusion models to generate Euler angles in $\mathbb{R}^3$ leads to samples of lower quality than those generated by diffusion models on the manifold $SO(3)$; see e.g. [21] and [35].

Figure 5: A probability density $\mu$ with one mode (blue) on the torus. The map $\psi$, which maps points in the $d$-dimensional torus $\mathbb{T}^d$ to Euclidean space $\mathbb{R}^d$, may break up the single mode on the torus into up to $2^d$ separated modes in $\mathbb{R}^d$. This can make the task of learning the pushforward of the target distribution on $\mathbb{R}^d$ much more challenging than the task of learning the original target distribution on the torus, as the distribution in $\mathbb{R}^d$ may have exponentially-in-$d$ more modes.

C Illustration of our framework for Euclidean space, torus, special orthogonal group, and unitary group

1. Euclidean space $\mathbb{R}^d$. In the Euclidean case, our algorithm (with the above choice of $\varphi,\psi$) recovers the algorithms of diffusion models on $\mathbb{R}^d$ from prior works (e.g., [16, 31]). The forward diffusion is the Ornstein-Uhlenbeck process with SDE $\mathrm{d}Z_t=-\frac12 Z_t\,\mathrm{d}t+\mathrm{d}B_t$ initialized at the target distribution $\pi$, where $B_t$ is the standard Brownian motion. The training objective for the drift term $f(z,t)$ of the reverse diffusion is given by
$$\left\|\frac{\hat z-b e^{-\frac12(T-t)}}{e^{-(T-t)}-1}-f(\hat z,t)\right\|^{2},$$
where $b$ is a point sampled from the dataset and $\hat z$ is a point sampled from $Z_{T-t}\,|\,\{Z_0=b\}$, which is Gaussian distributed as $N\big(be^{-\frac12(T-t)},\sqrt{1-e^{-(T-t)}}\,I_d\big)$ (see Section 3). The number of arithmetic operations to compute the training objective is therefore the same as for
https://arxiv.org/abs/2505.21640v1
previous diffusion models in Euclidean space.

2. Torus $\mathbb{T}^d$. For the torus, the forward and reverse diffusion of our model are the same as in previous diffusion models on the torus [12], [23]. The forward diffusion is given by the SDE $\mathrm{d}X_t = -\frac{1}{2} X_t \,\mathrm{d}t + \mathrm{d}B_t$ on the torus, initialized at the target distribution $\pi$. The only difference is in the training objective function. To obtain our objective function, we observe that $X_t$ is the projection $X_t = \varphi(Z_t)$ of the Ornstein-Uhlenbeck diffusion on $\mathbb{R}^d$ via our choice of projection map $\varphi$ for the torus. The drift term $f$ for the reverse diffusion can be trained by minimizing the objective function $\big\| \frac{\hat{z} - \psi(b) e^{-\frac{1}{2}(T-t)}}{e^{-(T-t)} - 1} - f(\varphi(\hat{z}), t) \big\|^2$, where $\hat{z} \sim \mathcal{N}\big(\psi(b) e^{-\frac{1}{2}(T-t)}, \sqrt{1 - e^{-(T-t)}}\, I_d\big)$. Our objective function can be computed in $O(d)$ arithmetic operations, improving by an exponential factor on the per-iteration training runtime of [12], which relies on an inefficient expansion of the heat kernel requiring an exponential-in-$d$ number of arithmetic operations to compute, and matching the per-iteration training runtime of [23], who derive a more efficient expansion for the heat kernel in the special case of the torus.

3. Sphere $S^{d-1}$.

Forward diffusion. We first choose the projection map $\varphi: \mathbb{R}^d \to S^{d-1}$ to be $\varphi(x) = \frac{x}{\|x\|}$ for $x \in \mathbb{R}^d$, $x \neq 0$, and $\psi: S^{d-1} \to \mathbb{R}^d$ to be the usual embedding of the unit sphere into $\mathbb{R}^d$. We define our forward diffusion to be the projection $X_t = \varphi(Z_t)$ of the Euclidean-space Ornstein-Uhlenbeck diffusion $Z_t$ onto the manifold $M$, where $Z_t$ is initialized at the pushforward $\psi(\pi)$ of the target distribution $\pi$ onto $\mathbb{R}^d$. Since the Ornstein-Uhlenbeck process $Z_t$ is a Gaussian process, each sample from our forward diffusion can be computed by drawing a single sample from a Gaussian distribution and computing the projection map $\varphi$ once. The forward and reverse diffusions of our model on the sphere are different from those of prior diffusion models on the sphere.
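As noted above, a forward-diffusion sample on the sphere requires only one Gaussian draw followed by one evaluation of the projection map $\varphi(x) = x/\|x\|$. A minimal NumPy sketch of this step (function names are ours, not the paper's):

```python
import numpy as np

def project_sphere(z):
    """Projection map phi(x) = x / ||x|| onto the unit sphere S^{d-1}."""
    return z / np.linalg.norm(z)

def forward_sample(psi_b, t, T, rng):
    """Sample X_{T-t} = phi(z_hat), where z_hat is the Gaussian marginal of
    the Ornstein-Uhlenbeck process in R^d started at psi(b)."""
    mean = psi_b * np.exp(-0.5 * (T - t))
    std = np.sqrt(1.0 - np.exp(-(T - t)))
    z_hat = mean + std * rng.standard_normal(psi_b.shape)
    return project_sphere(z_hat)

rng = np.random.default_rng(0)
b = project_sphere(rng.standard_normal(5))   # a dataset point on S^4
x = forward_sample(b, t=0.3, T=1.0, rng=rng)
```

This is the $O(d)$ step that, in our framework, replaces the SDE/ODE integration prior methods use to simulate the forward diffusion.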
The evolution of our forward diffusion $X_t$ on the sphere is governed by the SDE $\mathrm{d}X_t = \alpha(X_t, t)\big({-\tfrac{1}{2}} X_t \,\mathrm{d}t + \mathrm{d}B_t\big)$, initialized at the target distribution $\pi$, where the coefficient $\alpha(X_t, t)$ is given by the conditional expectation $\alpha(X_t, t) := \mathbb{E}\big[\frac{1}{\|Z_t\|} \,\big|\, \varphi(Z_t) = X_t\big]$. Our forward (and reverse) diffusion has a (time-varying and) spatially-varying covariance term $\alpha(X_t, t)\,\mathrm{d}B_t$ not present in prior models [12], [23]. This covariance term, which accounts for the curvature of the sphere, allows our forward diffusion to be computed as a projection of Euclidean Brownian motion onto the sphere despite the sphere's non-zero curvature.

Training the model. The SDE for the reverse diffusion of our model has both a drift and a covariance term. To train a model $f$ for the drift term, we first sample a point $b$ from the dataset $D$ at a random time $t \in [0, T]$, and a point $\hat{z}$ from the Ornstein-Uhlenbeck diffusion $Z_t$ initialized at $\psi(b)$, which is Gaussian distributed. Next, we project this sample $\hat{z}$ to obtain a sample $\varphi(\hat{z})$ from our forward diffusion $X_t$ on the manifold. Finally, we plug the point $\varphi(\hat{z})$ and the datapoint $b$ into the training objective function for the drift term $f$, which is given by the closed-form expression
$$\Big\| \frac{1}{\|\hat{z}\|}\Big(I - \frac{\hat{z}\hat{z}^\top}{\|\hat{z}\|^2}\Big) \frac{\hat{z} - \psi(b)\, e^{-\frac{1}{2}(T-t)}}{e^{-(T-t)} - 1} - f(\varphi(\hat{z}), t) \Big\|^2.$$
The model for the drift term $f$ is trained by minimizing the expectation of this objective function over random samples of $b \sim D$ and $\hat{z} \sim Z_t$. To learn the SDE of the reverse diffusion, we must also train a model for the spatially-varying covariance term, which is given by a $d \times d$ covariance matrix. Learning a dense matrix model
for this covariance term would require at least $d^2$ arithmetic operations. However, as a result of the symmetries of the sphere, the covariance matrix has additional structure: it is a multiple $\alpha(X_t, t)$ of the $d \times d$ identity matrix. Thus, to learn this covariance term, it is sufficient to train a model $\hat{\alpha}(X_t, t)$ for $\alpha(X_t, t)$. This can be accomplished by minimizing the objective function $\big(\hat{\alpha}(\varphi(\hat{z}), t) - \frac{1}{\|\hat{z}\|}\big)^2$. Evaluating our objective functions for the drift and covariance terms can thus be accomplished via a single evaluation of the projection map $\varphi(x) = \frac{x}{\|x\|}$, which requires $O(d \log \frac{1}{\delta})$ arithmetic operations to compute within accuracy $\delta > 0$ when generating the input to our training objective function; this is sublinear in the dimension $d^2$ of the covariance term.

In contrast, the forward diffusion used in prior diffusion models on the sphere [12], [23] cannot be computed as the projection of a Euclidean Brownian motion and must instead be computed by solving an SDE (or probability flow ODE) on the sphere. This requires a number of arithmetic operations which is a higher-order polynomial in the dimension $d$ and in the desired accuracy $\frac{1}{\delta}$ (the order of the polynomial depends on the specific SDE or ODE solver used). As their training objective function requires samples from the forward diffusion as input, the cost of computing their objective function is therefore at least a higher-order polynomial in $d$ and $\frac{1}{\delta}$ (for [12] it is exponential in $d$, since their training objective relies on an inefficient expansion for the heat kernel which takes $2^d$ arithmetic operations to compute).

Sample generation. Once the models $f(x, t)$ and $g(x, t)$ for the drift and covariance terms of our reverse diffusion are trained, we use these models to generate samples. First, we sample a point $z$ from the stationary distribution of the Ornstein-Uhlenbeck process $Z_t$ on $\mathbb{R}^d$, which is Gaussian distributed.
Next, we project this point $z$ onto the manifold to obtain a point $y = \varphi(z)$, and solve the SDE $\mathrm{d}Y_t = f(Y_t, t)\,\mathrm{d}t + g(Y_t, t)\,\mathrm{d}B_t$ given by our trained model for the reverse diffusion's drift and covariance over the time interval $[0, T]$, starting at the initial point $y$. To simulate this SDE we can use any off-the-shelf numerical SDE solver. The point $y_T$ computed by the numerical solver at time $T$ is the output of our sample generation algorithm.

4. Special orthogonal group $SO(n)$ and unitary group $U(n)$.

For the special orthogonal group $SO(n)$ and unitary group $U(n)$, the forward and reverse diffusion of our model are also different from those of previous works, as our model's diffusions have a spatially-varying covariance term to account for the non-zero curvature of these manifolds. As a result of this covariance term, our forward diffusion can be computed as a projection $\varphi$ of the Ornstein-Uhlenbeck process in $\mathbb{R}^d \equiv \mathbb{R}^{n \times n}$ (or $\mathbb{C}^{n \times n}$) onto the manifold $SO(n)$ (respectively $U(n)$). This projection can be computed via a single evaluation of the singular value decomposition of an $n \times n$ matrix, which requires at most $O(n^\omega) = O(d^{\frac{\omega}{2}})$ arithmetic operations, where $\omega \approx 2.37$ is the matrix multiplication exponent and $d = n^2$ is the manifold dimension.

The forward diffusion $U(t) \in SO(n)$ (or $U(t) \in U(n)$) of our model is given by the system of stochastic differential equations
$$\mathrm{d}u_i(t) = \sum_{j \in [n],\, j \neq i} \alpha_{ij}(t)\, \mathrm{d}B_{ij}\, u_j(t) - \frac{1}{2} \sum_{j \in [n],\, j \neq i} \beta_{ij}(t)\, u_i(t)\, \mathrm{d}t, \qquad (61)$$
where $\alpha_{ij}(t) := \mathbb{E}\big[\frac{1}{\lambda_i - \lambda_j} \,\big|\, \varphi(Z_t) = U(t)\big]$ and $\beta_{ij}(t) := \mathbb{E}\big[\frac{1}{(\lambda_i - \lambda_j)^2} \,\big|\, \varphi(Z_t) = U(t)\big]$ for every $i, j \in [n]$. A model for the drift term $f$ of the reverse diffusion can be trained by minimizing the objective function $\|R - \frac{1}{2} D U - f(\varphi(\hat{z}), t)\|_F^2$, where $R$ is the matrix whose $i$'th column is $R_i = \frac{e^{-\frac{1}{2}(T-t)}}{e^{-(T-t)} - 1}\, U (\lambda_i I - \Lambda)^{+} U^* \psi(b)\, u_i$ for each $i \in [n]$, and $D$ is the diagonal matrix with $i$'th diagonal entry $D_{ii} = \sum_{j \in [n],\, j \neq i} \frac{1}{\lambda_i - \lambda_j}$ for each $i \in [n]$. Here, $\hat{z} = b e^{-\frac{1}{2}(T-t)} + \sqrt{1 - e^{-(T-t)}}\, G$, where $G$ is a Gaussian random matrix with i.i.d. $\mathcal{N}(0, 1)$ entries, and $U \Lambda U^*$ denotes the spectral decomposition of $\hat{z} + \hat{z}^*$.

To learn the SDE of the reverse diffusion, we must also train a model for the covariance term, which is given by a $d \times d = n^2 \times n^2$ covariance matrix. To train a model for this covariance term with runtime sublinear in the number of matrix entries $n^4$, we observe that, as a result of the symmetries of the orthogonal (or unitary) group, the covariance term in (5) is fully determined by the $n^2$ scalar terms $\alpha_{ij}(t)$ for $i, j \in [n]$ and the $n \times n$ matrix $U$. Thus, to learn the covariance term, it is sufficient to train a model $A(U, t) \in \mathbb{R}^{n \times n}$ for these $n^2$ terms, which can be done by minimizing the objective function $\|A(U, t) - A\|_F^2$, where $A$ is the $n \times n$ matrix with $(i,j)$'th entry $A_{ij} = \frac{1}{\lambda_i - \lambda_j}$ for $i, j \in [n]$, and $\lambda_i$ denotes the $i$'th diagonal entry of $\Lambda$.

The training objective functions for both the drift and covariance terms can thus be computed via a singular value decomposition of an $n \times n$ matrix (and matrix multiplications of $n \times n$ matrices), which requires at most $O(n^\omega) = O(d^{\frac{\omega}{2}})$ arithmetic operations, where $\omega \approx 2.37$ is the matrix multiplication exponent and $d = n^2$ is the manifold dimension.

In contrast, the training objectives in prior works, including [12], [23], require an exponential-in-dimension number of arithmetic operations to compute, as they rely on the heat kernel of the manifold, which lacks an efficient closed-form expression. Instead, their training algorithms require computing an expansion for the heat kernel of these manifolds, which is given as a sum of terms over the $d$-dimensional lattice, and one requires computing roughly $2^d$ of these terms to compute the heat kernel within an accuracy of $O(1)$.

D Generalization to non-symmetric manifolds

While our theoretical framework and guarantees are developed for symmetric Riemannian manifolds, it is natural to ask whether the approach can extend to more general geometries.
In this section, we outline the minimal set of geometric and analytic conditions required for our guarantees (on runtime, simulation accuracy, and Lipschitz continuity) to continue holding on non-symmetric manifolds. We also illustrate, via a concrete example, how two of these conditions can be satisfied even on non-smooth, non-Riemannian domains, and discuss the challenges that arise in the absence of continuous symmetries.

Our guarantees rely on the following three key properties:

1. Exponential map oracle. An oracle for computing the exponential map on the manifold $M$.

2. Projection map oracle. A projection map $\varphi: \mathbb{R}^d \to M$, where $d = O(\dim(M))$, along with efficient computation of its Jacobian $J\varphi(x)$ and the trace of its Hessian $\mathrm{tr}(\nabla^2 \varphi(x))$, both of which appear in our training objective.

3. Lipschitz SDE on $M$. The projection $Y_t = \varphi(H_t)$ of the time-reversed Euclidean Brownian motion $H_t$ must satisfy a stochastic differential equation on $M$ whose drift and diffusion coefficients are $L$-Lipschitz everywhere on $M$, with $L$ growing at most polynomially in $d$. This is essential for the reverse diffusion process to be simulated accurately and efficiently.

Conditions (1) and (2) may hold even when $M$ lacks the high degree of symmetry assumed in our main results. For example, suppose $M$ is the boundary of a compact convex polytope $K \subseteq \mathbb{R}^d$ which contains a ball of radius $r > 0$ centered at a point $p$. Although such a polytope is not a smooth manifold, due to singularities at its vertices and edges, its
boundary is composed of piecewise-flat $(d-1)$-dimensional faces. Geodesics restricted to a single face are linear and computable efficiently, satisfying the spirit of property (1). For property (2), one can define a projection $\varphi: \mathbb{R}^d \to M$ that maps any $x \in \mathbb{R}^d$ to the point where the ray emanating from $p$ and passing through $x$ intersects the boundary $M$. This projection can be computed efficiently, e.g., via binary search or ray-casting techniques.

However, property (3) is significantly harder to satisfy in such domains. The drift of the reverse SDE projected onto $M$ exhibits discontinuities at the vertices and lower-dimensional faces of the polytope. In our analysis, we crucially rely on the continuous symmetries of the manifold to "smooth out" such irregularities and to prove average-case Lipschitz continuity (see the discussion on "average-case" Lipschitzness on page 6).

Even among smooth Riemannian manifolds, generalizing beyond symmetric spaces remains non-trivial. Examples include surfaces of revolution with varying curvature (e.g., tori with non-uniform cross-sections) or higher-genus manifolds such as a double torus. These lack the homogeneous structure exploited in our proofs and pose new challenges for both analysis and algorithm design. Extending our framework to such settings represents a promising direction for future research.

E Notation

1. Tangent space $T_x M$. Given a smooth manifold $M$ and a point $x \in M$, the tangent space at $x$ is denoted by $T_x M$.

2. Riemannian manifold. A Riemannian manifold is a smooth manifold $M$ equipped with a Riemannian metric $g$, which assigns to each point $x \in M$ a positive definite inner product $g_x: T_x M \times T_x M \to \mathbb{R}$.

3. Exponential map $\exp(x, v)$. Given $x \in M$ and $v \in T_x M$, there exists a unique geodesic $\gamma$ such that $\gamma(0) = x$ and $\gamma'(0) = v$. The exponential map is defined as $\exp(x, v) := \gamma(1)$, i.e., the point reached by traveling along the geodesic for unit time.
4. Parallel transport $\Gamma_{x \to y}(v)$. For $x, y \in M$ and $v \in T_x M$, $\Gamma_{x \to y}(v)$ denotes the parallel transport of $v$ along the (unique) distance-minimizing geodesic from $x$ to $y$. This transport yields a vector in $T_y M$.

5. Geodesic distance $\rho$. The geodesic distance $\rho(x, y)$ between points $x, y \in M$ is the length of the shortest path (geodesic) connecting them on the manifold.

6. Jacobian $J\varphi$. Let $\varphi: M \to N$ be a differentiable map between Riemannian manifolds. The Jacobian (or differential) at $x \in M$ is the linear map $J\varphi: T_x M \to T_{\varphi(x)} N$, defined by the directional derivative of $\varphi$ at $x$. In coordinates, $J\varphi(\partial_{x_i})_j = \partial_{x_i} \varphi_j$, where $\partial_{x_i}$ denotes the $i$th basis vector of $T_x M$. In the special case $M = \mathbb{R}^m$ and $N = \mathbb{R}^n$, $J\varphi$ is the matrix whose $(i,j)$th entry is $\partial \varphi_i / \partial x_j$.

7. Indicator function $\mathbf{1}_A(x)$. Given a set $A \subseteq X$, the indicator function $\mathbf{1}_A: X \to \{0, 1\}$ is defined by $\mathbf{1}_A(x) = 1$ if $x \in A$, and $\mathbf{1}_A(x) = 0$ otherwise.

8. Total variation distance. Given two probability measures $\mu$ and $\nu$ on a measurable space $X$, the total variation distance between them is defined as $\|\mu - \nu\|_{TV} := \sup_{A \subseteq X} |\mu(A) - \nu(A)|$, where the supremum is taken over all measurable subsets $A \subseteq X$.

9. KL divergence. For probability measures $\mu$ and $\nu$ on a measurable space $X$, with $\mu \ll \nu$ (i.e., $\mu$ is absolutely continuous with respect to $\nu$), the Kullback–Leibler (KL) divergence from $\nu$ to $\mu$ is defined as
$$D_{KL}(\mu \| \nu) := \int_X \log\Big(\frac{\mathrm{d}\mu}{\mathrm{d}\nu}(x)\Big)\, \mathrm{d}\mu(x),$$
where $\frac{\mathrm{d}\mu}{\mathrm{d}\nu}$ denotes the Radon–Nikodym derivative of $\mu$ with respect to $\nu$.

10. Pinsker's inequality. For any two probability measures $\mu$ and $\nu$, $\|\mu - \nu\|_{TV} \leq \sqrt{2 D_{KL}(\mu \| \nu)}$. This inequality provides an upper bound on the total variation distance in terms of the KL divergence.

11. Wasserstein distance $W_k(\mu, \nu)$. Let $\mu$ and $\nu$ be probability measures on a metric space $(M, \rho)$, and let $k \in \mathbb{N}$. The $k$-Wasserstein distance is defined as
$$W_k(\mu, \nu) := \inf_{\pi \in \Phi(\mu, \nu)} \Big( \mathbb{E}_{(X,Y) \sim \pi}\big[\rho^k(X, Y)\big] \Big)^{1/k},$$
where $\Phi(\mu, \nu)$ denotes the set of all couplings of $\mu$ and $\nu$.

12. Operator norm $\|A\|_{2 \to 2}$. Given a multilinear map $A: V_1 \times \cdots \times V_k \to W$ between normed vector spaces, the operator norm is:
$$\|A\|_{2 \to 2} := \sup_{v_1 \in V_1 \setminus \{0\},\, \ldots,\, v_k \in V_k \setminus \{0\}} \frac{\|A(v_1, \ldots, v_k)\|_2}{\|v_1\|_2 \cdots \|v_k\|_2}.$$

13. Partial derivative $\frac{\mathrm{d}}{\mathrm{d}U}$. In parameterizations of the form $x = x(U, \Lambda)$, we write $\frac{\mathrm{d}}{\mathrm{d}U} x(U, \Lambda)$ for the derivative with respect to $U \in M$. For example, if $M = SO(n)$ and $x(U, \Lambda) = U \Lambda U^\top$, then this derivative corresponds to projecting $U\Lambda + \Lambda U^\top$ onto the tangent space of $SO(n)$.

F Primer on Riemannian geometry and diffusions on manifolds

Let $M$ be a topological space equipped with an open cover $\{U_\alpha\}_{\alpha \in A}$ and a corresponding collection of homeomorphisms $\phi_\alpha: U_\alpha \to \mathbb{R}^d$. Each pair $(U_\alpha, \phi_\alpha)$ is called a chart, and the collection $\{(U_\alpha, \phi_\alpha)\}_{\alpha \in A}$ is referred to as an atlas for the manifold. For an optimization-oriented overview of smooth manifolds, geodesics, and differentiability, see [34]. We say that $M$ is a smooth manifold if the transition maps $\phi_\beta \circ \phi_\alpha^{-1}$ are $C^\infty$-smooth functions on their domain for all overlapping chart pairs $\alpha, \beta \in A$.

A real-valued function $f: M \to \mathbb{R}$ is differentiable at a point $x \in M$ if it is differentiable in some chart $(U_\alpha, \phi_\alpha)$ containing $x$. Similarly, a curve $\gamma: [0,1] \to M$ is differentiable if $\phi_\alpha(\gamma(t))$ is a differentiable curve in $\mathbb{R}^d$ for all $t$ such that $\gamma(t) \in U_\alpha$. The derivative of a differentiable curve passing through $x \in M$ defines a tangent vector at $x$. The collection of all tangent vectors at $x$ is the tangent space $T_x M$, which is isomorphic to $\mathbb{R}^d$.

A Riemannian manifold is a pair $(M, g)$ consisting of a smooth manifold $M$ and a smooth function $g$, called the Riemannian metric, which assigns to each point $x \in M$ a positive-definite inner product $g_x: T_x M \times T_x M \to \mathbb{R}$. By the fundamental theorem of Riemannian geometry (see, e.g., Theorem 2.2.2 in [29]), there exists a unique torsion-free affine connection $\nabla$ on $(M, g)$, known as the Levi-Civita connection, which enables isometric parallel transport between tangent spaces.

The Riemannian metric $g$ induces a length on any differentiable curve $\gamma$ via
$$\mathrm{length}(\gamma) = \int_0^1 \sqrt{g_{\gamma(t)}(\gamma'(t), \gamma'(t))}\, \mathrm{d}t.$$
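As a sanity check of this length formula, on the unit sphere the exponential map has the closed form $\exp(x, v) = \cos(\|v\|)\, x + \sin(\|v\|)\, v/\|v\|$ for a tangent vector $v \perp x$, and the discretized Riemannian length of the geodesic $t \mapsto \exp(x, tv)$ recovers the great-circle distance $\arccos\langle x, y\rangle$. A minimal NumPy sketch (our illustration, not code from the paper):

```python
import numpy as np

def exp_sphere(x, v):
    """Exponential map on the unit sphere: follow the great circle from x
    with initial tangent velocity v (assumed <x, v> = 0) for unit time."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def curve_length(gamma, n=20000):
    """Length of a discretized curve gamma: [0, 1] -> R^d, summing the
    Euclidean norms of successive chords (the induced metric)."""
    ts = np.linspace(0.0, 1.0, n)
    pts = np.array([gamma(t) for t in ts])
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, np.pi / 2, 0.0])        # tangent at x, length pi/2
geodesic = lambda t: exp_sphere(x, t * v)  # gamma(t) = exp(x, t v)
y = geodesic(1.0)
```

Here `curve_length(geodesic)` agrees with $\arccos\langle x, y\rangle = \pi/2$ up to discretization error.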
The distance between two points $x, y \in M$ is defined as the infimum of the lengths of all curves joining them:
$$\rho(x, y) := \inf_{\gamma(0) = x,\, \gamma(1) = y} \mathrm{length}(\gamma).$$
A geodesic is a curve $\gamma(t)$ such that parallel transport of the initial velocity $\gamma'(0)$ along $\gamma$ yields the velocity vector $\gamma'(t)$ at all times. Given any initial velocity $v \in T_x M$, there exists a unique geodesic $\gamma$ with $\gamma(0) = x$ and $\gamma'(0) = v$. The endpoint at unit time defines the exponential map: $\exp(x, v) := \gamma(1)$.

Given a smooth map $\varphi: M \to N$ between Riemannian manifolds, the Jacobian (or differential) at $x \in M$ is a linear map $J\varphi: T_x M \to T_{\varphi(x)} N$ defined as the directional derivative of $\varphi$ at $x$. In local coordinates, we may write $J\varphi(\partial_{x_i})_j = \partial_{x_i} \varphi_j$, where $\partial_{x_i}$ is the $i$th coordinate basis vector in $T_x M$ and $\varphi_j$ is the $j$th component function of $\varphi$. In the special case $M = \mathbb{R}^m$ and $N = \mathbb{R}^n$, the Jacobian becomes the matrix whose $(i,j)$th entry is $\frac{\partial \varphi_i}{\partial x_j}$.

Riemannian manifolds possess a notion of curvature that is intrinsic to the manifold and independent of any ambient embedding. This curvature is described by the Riemannian curvature tensor, a multilinear map that encodes how the manifold bends locally. At any point $x \in M$, the Riemannian curvature tensor is a multilinear map $R_x: T_x M \times T_x M \times T_x M \to T_x M$, which assigns to each pair of tangent vectors $u, v \in T_x M$ a linear operator $R(u, v): T_x M \to T_x M$. This operator acts on a third tangent vector $w \in T_x M$ according to the formula
$$R(u, v)w = \nabla_u \nabla_v w - \nabla_v \nabla_u w - \nabla_{[u,v]} w,$$
where $\nabla$ is the aforementioned Levi-Civita connection and $[u, v]$ is the Lie bracket of the vector fields $u$ and $v$. Intuitively, this expression measures the failure of second covariant derivatives to commute, and hence captures the intrinsic curvature of the manifold.

Let $M$ be a Riemannian manifold and let $x \in M$ be a point. For any two linearly independent tangent vectors $u, v \in T_x M$,
the sectional curvature $K(u, v)$ is defined as the Gaussian curvature of the 2-dimensional surface in $M$ obtained by exponentiating the plane spanned by $u$ and $v$ at $x$. Formally, the sectional curvature of the plane $\Pi = \mathrm{span}\{u, v\} \subseteq T_x M$ is given by
$$K(u, v) := \frac{\langle R(u, v)v,\, u \rangle}{\|u\|^2 \|v\|^2 - \langle u, v \rangle^2},$$
where $R$ is the Riemann curvature tensor, $\langle \cdot, \cdot \rangle$ is the Riemannian metric on $T_x M$, and $\|\cdot\|$ its associated 2-norm.

A vector field $J(t)$ along a geodesic $\gamma(t)$ on a Riemannian manifold $M$ is called a Jacobi field if it satisfies the second-order differential equation
$$\frac{D^2 J}{\mathrm{d}t^2} + R(J(t), \gamma'(t))\gamma'(t) = 0,$$
where $\frac{D}{\mathrm{d}t}$ denotes the covariant derivative along $\gamma$ and $R$ is the Riemann curvature tensor. Jacobi fields describe the infinitesimal variation of geodesics and are used to analyze how nearby geodesics converge or diverge. The behavior of Jacobi fields encodes information about the curvature of the manifold.

A fundamental result relating curvature to the behavior of geodesics is the Rauch comparison theorem. It states that the rate at which geodesics deviate from one another depends on the sectional curvature of the manifold. Formally, let $M_1$ and $M_2$ be two Riemannian manifolds of the same dimension, and suppose that along corresponding geodesics the sectional curvatures satisfy $K_1 \leq K_2$. Then, for Jacobi fields $J_1(t), J_2(t)$ orthogonal to the geodesics with the same initial length and vanishing initial derivative, we have $\|J_1(t)\| \geq \|J_2(t)\|$ for all $t > 0$. Intuitively, this means that geodesics spread apart more quickly in spaces with lower curvature. In particular, manifolds with non-negative sectional curvature constrain the divergence of nearby geodesics, a fact that we use in our analysis of diffusion processes on symmetric manifolds.

Given two probability measures $\mu, \nu$ on $M$ and an integer $k \in \mathbb{N}$, the $k$-Wasserstein distance between $\mu$ and $\nu$ is
$$W_k(\mu, \nu) := \inf_{\pi \in \Phi(\mu, \nu)} \Big( \mathbb{E}_{(X,Y) \sim \pi}\big[\rho^k(X, Y)\big] \Big)^{1/k},$$
where $\Phi(\mu, \nu)$ is the set of all couplings (joint distributions) with marginals $\mu$ and $\nu$.
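For measures on the real line, the infimum over couplings in this definition is attained by the monotone (sorted) coupling, so for two empirical measures with equally many atoms $W_k$ reduces to averaging the $k$-th powers of sorted differences. A small illustrative sketch (a standard fact about one-dimensional optimal transport, not a construction from the paper):

```python
import numpy as np

def wasserstein_empirical(xs, ys, k=1):
    """k-Wasserstein distance between two empirical measures on R with
    equally many atoms: the optimal coupling matches sorted samples."""
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.mean(np.abs(xs - ys) ** k) ** (1.0 / k))

a = np.array([0.0, 1.0, 2.0])
b = np.array([0.5, 1.5, 2.5])   # a shifted by 0.5
```

Since `b` is a translate of `a` by 0.5, every $W_k$ equals 0.5 here.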
Diffusions on manifolds can be defined analogously to Euclidean settings, by interpreting $\mathrm{d}B_t$ as infinitesimal Brownian motion in the tangent space (see [17]). In particular, Itô's Lemma extends to maps $\psi: M \to N$ between Riemannian manifolds via the Nash embedding theorem [27], which ensures that any $d$-dimensional Riemannian manifold can be isometrically embedded in $\mathbb{R}^{2d+1}$.
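The total variation distance, KL divergence, and Pinsker's inequality from Appendix E can be sanity-checked on a pair of Bernoulli distributions, using the bound $\|\mu - \nu\|_{TV} \leq \sqrt{2 D_{KL}(\mu \| \nu)}$ as stated there. A minimal sketch:

```python
import math

def tv_bernoulli(p, q):
    # sup_A |mu(A) - nu(A)| for Bernoulli(p) vs Bernoulli(q) is |p - q|.
    return abs(p - q)

def kl_bernoulli(p, q):
    # D_KL(Bernoulli(p) || Bernoulli(q)), assuming 0 < p, q < 1.
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

p, q = 0.6, 0.4
tv = tv_bernoulli(p, q)
kl = kl_bernoulli(p, q)
# Pinsker's inequality as stated in Appendix E: TV <= sqrt(2 * KL).
```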
arXiv:2505.21652v1 [cs.RO] 27 May 2025

PartInstruct: Part-level Instruction Following for Fine-grained Robot Manipulation

Yifan Yin∗1, Zhengtao Han∗2, Shivam Aarya1, Jianxin Wang1, Shuhang Xu1, Jiawei Peng1, Angtian Wang1, Alan Yuille1, Tianmin Shu1
1Johns Hopkins University  2ShanghaiTech University
https://partinstruct.github.io

Figure 1: An example fine-grained robot manipulation task in PartInstruct. [Figure instruction labels: "Pick up the bottle and show me the cap"; "Grasp the left body of the bottle"; "Move the bottle upwards"; "Rotate the bottle so the cap faces front"] To successfully perform the task described in the instruction (e.g., showing the cap without occluding it), the robot needs to reason about which object parts are relevant, ground the parts in its 3D visual perception, and plan a sequence of part-level manipulation skills (e.g., the bottom sequence). Naive object manipulation without a detailed understanding of object parts will fail to achieve the intended goal (e.g., the top sequence). These tasks thus pose challenges for robust 3D vision and part-level grounding and reasoning. We show more examples on the project website.

Abstract—Fine-grained robot manipulation, such as lifting and rotating a bottle to display the label on the cap, requires robust reasoning about object parts and their relationships with intended tasks. Despite recent advances in training general-purpose robot manipulation policies guided by language instructions, there is a notable lack of large-scale datasets for fine-grained manipulation tasks with part-level instructions and diverse 3D object instances annotated with part-level labels. In this work, we introduce PartInstruct, the first large-scale benchmark for both training and evaluating fine-grained robot manipulation models using part-level instructions. PartInstruct comprises 513 object instances across 14 categories, each annotated with part-level information, and 1302 fine-grained manipulation tasks organized into 16 task classes.
Our training set consists of over 10,000 expert demonstrations synthesized in a 3D simulator, where each demonstration is paired with a high-level task instruction, a chain of base part-based skill instructions, and ground-truth 3D information about the object and its parts. Additionally, we designed a comprehensive test suite to evaluate the generalizability of learned policies across new states, objects, and tasks. We evaluated several state-of-the-art robot manipulation approaches, including end-to-end vision-language policy learning and bi-level planning models for robot manipulation, on our benchmark. The experimental results reveal that current models struggle to robustly ground part concepts and predict actions in 3D space, and face challenges when manipulating object parts in long-horizon tasks.

*Equal contribution. Zhengtao Han completed this work during an internship at JHU.

I. INTRODUCTION

There has been an increasing interest in training general-purpose vision-language policies for robot manipulation guided by language instructions [24, 17, 50, 16, 32, 51], particularly with the recent advances in large generative models [3, 41]. These models represent a promising type of method for solving general robot manipulation problems, as they have the potential to follow natural language instructions to complete any described task. Prior works on language-guided robot manipulation have mainly focused on high-level manipulation tasks involving simple objects (such as rearranging blocks). However, in the real world, robots often need to perform fine-grained manipulation of diverse everyday objects, in which the robots need to not only identify the target
https://arxiv.org/abs/2505.21652v1
object but also understand and interact with specific parts of that object to perform the intended task as instructed. This involves reasoning about the relationship between the part and the task, and grounding that understanding into precise motion planning. For instance, to successfully perform the manipulation task defined in the instruction as shown in Figure 1, the robot needs to identify crucial parts of the object relevant to the task (e.g., the label on the cap of the bottle) and reason about a chain of base part-based skills that would lead to the desired goal state implied by the instruction, which is to display the label clearly to the human user without occlusion.

Table I: Comparison of PartInstruct with existing tabletop robot manipulation benchmarks based on: the number of distinctive part-level instructions, the number of part labels, the number of fine-grained part-level tasks, availability of training demonstrations, and whether these demonstrations include part-level annotations such as 2D and 3D segmentation masks.

Name | # Part Instruct | # Part Labels | # Part-level Tasks | Demo | 2D Part Mask | 3D Part Mask
CALVIN | 6 | - | 6 | ✓ | ✗ | ✗
RLbench | 136 | - | 64 | ✗ | ✗ | ✗
VIMAbench | - | - | - | ✓ | ✗ | ✗
LoHoRavens | - | - | - | ✓ | ✗ | ✗
ManiSkill (SAPIEN) | - | 14,068 | - | ✓ | ✗ | ✗
PartManip | - | 8,489 | 1,432 | ✗ | ✗ | ✗
Open6DOR | 2,447 | - | 1,419 | ✗ | ✗ | ✗
PartInstruct (ours) | 4,043 | 4,653 | 1,302 | ✓ | ✓ | ✓

Despite the importance of part-level perception and reasoning for robot manipulation, existing robot manipulation benchmarks on instruction following lack comprehensive integration of part-level semantics in both task instructions and object ground-truth annotations [e.g., 16, 17, 24, 46, 50]. These benchmarks focus on object instance-level manipulation tasks but do not include fine-grained, part-level manipulation tasks like the example in Figure 1.
There have been recent benchmarks that evaluate fine-grained, part-level manipulation tasks, but they either lack language instructions [e.g., 27, 9] or do not provide training data for policy learning [e.g., 7]. To address these gaps, we introduce PartInstruct, the first large-scale fine-grained robot manipulation benchmark for vision-language policy learning that incorporates part-level semantics. Our core idea is to develop part-level skills that enable robots to perform complex, fine-grained object manipulation tasks, including those requiring long-horizon motion plans. We developed a robot manipulation simulator for part-level instruction following tasks, PartGym. Built upon the PartGym simulator, our PartInstruct benchmark supports both training and evaluating models on part-level manipulation tasks. Specifically, we provide a large set of 3D assets of everyday objects richly annotated with part-level information. Using these object assets and detailed annotations, we created a large-scale training dataset of expert demonstrations. Each demonstration is paired with a task instruction as well as a chain of base skill instructions (such as touching or grasping an object part) necessary for performing the overall task. This dataset allows training models both for long-horizon manipulation guided by task instructions and for base manipulation skills guided by skill instructions. Additionally, we developed a comprehensive evaluation suite consisting of five test sets, each corresponding
to a different type of generalization test. Together, these tests assess how well a learned policy performs in unseen scenarios, including new states, objects, and tasks. We compare PartInstruct with several existing table-top manipulation benchmarks in Table I.

We evaluated multiple state-of-the-art vision-language policy learning methods designed for language-guided robot manipulation. We also combined recent learning-based low-level action policy planning models and VLM-based high-level task planners to create strong bi-level planning baselines for fine-grained manipulation tasks, which explicitly reason about the object parts relevant to a task and how to interact with them to achieve the final goal. Our experimental results demonstrate that state-of-the-art methods still struggle with complex fine-grained manipulation tasks. We also show that visual representations based on robust part-level 3D perception can significantly improve model performance. These results help reveal the fundamental building blocks for fine-grained task manipulation.

In summary, our main contributions include: (1) the first part-level instruction following benchmark for both training and evaluating fine-grained robot manipulation models' capacity for part-level grounding, reasoning, and planning; (2) a large training dataset with diverse assets and detailed annotations; and (3) a comprehensive evaluation of state-of-the-art vision-language policy learning and bi-level planning baselines, revealing limitations of current robot manipulation models.

II. RELATED WORK

A. Instruction Following Benchmarks for Table-Top Robot Manipulation

Early benchmarks in robot manipulation primarily concentrated on object-level and object-scene interactions without delving into the manipulation of specific object parts. Notable examples include CALVIN [24], RLbench [16], VIMAbench [17], and LoHoRavens [50].
These benchmarks typically involve tasks such as object placement, scene arrangement, and basic interaction with objects in their entirety. For instance, CALVIN incorporates spatial semantics but lacks explicit part-level semantics, treating components like a "door handle" as standalone objects rather than parts of a larger entity. This limitation restricts the granularity of instructions and the complexity of manipulation tasks that can be evaluated.

Figure 2: Example tasks and expert demonstrations in the dataset. [Figure instruction labels: "Rotate the mug for its handle to face the opposite direction"; "Reorient the front part of the mug to face right"; "Grasp the mug by its back"; "Reorient the right part of the mug to face back"; "Grasp the handle of the kettle"; "Move to the right"; "Release gripper"; "Push the bucket's left part, then release"; "Grasp the kettle by its handle"; "Touch the bucket by its left part"; "Grasp the mug by its left part"; "Release gripper"] Each task is defined by a task instruction. Each demonstration is annotated with a chain of base skills and the corresponding skill instructions (the instructions following the task instructions). Specifically, in this figure, the demonstrations for the three tasks have 1, 3, and 5 annotated skill instructions, respectively.

To bridge this gap, several benchmarks have introduced object part manipulation, including ManiSkill [27], PartManip [9], and Open6DOR [7]. These benchmarks introduce tasks that require finer control and understanding of object components. ManiSkill extends manipulation tasks to include interactions with articulated objects, whereas PartManip focuses explicitly on part-level manipulation within a structured environment. Notably, Open6DOR is the only benchmark identified that incorporates spatial
semantic part-level instructions. However, it does not support policy learning; instead, it outputs final goal positions and orientations, relying on an oracle planner to plan the intermediate actions.

There have been recent approaches supporting part-level manipulation, such as Composable Part-based Manipulation (CPM) [22], RoboPoint [47], and SAGE [10]. RoboPoint leverages point-based representations to facilitate precise part interactions, but focuses more on spatial relationships. SAGE employs semantic grasping techniques to enhance manipulation accuracy, mainly for articulated objects. These methods underscore the importance of integrating detailed object part information to achieve more sophisticated manipulation.

B. Vision-Language Policies for Robot Manipulation

The integration of vision and language in robot manipulation has given rise to various policy frameworks designed to interpret and execute instructions. Generalist approaches such as RT-1 [3], OpenVLA [19], and Octo [41] strive to create versatile policies capable of handling a wide range of tasks by leveraging large-scale vision-language models. These models are pretrained on large-scale datasets, enabling them to leverage extensive vision-language knowledge to interpret natural language instructions and translate them into actionable manipulation strategies. Key-pose based manipulation methods, such as PerAct [37], Act3D [11], and the RVT series [12][13], focus on identifying and executing key poses that align with the desired manipulation objectives. These approaches typically involve detecting pivotal positions or configurations that the robot must achieve to successfully complete a task, thereby simplifying the policy learning process. Additionally, frameworks like DP [5] and DP3 [49] formulate visuomotor robot policies using Denoising Diffusion Probabilistic Models (DDPM), enabling these policies to capture multimodal action distributions and generate high-dimensional action sequences.
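To make the DDPM-based policy idea concrete, the following toy reverse-sampling loop denoises a Gaussian action sequence step by step. The noise-prediction network here is a stand-in, and this is a generic DDPM sketch, not the actual DP/DP3 implementation:

```python
import numpy as np

def ddpm_sample(eps_model, horizon, dim, betas, rng):
    """Toy DDPM reverse process: start from Gaussian noise and denoise
    an action sequence of shape (horizon, dim) step by step."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    a = rng.standard_normal((horizon, dim))          # A_T ~ N(0, I)
    for t in range(len(betas) - 1, -1, -1):
        eps = eps_model(a, t)                        # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
        mean = (a - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(a.shape) if t > 0 else 0.0
        a = mean + np.sqrt(betas[t]) * noise
    return a

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)
zero_model = lambda a, t: np.zeros_like(a)           # stand-in network
actions = ddpm_sample(zero_model, horizon=8, dim=7, betas=betas, rng=rng)
```

In a real diffusion policy, `eps_model` would be a learned network conditioned on visual observations and the language instruction.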
By leveraging the strengths of generative models, these methods can predict expressive and flexible robot actions.

C. Robot Planning with LLMs and VLMs

The integration of Large Language Models (LLMs) and Vision Language Models (VLMs) into embodied planning has revolutionized the capabilities of robotic systems by enhancing their understanding, reasoning, and execution of complex tasks. For instance, TaPA [44] and LLM-Planner [38] focus on leveraging the contextual and generative capabilities of LLMs to decompose high-level instructions into actionable sub-tasks. SayCan [1] presents a framework that anchors linguistic instructions in the physical affordances of objects. By aligning language understanding with the robot's physical capabilities, it ensures that generated actions are both feasible and contextually appropriate. These approaches enable robots to interpret complex, multi-step instructions by breaking them down into manageable components, thereby facilitating more coherent and structured action planning.

III. PARTINSTRUCT BENCHMARK

A. Problem Setup

We define an object part as a geometric sub-component of an object that is either functionally manipulable (e.g., handle) or spatially distinct (e.g., front).

Table II: Example task instructions and goal states. Row A corresponds to the task illustrated in Figure 1, while rows B to D correspond to the three tasks shown in Figure 2.
  A. "Rotate the [part] of the object to face [direction] while lifting it."
     Goal: GRASPING(obj), FACING(part, dir), AT_POSITION(obj, POS_INIT_OBJ + VEC(UP))
  B. "Grasp the object by the [part]."
     Goal: GRASPING(gripper, part), ON(obj, table)
  C. "Move the object to [direction] by pushing it at the [part], then free it."
     Goal: Phase 1: TOUCHING(part), AT_POSITION(obj, POS_INIT_OBJ + VEC(dir));
           Phase 2: GRIPPER_OPEN, MIN_DISTANCE(gripper, obj)
  D. "Rotate the [part] of the object to face the opposite direction."
     Goal: FACING(part, ∼DIR_INIT(part)), ON(obj, table)

Table III: Definitions of base skills.
  grasp_obj(obj, part): Robot grasps obj at part.
  move_gripper(dir, dis=UNIT, grasping=false): Robot moves the gripper along dir by dis.
  rotate_obj(obj, part, dir): Robot rotates obj such that part is facing dir.
  touch_obj(obj, part): Robot touches obj at part.
  release_gripper(obj): Robot releases the gripper and moves away from obj.

As shown in Figure 1, a natural language instruction I_task describes a part-level instruction-following task if it requires that a robot perform a fine-grained manipulation in which the robot must interact with a list of object parts in a certain manner to achieve the intended goal g. Critically, the relevant object parts and how the robot needs to interact with them are often not explicitly described in the instructions. Thus, the robot must learn to reason about relevant parts and plan how to manipulate them to perform the task successfully. To define g, we first establish a set of goal predicates that specify the states of the object, its parts, the robot's end effector, and their relationships. For example, ON(obj, part, surface) represents physical contact between an object part and a given surface; FACING(obj, part, dir) indicates the orientation of an object part from a third-person perspective; and GRASPING(obj, part) denotes a "grasp" interaction between the object part and the robot's end effector. Given these goal predicates, each task goal is defined by a set of goal predicates. Examples of tasks are presented in Table II. For instance, in the task illustrated in Figure 1, the goal is represented by the predicate set {GRASPING(bottle, ∼cap), FACING(bottle, cap, front), AT_POSITION(bottle, INIT_POS + VEC(UP))}, where ∼cap is any part other than the cap.
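The goal formalism above can be made concrete with a small sketch: a goal is a set of predicates, and it is satisfied when every goal predicate holds in the current state. The tuple encoding and helper below are our own illustration, not the benchmark's actual implementation.

```python
# Illustrative encoding of the Figure 1 task goal as a set of goal
# predicates. Predicates are modeled as plain tuples; this is a sketch of
# the formalism described in the text, not the benchmark's actual code.

goal = {
    ("GRASPING", "bottle", "~cap"),                 # grasp any part other than the cap
    ("FACING", "bottle", "cap", "front"),
    ("AT_POSITION", "bottle", "INIT_POS+VEC(UP)"),
}

def goal_satisfied(goal, state):
    """A goal holds when every goal predicate is true in the current state."""
    return goal.issubset(state)

# Extra facts in the state do not prevent goal satisfaction.
state = set(goal) | {("ON", "bottle", "table")}
assert goal_satisfied(goal, state)
assert not goal_satisfied(goal, {("ON", "bottle", "table")})
```

Representing goals as predicate sets also makes multi-phase tasks (such as row C in Table II) easy to express: each phase is simply its own predicate set, checked in order.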
Note that some tasks consist of multiple phases, where the next phase can only begin after the previous one is completed, as the order of interactions is crucial for these tasks. For full task definitions, refer to Appendix A3. To develop an embodied agent capable of executing tasks defined by g, we hypothesize that it is beneficial to start with a set of base skills that can be combined to handle a wide range of fine-grained manipulation tasks. In particular, we consider five types of base skills: grasp_part, touch_part, rotate_obj, move_gripper, and release_gripper. As detailed in Table III and Appendix A2, each skill is parameterized by (1) the object part it interacts with and the type of interaction (e.g., touching or grasping), (2) the degree of rotation required for the part, and (3) the distance and direction in which the gripper or object should be moved. This information is summarized in a skill instruction I_skill associated with that skill. As illustrated in Figure 2, a task given by an overall task instruction can be decomposed into a sequence of base skill executions, each described by a skill instruction. For example, the second task shown in Figure 2, "Push the bucket's left part, then release", involves three skill executions.
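Such a decomposition can be sketched as a short program that maps a task to its base-skill sequence. The helper below and its tuple format are our own illustration of the idea, not the benchmark's API; skill names follow the base-skill set listed above.

```python
# Hypothetical sketch: decomposing the Table II row A task, "Rotate the
# [part] of the object to face [direction] while lifting it", into base
# skills. The helper and its return format are illustrative only.

def rotate_and_lift(obj, part, face_dir):
    """Base-skill sequence: grasp, rotate the part to face_dir, then lift."""
    return [
        ("grasp_part", {"obj": obj, "part": part}),
        ("rotate_obj", {"obj": obj, "part": part, "dir": face_dir}),
        ("move_gripper", {"dir": "UP", "dis": "UNIT", "grasping": True}),
    ]

plan = rotate_and_lift("bottle", "cap", "front")
```

Each tuple in the returned plan corresponds to one skill instruction I_skill, which is what the hierarchical planners described later operate over.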
To “push” the bucket’s left part, the robot must first touch the left side of the bucket by executing touch_part(bucket, left), then move the end effector to the right via move_gripper(right). Following the “push” action, the robot executes release_gripper() to complete the task. We hypothesize that structuring fine-grained manipulation tasks into sequences of base skills can facilitate the training of hierarchical planning models that compose complex plans from base skills for long-horizon tasks that an end-to-end vision-language policy would struggle with.

B. Simulation Environment

To train and evaluate language-guided part-level manipulation models, we introduce PartGym, a realistic robot simulator for fine-grained manipulation tasks requiring part-level understanding. PartGym provides (1) rich 3D assets of everyday objects, (2) part-level 3D ground-truth annotations, and (3) a large task set for fine-grained robot manipulation with natural language instructions. We use PyBullet [6] as the backbone physics engine to simulate the physical interactions between a robot arm and different objects and their parts. Specifically, the environment includes a 7-DoF Franka Emika Panda robot with a two-finger parallel gripper.

Observations. As shown in Figure 3, we provide multimodal observations for the robot, including RGB images, depth maps, and point clouds. Additionally, we provide object and part annotations. Lastly, proprioceptive robot states, such as joint states and end-effector poses, are also available as part of the observations.

Action Space. The Panda robot takes a 7D action vector at each step. The first 6 dimensions represent the end-effector’s Cartesian pose, parameterized by a 3D coordinate as well as the roll, pitch, and yaw angles. The final dimension controls the gripper’s position. We provide more details about PartGym in Appendix B.
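As a concrete illustration of this action space, a single step's action can be packed as follows. The helper name and the gripper-value convention are our own assumptions for the sketch, not PartGym's actual API.

```python
# Illustrative packing of the 7D action described above: a 3D position,
# roll/pitch/yaw angles, and a final gripper dimension. This helper is a
# sketch, not PartGym's API.

def pack_action(x, y, z, roll, pitch, yaw, gripper):
    """Return the 7D action vector [x, y, z, roll, pitch, yaw, gripper]."""
    action = [x, y, z, roll, pitch, yaw, gripper]
    assert len(action) == 7
    return action

# Move the end effector to (0.4, 0.0, 0.3) with a half-turn roll; the
# gripper command of 1.0 is an assumed convention for this sketch.
a = pack_action(0.4, 0.0, 0.3, 3.14159, 0.0, 0.0, 1.0)
```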
Figure 3: PartGym supports multimodal observations, including RGB images, depth maps, and scene point clouds (PCDs). It also provides object and part annotations, including object segmentations, 2D part segmentations for each object part (part masks), 3D object instance segmentations (obj PCDs), and 3D part segmentations on point clouds (part PCDs) for each object.

Figure 4: Annotated parts grouped by object categories. The horizontal axis stands for different part names, and the vertical axis gives different object categories. The value in the heatmap indicates the frequency of each part for an object category in PartInstruct. A darker color shows a higher frequency. Spatial part names are highlighted in light gray to distinguish them from semantic part names.

C. Dataset

1) PartInstruct Dataset: Built upon the PartNet-Mobility dataset [45, 26, 4], PartInstruct contains 14 categories of table-top everyday objects annotated with different part labels. In total, there are 513 object instances and 4,653 part labels. Figure 5 shows the object instance distribution across object categories. We also show the distribution of annotated parts for each object category in Figure 4. Figure 6 illustrates the visual diversity of objects and parts in PartInstruct. Each part of an object is unique in terms of its shape, size, texture, and position on the object. Objects of the same class also have different part compositions. For example, there are 7 types of part compositions for bottles, including "(body, closure, neck)", "(body, handle, lid, neck)", and "(body, handle, mouth)". Leveraging the richly annotated objects and parts, we procedurally generate a large collection of demonstrations for vision-language imitation learning.

Figure 5: Number of object instances in each object category.

PartInstruct includes 10,000 demonstrations for training and over 1,800 annotated episodes for evaluation. See Figure 2 for several example episodes in PartInstruct.
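A demonstration episode bundles observations, an expert trajectory, and instructions at two granularities. The record below is a minimal sketch of that structure; the field names are our own, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import Any, Dict, List

# Minimal sketch of an episode record as described in the text; field
# names are illustrative, not the dataset's actual schema.

@dataclass
class Episode:
    observations: List[Dict[str, Any]]   # RGB, depth, point clouds, robot state per step
    actions: List[List[float]]           # expert 7D action trajectory
    task_instruction: str                # overall instruction I_task
    skill_instructions: List[str]        # per-subgoal instructions I_skill (training only)

ep = Episode(
    observations=[],
    actions=[],
    task_instruction="Grasp the mug by its handle, then lift it",
    skill_instructions=["Grasp the mug by its handle", "Move the gripper up"],
)
```

Keeping the skill instructions as a separate field mirrors the evaluation protocol: at test time only `task_instruction` is exposed to the model.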
Each episode contains an observation set with different modalities, an expert action trajectory, a natural language description of the overall task, referred to as the task instruction I_task, as well as a sequence of skill instructions I_skill that specify the part-level manipulation subgoals sg required to complete the task. Each skill instruction contains zero or one object part that the robot is manipulating. It is important to note that skill instructions are provided only during model training. For evaluation, models receive only the overall task instruction as the language input.

2) Task Categories: PartInstruct has 16 task categories, including 10 seen categories for training and 6 unseen categories for testing. Each category is defined by tasks that require the robot to execute a specific combination or sequence of part-level interactions. Some categories require the agent to physically interact with a specific part of the object, for example, "Hold [part] of the object and shift it in [direction]." For such tasks, the agent must ground the part mentioned in the task instruction to specific visual representations and predict the actions needed to directly manipulate that part. Other task categories require the agent to change the state of a part, for example, "Rotate the object
such that [part] is facing [direction]."

Figure 6: Representative object assets from PartInstruct.

To perform these tasks, the model needs not only to know the location of the part but also to infer its final state. The agent must manipulate some part of the object to achieve that state, even when the part being directly manipulated differs from the part mentioned in the instruction. In the 5 test task categories, we have also designed more challenging part-level manipulation tasks. One focus is on long-horizon tasks that require the manipulation of multiple parts in sequence. For instance, "Push the object toward [direction] while touching [part], lift the object by holding [part], then rotate [part] to face [direction]." Another focus is on tasks that demand more complex reasoning about parts, the environment, and their spatial relationships. For example, consider the task, "Rotate [part] of the object on the table so that it points in the opposite direction." Here, instead of explicitly naming the final state (e.g., a specific direction), the task requires the robot to have additional knowledge about the current direction of a certain part, identify its opposite direction, and manipulate the object so that the part points in that direction.

3) Demonstration Generation: Each demonstration is a sequential execution of oracle high-level plans over the base skills defined in Table III. To generate the trajectories in the demonstrations, we detect the grasping point using [2] and then leverage a sampling-based motion planner, BiRRT [20], to generate the motion plan for each base skill. To generate the task instruction for each task, we first create template-based instructions (Appendix A3). To enrich the language diversity, we prompt GPT-4o with the template-based instruction, the task definition, and the object metadata to paraphrase the task instruction.
This yields between 3 and 8 natural-language variants per template, greatly increasing the language diversity of the dataset. For each base skill, we follow the template in Table III to generate skill instructions.

4) Evaluation Protocol: As defined in Section III-C, each part-level skill has a binary success criterion. Completing the entire task means the agent completes every single skill defined in the skill chain. To systematically evaluate the performance of a learned policy, we designed a five-level evaluation protocol (see Table IV). Each test set evaluates a policy under one type of generalization condition. Specifically, the test sets focus on generalizability over object initial states (OS), novel object instances (OI), novel part combinations in the same task type (TP), novel task categories (TC), and novel object categories (OC). Detailed visualizations can be viewed in Appendix B.

Table IV: Summary of the five test sets and the type of generalization each one addresses.
  Test 1 (OS): Novel object positions and rotations
  Test 2 (OI): Novel object instances within the same category
  Test 3 (TP): Novel part combinations within the same task categories
  Test 4 (TC): Novel part-level manipulation task categories
  Test 5 (OC): Novel object categories

IV. EXPERIMENTS

To achieve general-purpose robot manipulation, there
have been two common types of approaches: (1) end-to-end policy learning, which directly maps observations and instructions to actions (e.g., [48, 8, 23, 32, 41, 11, 13, 5, 49]), and (2) bi-level planning, which first generates high-level plans (typically subgoals) and then computes and executes the low-level action plans to achieve those subgoals [44, 38, 1, 10, 43]. In our benchmark, we evaluate both types of approaches.

A. End-to-End Policy Learning

1) Baselines: We evaluate the following state-of-the-art end-to-end robot manipulation policy learning methods:

Octo [41] is a transformer-based generalist robot policy pretrained on diverse large-scale robotic episodes. At each time step, the model outputs an action vector that contains the translation and rotation of the robot end effector, along with one dimension indicating whether the gripper is open or closed.

Act3D [11] is a 3D feature field transformer for multi-task 6-DoF robotic manipulation. Unlike Octo, it employs a keyframe-based approach to complete tasks; the predicted key poses are then executed using a motion planner.

RVT2 [13] is a multi-task transformer-based 3D manipulation model. Similar to Act3D, it also applies keyframe-based manipulation.

3D Diffuser Actor (3D-DA) [18] trains a policy that is jointly conditioned on a tokenized 3D scene, proprioceptive feedback, and a natural-language instruction. It uses diffusion to generate 3D pose trajectories.

Diffusion Policy (DP) [5] represents a visuomotor policy as a conditional denoising diffusion process in the action space, which allows it to effectively handle multimodal action distributions and high-dimensional action sequences.

3D Diffusion Policy (DP3) [49] combines 3D visual representations with diffusion-based policies, leveraging compact 3D point cloud data for efficient and generalizable visuomotor policy learning.

Note that the original DP and DP3 models do not support language instruction inputs.
To fit the setup of PartInstruct, RGB ImagePoint Cloud Robot States Part Seg Env Low-Level Action Policy Pour out the water in the mug, then put it back on table Grasp the mug by its handle Skill InstructionHuman Instruction High-Level Task Planner Actions RGB+Robot State Selected Obs ObservationFigure 7: Overview of the bi-level planning framework. The High-Level Task Planner generates a skill instruction as a subgoal for the low-level action policy based on the task instruction and the current observation. Given the subgoal described in the skill instruction, the low-level action policy then generates actions for achieving that subgoal. The high-level task planner updates the skill instruction once every nsteps, while the low-level action policy updates the action at every step. End-to-End Bi-Level05101520253035Success Rate (%)Average Success Rate Octo Act3D RVT2 3D-DA DP DP3 CaP+Oracle Motion Planner GPT4o+DP3-S Gemini-1.5 Flash+DP3-S Gemini-2.0 Flash+DP3-S Figure 8: Success Rates of all baselines. The left group represents end-to-end learning policies, while the right group corresponds to bi-level planning models. Error bars denote the standard errors calculated across all evaluation rollouts. we modify them to incorporate language inputs. Specifically, we use a pre-trained T5 language encoder to get the language embedding [31]. The embedding is then concatenated with other features and used as the observation condition for the denoising diffusion process. We trained the baselines DP, DP3, Act3D, RVT2, 3D-DA from
scratch and fine-tuned the pretrained baseline Octo on our training data. Our hypothesis is that fine-tuning Octo will improve its performance on our benchmark by leveraging its large-scale pretraining on Open X-Embodiment [28]. The implementation details can be found in Appendix D.

2) Results: To evaluate each learned policy, we follow the common practice outlined in recent works [17, 5, 49]. Specifically, we select the top two checkpoints for each baseline and conduct approximately 20 rollouts per object class across all test splits, resulting in over 1,000 rollouts per baseline. We report the Success Rate (SR, %) for all end-to-end policy baselines in the left part of Figure 8 and in the top block of Table V. The low success rate across all baselines suggests that it remains challenging to train an end-to-end generalist policy for fine-grained object manipulation tasks given part-level instructions. The baselines particularly struggle with long-horizon tasks (Test 4) and with generalizing to unseen object types (Test 5).

B. Bi-level Planning

1) Baselines: We hypothesize that it would be easier to train action policies with skill instruction annotations compared to directly training a policy for the whole task. Such low-level action policies can then be combined with a high-level planner that generates skill instructions given a task instruction to solve the manipulation task intended by the user. To evaluate the efficacy of bi-level planning on our benchmark, we extend common bi-level planning frameworks (e.g., [10]) as shown in Figure 7. Specifically, the bi-level planner consists of two modules: (1) a high-level task planner and (2) a low-level action policy. We describe each module below.

High-level Task Planner. We leverage a VLM for high-level task planning. At step t, we prompt the VLM with the task instruction I_task to generate the skill instruction for the current step as the subgoal sg_t, i.e., π_VLM(sg_t | o_t, I_task), where o_t is the observation at step t.
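The interplay between the two modules reduces to a simple control loop: the planner refreshes the subgoal every n steps, while the policy acts at every step. The sketch below illustrates this loop under assumed stand-in interfaces; none of the names are the paper's actual models or environment API.

```python
# Schematic control loop for the bi-level planner in Figure 7. The
# planner, policy, and environment here are stand-ins, not the paper's
# actual models or PartGym's API.

def run_bilevel_episode(env, vlm_planner, low_level_policy,
                        task_instruction, n=50, max_steps=500):
    obs = env.reset()
    subgoal = None
    for t in range(max_steps):
        if t % n == 0:                               # refresh subgoal every n steps
            subgoal = vlm_planner(obs, task_instruction)
        action = low_level_policy(obs, subgoal)      # pi(a_t | o_t, sg_t)
        obs, done = env.step(action)
        if done:
            break
    return obs

# Toy illustration with a dummy environment that finishes after 5 steps.
class DummyEnv:
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, self.t >= 5

final_obs = run_bilevel_episode(
    DummyEnv(),
    vlm_planner=lambda o, task: "grasp the mug by its handle",
    low_level_policy=lambda o, sg: [0.0] * 7,
    task_instruction="pour out the water in the mug, then put it back",
    n=2, max_steps=10)
```

Fixing the subgoal-refresh period n trades responsiveness for planner cost; the text below explains why a learned skill-completion detector is not yet reliable enough to replace this fixed schedule.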
We constrain the skill instructions to the space of base skills defined in Section III-C and Appendix A2, which is also specified in the prompt for the VLM. To facilitate decision-making, we also provide additional observations when prompting the VLM, such as RGB images of the workspace, robot states, etc. See Appendix D3 for the detailed prompt. sg_t will be passed to the low-level action policy for execution and will be updated every n steps, where n is estimated from the typical length of a skill execution in the training set. It is worth noting that we could potentially incorporate an additional VLM to assess the completion of the current skill and trigger updates to the skill instruction. However, based on our study [42, 25] and preliminary experiments, current VLMs are not yet robust enough to reliably estimate this from multi-modal inputs. We evaluate GPT-4o [15], Gemini-1.5 Flash [40], and Gemini-2.0 Flash [40] for the high-level task planner.

Table V: Success Rates (%) of baselines across five test sets. Baselines are categorized into end-to-end policy learning and bi-level planning. Standard errors are reported alongside each value. The best-performing results are highlighted in bold.
  Baselines                      Test 1 (OS)  Test 2 (OI)  Test 3 (TP)  Test 4 (TC)  Test 5 (OC)  All
  End-to-End Learning:
    Octo                         1.82±1.3     0.0          0.91±0.1     0.0          3.33±3.2     1.11±1.5
    Act3D                        6.25±1.8     5.68±1.7     4.55±1.6     0.0          2.08±2.1     3.88±1.8
    RVT2                         4.55±2.0     4.55±2.0     6.36±2.3     0.91±0.9     3.33±3.3     4.04±2.1
    3D-DA                        8.08±2.7     5.05±2.2     4.04±1.9     0.0          3.70±3.6     4.26±1.0
    DP                           7.27±1.8     8.64±1.9     8.18±1.8     3.75±2.1     6.67±3.2     5.96±2.2
    DP3                          23.18±2.8    23.18±2.8    18.18±2.6    7.73±1.8     6.67±3.2     15.40±2.6
  Bi-Level Planning:
    CaP + Oracle Motion Planner  22.58±4.9    27.93±9.1    25.95±11.0   6.99±12.2    19.38±9.8    21.90±6.2
    GPT4o+DP3-S                  33.64±3.2    32.73±3.2    25.91±3.0    10.00±2.0    23.33±5.5    25.12±3.4
    Gemini-1.5 Flash+DP3-S       30.48±4.5    25.45±4.2    27.62±4.4    1.82±1.8     26.67±8.1    22.41±4.6
    Gemini-2.0 Flash+DP3-S       40.58±4.2    34.56±4.1    33.33±4.1    11.90±2.9    38.24±8.3    31.72±4.7

Table VI: Performance of low-level action policies when paired with ground-truth high-level plans.
  Baselines  OS     OI     TP     TC    OC     All
  Octo       3.64   5.50   5.90   0.00  6.67   4.34
  Act3D      0.45   2.80   3.33   0.00  0.00   1.32
  3D-DA      7.27   5.45   3.64   0.00  6.67   4.61
  RVT2       1.82   3.64   1.82   0.00  3.33   1.91
  DP         6.47   11.42  16.53  0.06  6.94   8.28
  DP3        19.34  13.61  17.89  0.00  12.08  12.58
  DP-S       20.00  16.36  25.45  0.00  6.67   13.70
  DP3-S      23.64  29.09  23.64  1.82  26.67  20.97

Low-level Action Policy. The low-level action policy is a vision-language policy that generates low-level manipulation actions based on a subgoal and the current observation, i.e., π(a_t | o_t, sg_t), where a_t is the action at step t. We can train such policies using the skill instruction annotations of the training demonstrations in our dataset; in particular, the end-to-end policy learning models evaluated in Section IV-A can be trained on skill instructions to serve as low-level action policies. We hypothesize that an explicit visual understanding of object parts can facilitate part-level instruction grounding. It is difficult to visualize all the object parts due to occlusion.
However, in our tasks, the robot needs to interact with at most one part for the subgoal sg_t defined in each skill instruction, making it possible to give additional vision inputs about the target object part to the low-level action policies. We select the best-performing end-to-end policy learning baselines, DP and DP3, to train low-level action policies with object part segmentation as part of the input. For DP, we provide a part segmentation mask as an extra vision input. There are now general-purpose segmentation models such as Segment Anything Model 2 (SAM 2) [33]. We adopt the approach of Grounded-SAM-2 [35] to leverage SAM 2 to segment and track object parts. Specifically, given an RGB image and a language input, we first utilize a VLM, e.g., Florence-2 [34], to ground the language onto the target part, then prompt SAM 2 to generate segmentation masks and track the object part in real time. At each step, we add the obtained part segmentation mask as an extra channel on top of the original RGB, making the input a 4-channel image. The image is then encoded using a ResNet18 [14] encoder before being fed into the DP model. We refer to this model as DP-S. For DP3, we use a part point cloud as
an additional vision input. Since there has not been a general-purpose part segmentation model for 3D point clouds [39, 36], we obtain the 3D part segmentation using a lift-to-3D method. In detail, we first apply the same method as in DP-S to obtain a 2D segmentation mask tracked using SAM 2. We then lift the 2D mask into 3D with the depth map, using the pinhole camera model and the camera intrinsics. To represent a 3D part mask, we append a binary mask channel to the original point cloud observation. This modified point cloud is encoded using an MLP, following the approach described in the original implementation [49]. Additionally, as outlined in the original work, the point cloud is cropped to match the minimum workspace, which includes only the robot arm and the object. We refer to this action policy as DP3-S. We train the low-level action policies using the training demonstrations and the skill instruction annotations, where each demonstration is truncated into clips corresponding to individual skill instructions. The implementation details of the bi-level planning baselines can be found in Appendix D3. Additionally, we evaluate Code-as-Policies (CaP) [21] as an alternative bi-level planning framework. CaP leverages an LLM to compose API calls to generate robot policy code. In our experiment, we define the API calls as the skill primitives implemented by the oracle motion planner described in Section III-C3. We use GPT-4o as the LLM.

2) Results: We adopt the same evaluation protocol described in Section IV-A2 for the bi-level planning baselines. To evaluate different low-level action policies without considering the effect of high-level task planners, we first pair each low-level action policy with ground-truth skill instructions. As shown in Table VI, DP3-S has the highest success rate across all test sets. Given this result, we adopt DP3-S as the low-level action policy and pair it with different high-level planners to create the bi-level planning baselines.
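The lift-to-3D step used by DP3-S reduces to standard pinhole back-projection of the masked pixels. The sketch below shows this computation; the intrinsics values are illustrative, not the simulator's actual camera parameters.

```python
import numpy as np

# Minimal sketch of the lift-to-3D step described above: back-project the
# pixels of a 2D part mask into 3D camera-frame points using the depth map
# and the pinhole camera model. Intrinsics (fx, fy, cx, cy) are
# illustrative values, not PartGym's actual camera parameters.

def lift_mask_to_points(mask, depth, fx, fy, cx, cy):
    """mask: (H, W) bool; depth: (H, W) metric depth. Returns (N, 3) points."""
    v, u = np.nonzero(mask)          # pixel coordinates of the masked part
    z = depth[v, u]
    x = (u - cx) * z / fx            # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# One masked pixel at (u=2, v=1), 2 m away, with toy intrinsics.
mask = np.zeros((4, 4), dtype=bool)
mask[1, 2] = True
depth = np.full((4, 4), 2.0)
pts = lift_mask_to_points(mask, depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

The resulting points (or, equivalently, a binary membership channel over the observed point cloud) are what get appended to the DP3-S input.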
The results are reported in the right part of Figure 8 and the bottom block of Table V. We can see from the results that the bi-level planning baselines outperform the end-to-end learning baselines in every test set by a large margin. This demonstrates the effectiveness of training a separate low-level action policy for base skills and using a VLM as a high-level task planner. Among all high-level planning baselines, Gemini-2.0 Flash paired with DP3-S performs the best. However, bi-level planning still struggles with many tasks, particularly when the tasks require longer chains of base skills (e.g., Test 4).

Table VII: Impact of high-level task planners on bi-level planning models. We pair each high-level task planner with an oracle motion planner to execute the skill instructions.
  Baselines         OS     OI     TP     TC    OC     All
  Gemini-1.5 Flash  20.41  19.07  19.36  0.15  29.24  17.65
  Gemini-2.0 Flash  27.73  25.94  26.75  0.00  32.70  22.62
  GPT4o             20.08  18.87  14.87  0.79  22.45  17.94

In these longer-horizon tasks, there is a higher chance for the high-level task planner to make mistakes. Errors from the low-level action policy are also more
likely to accumulate.

C. Ablation Studies

In Section IV-B, we demonstrated that bi-level planning models with low-level action policies informed by part segmentation perform significantly better than state-of-the-art end-to-end policies. To evaluate the effect of each component of the high-level planning models, we conduct the following ablation studies.

1) Effects of High-level Planners: To evaluate the effectiveness of different VLMs as high-level planners on overall task performance, we construct bi-level planners by combining each VLM with an oracle motion planner that performs the skill instructions generated by the VLM. Specifically, we use the same oracle planner used for generating the training demonstrations. Unlike the full version of the bi-level planning baselines, here the oracle planner can decide when the subgoal for a skill instruction is achieved. Thus, instead of updating the skill instruction at a fixed frequency, the high-level planner generates the next skill instruction once the oracle planner has reached the subgoal of the current skill instruction. We report the results in Table VII. Interestingly, compared with the results in Table V, every bi-level planner that uses a VLM for high-level task planning performs worse when paired with an oracle motion planner. This is likely because the oracle motion planner has to finish the entire execution of a subgoal, even if it is incorrect. In such cases, VLM-based high-level task planners struggle to recover from earlier mistakes. In contrast, when we use a learned low-level action policy, the same instruction is only executed for a few steps (as described in Section IV-B). Consequently, the VLM has a better chance of correcting mistaken instructions in subsequent steps.
2) Effects of Different Visual Inputs: To examine the impact of different visual representations, particularly 2D and 3D part masks, on policy learning, we conduct another ablation study in which we evaluate the low-level action policies with various visual inputs. Specifically, in addition to DP-S_SAM2 and DP3-S_SAM2, we also trained low-level action policies using ground-truth mask information, DP-S_GT and DP3-S_GT, as well as the vanilla models without any part-level mask, DP and DP3. The results are summarized in Table VIII.

Table VIII: Impact of various vision inputs on low-level action policies. We pair low-level action policies using different vision inputs with ground-truth high-level plans.
  Baselines   OS     OI     TP     TC    OC     All
  DP          6.47   11.42  16.53  0.06  6.94   8.28
  DP-S_GT     15.45  20.91  26.36  0.91  13.33  15.39
  DP-S_SAM2   20.00  16.36  25.45  0.00  6.67   13.70
  DP3         19.34  13.61  17.89  0.00  12.08  12.98
  DP3-S_GT    45.45  36.36  36.36  1.82  40.00  32.00
  DP3-S_SAM2  23.64  29.09  23.64  1.82  26.67  20.97

With part segmentations, either 2D or 3D, the low-level action policies achieve significantly better performance. The performance gap between the policies trained with ground-truth part segmentation and SAM 2-based part segmentation also suggests that there is room for improvement in both the VLM's ability to ground fine-grained parts and
in the capacity of state-of-the-art segmentation methods to accurately segment object parts.

V. DISCUSSION

How well can current vision-language policies perform in our part-level manipulation tasks? The experimental results on our benchmark systematically reveal the performance of current vision-language policies on our part-level manipulation tasks. Specifically, we find that vision-language policies perform adequately on object-level tasks but struggle with precise part-level grounding. While they can follow simple part-based instructions such as “grasp” or “touch,” instructions like “touch the left part” introduce fine-grained spatial reasoning that these models have not fully mastered. We observed that these policies can learn the broad action of “touch” but neglect the exact location of “left.” Second, zero-shot inference using pretrained generalist vision-language policies on our benchmark fails to achieve any success (see Appendix D2). This is likely due to the absence of part-level skills and detailed spatial reasoning in their training data: current large-scale robotic datasets do not adequately capture the detailed spatial and part-specific annotations required for fine-grained part-level manipulation. This underscores the value of our training dataset and PartGym simulator for training part-level manipulation policies, as we provide detailed part annotations as well as fine-grained manipulation tasks that require part grounding and reasoning.

Why is part-level instruction following challenging for vision-language policy learning? Our experimental results demonstrate that the part-level instruction following tasks in our PartInstruct benchmark remain extremely difficult for state-of-the-art end-to-end vision-language policy learning methods. There are several main challenges that these methods cannot yet solve for part-level instruction following.
First, learned policies must recognize and track object parts over time, which can be difficult as parts of the same kind can have distinctive appearances. For instance, the object part “lid” may look different across object categories (e.g., the lid of a bottle vs. the lid of a pot, the top of a stapler vs. the top of a mug). This variability requires the model to correctly associate the same part name with distinct visual representations based on context. Second, relevant objects and the corresponding manipulation of these objects for performing a task may not be explicitly defined in the task instructions. Thus, a policy must reason about what parts to interact with and in what manner. Third, the fine-grained nature of these tasks imposes a stricter success criterion than typical object-level manipulation. For instance, in a general mug-picking task, any point on the mug’s surface might serve as a grasping point, whether on the handle, top, or body. In contrast, a task requiring grasping specifically by the handle demands precision in targeting the handle area alone, with the need for detailed semantic and spatial awareness.

Why is bi-level planning helpful? One important feature of bi-level planning is that it decomposes a complex task into a chain of subgoals, each interacting with at most one object part at a time. Focusing on a single part-level skill at a time simplifies the training of low-level action policies, as the policy only needs to ground the skill instruction into a relatively simple manipulation of the specified object part. Part-level manipulation also requires
more fine-grained vision grounding than object-level tasks, since the part-level information is much more detailed and changes dynamically over time (e.g., the front of a mug at the current step may no longer be the front in future steps after rotation). By decomposing the task into part-level tasks, we reduce the burden of grounding and tracking different parts over time, enabling the low-level action policy to focus on the most relevant visual information at the moment. Additionally, separating reasoning from action execution allows us to incorporate pretrained foundation models. Specifically, high-level task planning is performed by VLMs pretrained on internet-scale data, which endows them with extensive prior knowledge and proficiency in high-level reasoning, planning, and language-guided decision making. As multimodal foundation models continue to advance in vision-language reasoning, their built-in knowledge is expected to further boost overall performance.

What kinds of visual representations are useful in fine-grained manipulation? As the tasks in PartInstruct require a model to have a detailed visual understanding of object parts, visual representations of the scene and objects may play a central role in a model’s performance. Our ablation study on the effect of visual representations in the model input reveals the following findings. First, 3D representations, such as point clouds, are more effective than 2D images. Unlike 2D methods, which can misinterpret depth and lead to positioning errors, point clouds provide precise 3D shape and location information, improving action success. Second, explicit object part segmentation provides a significant boost to part-level policy learning, as shown in Table VIII. The improvement is particularly noticeable for 3D part segmentation. In fact, DP3-S outperforms DP3 by approximately 20%, more than doubling the performance.
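To make the 2D-versus-3D distinction concrete: a 2D part mask only localizes a part in pixel space, whereas pairing the mask with depth recovers the part's actual 3D extent. A minimal back-projection sketch (the camera intrinsics, array shapes, and function name here are illustrative assumptions, not the benchmark's actual pipeline):

```python
import numpy as np

def part_mask_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project the depth pixels selected by a 2D part mask into a
    3D part point cloud in the camera frame (pinhole camera model)."""
    v, u = np.nonzero(mask)          # pixel rows/cols inside the part mask
    z = depth[v, u]                  # metric depth at those pixels
    x = (u - cx) * z / fx            # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) part points

# Toy example with hypothetical intrinsics: a flat surface 1 m away,
# with a 2x2-pixel "part" mask.
depth = np.full((4, 4), 1.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
pts = part_mask_to_point_cloud(depth, mask, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(pts.shape)  # (4, 3); all points lie at depth z = 1.0
```

A policy consuming `pts` sees the part's true shape and position, while a policy consuming only `mask` must infer depth implicitly, which is one plausible reason the 3D variants fare better here.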
What are the difficulties of learning part-level skills, and what kind of skills are harder to learn than others? We found that part-level manipulation skills can be particularly challenging when they require indirect actions to achieve a goal state for a target part. To analyze which part-level skills are generally difficult to learn and which object parts tend to pose challenges for the robot, we conducted an impact study, detailed in Appendix C. The study shows that “grasp” and “touch” achieve success rates over 50%, likely because they involve direct physical contact with the part. By contrast, “rotate” achieves only 18.2%, since it specifies the orientation of the part rather than direct manipulation. For example, to show a bottle’s “cap,” the robot may avoid grasping the cap directly (which obstructs the view) and must instead grasp another part of the bottle, making it a more complex skill to learn. Additionally, compared to common parts (e.g., “handle” and “lid”), spatial parts (e.g., “left” or “right”) are much more challenging because these references can change as the object moves. For instance, an instruction like “Rotate the bottle, so that the left part faces the opposite direction” requires the policy to remember the original “left” region while also recognizing the updated orientation as the bottle rotates. Maintaining both the original reference and the changing spatial context makes these tasks particularly difficult.

How well can
current VLMs perform in the planning for fine-grained manipulation tasks? Our experiments show that bi-level planning baselines significantly outperform end-to-end policy learning approaches, as indicated in Table V. This suggests that current VLMs possess certain capabilities in understanding and reasoning about part-level manipulation tasks, as well as generalizing pretrained knowledge to perform high-level task planning across diverse object- and part-related scenarios. However, VLM-based planners can still fail during task planning, particularly in tasks that require a long chain of skill instructions (e.g., tasks in Test 4). This poses challenges for future research to further improve VLMs’ reasoning and planning capacities for fine-grained manipulation tasks.

VI. LIMITATIONS

Our current study focuses on part-level manipulation tasks in a controlled 3D simulator, which has certain limitations when considering real-world deployment. First, we have not fully evaluated sim-to-real generalization. Although the dataset includes diverse objects and tasks, there is no guarantee that the learned policies will transfer seamlessly to physical robot platforms. Exploring techniques such as domain randomization or policy fine-tuning on real-world data could improve the robustness of the policies. Second, the demonstrations in our training set are generated by an oracle motion planner, which may have limited behavioral diversity. In the future, we plan to integrate a teleoperation interface in PartGym to collect human demonstrations. Third, our current benchmark focuses on single-object manipulation. Studying scenarios where multiple objects are present or clustered closely together is another important future direction. Finally, while we have diverse object instances, we can further enrich the object assets by including articulated objects (e.g., cabinets, drawers), which can be used for evaluating part-level manipulation under dynamic constraints.

VII.
CONCLUSION

In this work, we introduced PartInstruct, a large-scale benchmark designed to advance fine-grained robot manipulation using part-level instructions. By curating a diverse set of objects, tasks, and expert demonstrations, PartInstruct provides a foundation for training and evaluating robot manipulation models that require reasoning about object parts and their relationships with tasks. Our evaluations of state-of-the-art models highlight critical challenges in grounding part concepts and executing long-horizon tasks. With comprehensive experiments and ablation studies, our work provides key insights for future research, highlighting the need for further innovation in perception, reasoning, and planning to enable robots to effectively perform fine-grained, part-aware manipulation.

REFERENCES

[1] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
[2] Michel Breyer, Jen Jen Chung, Lionel Ott, Roland Siegwart, and Juan Nieto. Volumetric grasping network: Real-time 6 dof grasp detection in clutter. In Conference on Robot Learning, pages 1602–1611. PMLR, 2021.
[3] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[4] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran
Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
[5] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. Diffusion policy: Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137, 2023.
[6] Erwin Coumans and Yunfei Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016–2021.
[7] Yufei Ding, Haoran Geng, Chaoyi Xu, Xiaomeng Fang, Jiazhao Zhang, Songlin Wei, Qiyu Dai, Zhizheng Zhang, and He Wang. Open6dor: Benchmarking open-instruction 6-dof object rearrangement and a vlm-based approach. In First Vision and Language for Autonomous Driving and Robotics Workshop, 2024.
[8] Peter Florence, Lucas Manuelli, and Russ Tedrake. Self-supervised correspondence in visuomotor policy learning. IEEE Robotics and Automation Letters, 5(2):492–499, 2019.
[9] Haoran Geng, Ziming Li, Yiran Geng, Jiayi Chen, Hao Dong, and He Wang. Partmanip: Learning cross-category generalizable part manipulation policy from point cloud observations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2978–2988, 2023.
[10] Haoran Geng, Songlin Wei, Congyue Deng, Bokui Shen, He Wang, and Leonidas Guibas. Sage: Bridging semantic and actionable parts for generalizable articulated-object manipulation under language instructions. arXiv preprint arXiv:2312.01307, 2023.
[11] Theophile Gervet, Zhou Xian, Nikolaos Gkanatsios, and Katerina Fragkiadaki. Act3d: 3d feature field transformers for multi-task robotic manipulation. In 7th Annual Conference on Robot Learning, 2023.
[12] Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, and Dieter Fox. Rvt: Robotic view transformer for 3d object manipulation. In Conference on Robot Learning, pages 694–710. PMLR, 2023.
[13] Ankit Goyal, Valts Blukis, Jie Xu, Yijie Guo, Yu-Wei Chao, and Dieter Fox.
Rvt-2: Learning precise manipulation from few demonstrations. arXiv preprint arXiv:2406.08545, 2024.
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
[15] Raisa Islam and Owana Marzia Moushi. Gpt-4o: The cutting-edge advancement in multimodal llm. Authorea Preprints, 2024.
[16] Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019–3026, 2020.
[17] Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, and Linxi Fan. Vima: Robot manipulation with multimodal prompts. arXiv preprint arXiv:2306.02060, 2023.
[18] Tsung-Wei Ke, Nikolaos Gkanatsios, and Katerina Fragkiadaki. 3d diffuser actor: Policy diffusion with 3d scene representations. arXiv preprint arXiv:2402.10885, 2024.
[19] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, et al. Openvla: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246, 2024.
[20] James J Kuffner and Steven M LaValle. Rrt-connect: An efficient approach to single-query path planning. In Proceedings 2000 ICRA. Millennium conference. IEEE international conference on robotics and automation. Symposia proceedings (Cat. No. 00CH37065), volume 2, pages 995–1001. IEEE, 2000.
[21] Jacky Liang, Wenlong Huang, Fei
Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 9493–9500. IEEE, 2023.
[22] Weiyu Liu, Jiayuan Mao, Joy Hsu, Tucker Hermans, Animesh Garg, and Jiajun Wu. Composable part-based manipulation. arXiv preprint arXiv:2405.05876, 2024.
[23] Ajay Mandlekar, Danfei Xu, Roberto Martín-Martín, Silvio Savarese, and Li Fei-Fei. Learning to generalize across long-horizon tasks from human demonstrations. arXiv preprint arXiv:2003.06085, 2020.
[24] Oier Mees, Lukas Hermann, Erick Rosete-Beas, and Wolfram Burgard. Calvin: A benchmark for language-conditioned policy learning for long-horizon robot manipulation tasks. IEEE Robotics and Automation Letters, 7(3):7327–7334, 2022.
[25] Aoran Mei, Jianhua Wang, Guo-Niu Zhu, and Zhongxue Gan. Gamevlm: A decision-making framework for robotic task planning based on visual language models and zero-sum games. arXiv preprint arXiv:2405.13751, 2024.
[26] Kaichun Mo, Shilin Zhu, Angel X. Chang, Li Yi, Subarna Tripathi, Leonidas J. Guibas, and Hao Su. PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[27] Tongzhou Mu, Zhan Ling, Fanbo Xiang, Derek Yang, Xuanlin Li, Stone Tao, Zhiao Huang, Zhiwei Jia, and Hao Su. Maniskill: Generalizable manipulation skill benchmark with large-scale demonstrations. arXiv preprint arXiv:2107.14483, 2021.
[28] Abby O’Neill, Abdul Rehman, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, Abraham Lee, Acorn Pooley, Agrim Gupta, Ajay Mandlekar, Ajinkya Jain, et al. Open x-embodiment: Robotic learning datasets and rt-x models. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 6892–6903. IEEE, 2024.
[29] Charles R. Qi, Li Yi, Hao Su, and Leonidas J. Guibas.
Pointnet++: Deep hierarchical feature learning on point sets in a metric space, 2017. URL https://arxiv.org/abs/1706.02413.
[30] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.
[31] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1–67, 2020.
[32] Rouhollah Rahmatizadeh, Pooya Abolghasemi, Ladislau Bölöni, and Sergey Levine. Vision-based multi-task manipulation for inexpensive robots using end-to-end learning from demonstration. In 2018 IEEE international conference on robotics and automation (ICRA), pages 3758–3765. IEEE, 2018.
[33] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, and Christoph Feichtenhofer. Sam 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714, 2024. URL https://arxiv.org/abs/2408.00714.
[34] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas
Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, and Christoph Feichtenhofer. Sam 2: Segment anything in images and videos, 2024. URL https://arxiv.org/abs/2408.00714.
[35] Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, Zhaoyang Zeng, Hao Zhang, Feng Li, Jie Yang, Hongyang Li, Qing Jiang, and Lei Zhang. Grounded sam: Assembling open-world models for diverse visual tasks, 2024. URL https://arxiv.org/abs/2401.14159.
[36] Sushmita Sarker, Prithul Sarker, Gunner Stone, Ryan Gorman, Alireza Tavakkoli, George Bebis, and Javad Sattarvand. A comprehensive overview of deep learning techniques for 3d point cloud classification and semantic segmentation. Machine Vision and Applications, 35(4):67, 2024.
[37] Mohit Shridhar, Lucas Manuelli, and Dieter Fox. Perceiver-actor: A multi-task transformer for robotic manipulation. In Conference on Robot Learning, pages 785–799. PMLR, 2023.
[38] Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M Sadler, Wei-Lun Chao, and Yu Su. Llm-planner: Few-shot grounded planning for embodied agents with large language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2998–3009, 2023.
[39] Yuliang Sun, Xudong Zhang, and Yongwei Miao. A review of point cloud segmentation for understanding 3d indoor scenes. Visual Intelligence, 2(1):14, 2024.
[40] Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.
[41] Octo Model Team, Dibya Ghosh, Homer Walke, Karl Pertsch, Kevin Black, Oier Mees, Sudeep Dasari, Joey Hejna, Tobias Kreiman, Charles Xu, et al. Octo: An open-source generalist robot policy. arXiv preprint arXiv:2405.12213, 2024.
[42] Beichen Wang, Juexiao Zhang, Shuwen Dong, Irving Fang, and Chen Feng.
Vlm see, robot do: Human demo video to robot action plan via vision language model. arXiv preprint arXiv:2410.08792, 2024.
[43] Lionel Wong, Jiayuan Mao, Pratyusha Sharma, Zachary S Siegel, Jiahai Feng, Noa Korneev, Joshua B Tenenbaum, and Jacob Andreas. Learning adaptive planning representations with natural language guidance. arXiv preprint arXiv:2312.08566, 2023.
[44] Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, and Haibin Yan. Embodied task planning with large language models. arXiv preprint arXiv:2307.01848, 2023.
[45] Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, Li Yi, Angel X. Chang, Leonidas J. Guibas, and Hao Su. SAPIEN: A simulated part-based interactive environment. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[46] Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, et al. Sapien: A simulated part-based interactive environment. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11097–11107, 2020.
[47] Wentao Yuan, Jiafei Duan, Valts Blukis, Wilbert Pumacay, Ranjay Krishna, Adithyavairavan Murali, Arsalan Mousavian, and Dieter Fox. Robopoint: A vision-language model for spatial affordance prediction for robotics. arXiv preprint arXiv:2406.10721, 2024.
[48] Maryam Zare, Parham M Kebria, Abbas Khosravi, and Saeid
Nahavandi. A survey of imitation learning: Algorithms, recent developments, and challenges. IEEE Transactions on Cybernetics, 2024.
[49] Yanjie Ze, Gu Zhang, Kangning Zhang, Chenyuan Hu, Muhan Wang, and Huazhe Xu. 3d diffusion policy. arXiv preprint arXiv:2403.03954, 2024.
[50] Shengqiang Zhang, Philipp Wicke, Lütfi Kerem Şenel, Luis Figueredo, Abdeldjallil Naceri, Sami Haddadin, Barbara Plank, and Hinrich Schütze. Lohoravens: A long-horizon language-conditioned benchmark for robotic tabletop manipulation. arXiv preprint arXiv:2310.12020, 2023.
[51] Tianhao Zhang, Zoe McCarthy, Owen Jow, Dennis Lee, Xi Chen, Ken Goldberg, and Pieter Abbeel. Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In 2018 IEEE international conference on robotics and automation (ICRA), pages 5628–5635. IEEE, 2018.

APPENDIX

A. PDDL Definitions

1) Predicate Definitions: This subsection gives the definition of the basic predicates utilized by the motion planner.

Table IX: Definition of Predicates

- ON(obj, part, contact): Whether obj is on the contact.
- TOUCHING(obj, part): Whether the gripper is in contact with obj at part; part can be empty to indicate a general touch on any parts.
- GRASPING(obj, part): Whether the gripper is carrying obj at part; part can be empty to indicate a general grasp with any parts.
- FACING(obj, part, dir): Whether part of obj is facing or pointing dir.
- AT_POSITION(obj, pos): Whether obj is at position pos = [x, y, z].

2) Skill Definitions: This subsection shows the detailed definition of the five skills.

Table X: Definition of Base Skills

- grasp_obj(obj, part): Robot grasps obj at part.
  Preconditions: ON(table, obj); ∼GRASPING(obj); ∼TOUCHING(obj)
  Effects: GRASPING(obj, part)
- move_gripper(dir, dis=UNIT, grasping=false): Robot moves gripper along dir by dis.
  Preconditions: if grasping == true: GRASPING(obj)
  Effects: AT_POSITION(gripper, last_gripper_pos + vec(dir) × dis); if grasping == true: GRASPING(obj)
- rotate_obj(obj, part, dir): Robot rotates obj, such that part is facing dir.
  Preconditions: GRASPING(obj)
  Effects: GRASPING(obj); FACING(part, dir)
- touch_obj(obj, part): Robot touches obj at part.
  Preconditions: ON(table, obj); ∼GRASPING(obj); ∼TOUCHING(obj)
  Effects: TOUCHING(obj, part)
- release_gripper(obj): Robot releases the gripper and moves away from obj.
  Preconditions: ON(table, obj); GRASPING(obj) or TOUCHING(obj)
  Effects: ON(table, obj); ∼GRASPING(obj); ∼TOUCHING(obj)

3) Task Definitions: This subsection shows the detailed definition of different task types in PartInstruct.

Table XI: Seen Task Instructions and Goal States (Seen, 10)

1. Grasp the object by the part
   Goal: GRASPING(gripper, part); ON(obj, table)
2. Touch the object at the part
   Goal: TOUCHING(part); ON(obj, table)
3. Hold the part of the object and move it to direction
   Goal: GRASPING(part); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir))
4. Push the object towards direction by touching part
   Goal: TOUCHING(part); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir))
5. Slide the object on the table towards direction while keeping hold of part, then release it
   Phase 1: GRASPING(part); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir))
   Phase 2: GRIPPER_OPEN; MIN_DISTANCE(gripper, obj)
6. Move the object to direction by pushing it at part, then free it
   Phase 1: TOUCHING(part); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir))
   Phase 2: GRIPPER_OPEN; MIN_DISTANCE(gripper, obj)
7. While keeping hold of part, move the object towards direction in the air
   Goal: GRASPING(part); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir) + VEC(UP))
8. Rotate part of the object to face direction while lifting it
   Goal: GRASPING(obj); FACING(part, dir); AT_POSITION(obj, POS_INIT_OBJ + VEC(UP))
9. Move the object towards direction after raising it, while keeping hold of part, then put it down
   Phase 1: GRASPING(part); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir) + VEC(UP))
   Phase 2: GRASPING(part); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir))
10. Move the object towards direction1 in the
air, then rotate part to point towards direction2
    Goal: GRASPING(obj); AT_POSITION(obj, POS_INIT_OBJ + VEC(UP) + VEC(dir1)); FACING(part, dir2)

Table XII: Unseen Task Instructions and Goal States (Unseen, 6)

11. Rotate part in the air so it points towards direction, then put it down
    Phase 1: GRASPING(obj); FACING(part, dir); AT_POSITION(obj, POS_INIT_OBJ + VEC(UP))
    Phase 2: GRASPING(obj); FACING(part, dir); AT_POSITION(obj, POS_INIT_OBJ)
12. Shift the object towards direction1 in the air while grasping part1, turn part2 to direction2, then set it down
    Phase 1: GRASPING(part1); FACING(part, dir); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir1) + VEC(UP))
    Phase 2: GRASPING(part1); FACING(part, dir); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir1))
13. Turn part of the object to point to direction1 while keeping it on the table, then push it towards direction2
    Phase 1: ON(obj, table); FACING(part, dir1)
    Phase 2: ON(obj, table); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir2))
14. While keeping it on the table, push the object towards direction1 while touching part1, then rotate part2 to face direction2
    Phase 1: ON(obj, table); TOUCHING(part1); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir1))
    Phase 2: ON(obj, table); FACING(part2, dir2)
15. Rotate part of the object to face the opposite direction
    Goal: FACING(part, ∼DIR_INIT(part)); ON(obj, table)
16. Push the object to direction1 and rotate part to point towards direction2 in the air, finally place it down
    Phase 1: FACING(part, dir2); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir1) + VEC(UP))
    Phase 2: FACING(part, dir2); AT_POSITION(obj, POS_INIT_OBJ + VEC(dir1))

B. PartInstruct Benchmark Details

1) Observation and Action Space: Table XIII shows the observation and action space available in PartGym.

Table XIII: Observation and Action Space details.

Observation Space:
- Static View - RGB: 300×300×3
- Static View - Depth: 300×300
- Static View - PCD: 3×1024
- Static View - Semantic: 300×300
- Static View - Target Part PCD: 3×1024
- Static View - Target Part Mask: 300×300
- Wrist View - RGB: 300×300×3
- Wrist View - Depth: 300×300
- Wrist View - PCD: 3×1024
- Wrist View - Semantic: 300×300
- Wrist View - Target Part Mask: 300×300
- Wrist View - Target Part PCD: 3×1024
- Proprioceptive state: EE position (3), EE orientation (3), Joint positions (7), Gripper action (1)

Action Space:
- Absolute cartesian pose (w.r.t. world frame): EE position (3), EE orientation (3), Gripper action (1)

Figure 9: Selected Visual Modalities in PartGym.

2) Key Features of PartGym: The aim of PartGym is to boost embodied AI research related to interaction with table-top object parts. PartGym supports real-time rendering of different visual modalities (see Figure 9). In addition to typical modalities like RGB, depth, and object segmentation, PartGym also provides part-related visual modalities like part masks and part point clouds, covering both spatial parts and semantic parts of an object. These part-related vision modalities are rendered by the PyBullet [6] simulation engine using the ground-truth part assets given by PartNet Mobility [45] [26] [4]. Additionally, PartGym provides a framework to implement bi-level planning models for part-level manipulation tasks in simulation environments. It provides a template skill instruction generator, an oracle skill execution checker, as well as a systematic way to render part-related modalities shown in any
skill instruction.

3) Visualization of Test Splits: We provide the visualization of all 5 test sets in this section.

Figure 10: Left: Training set. Right: Test 1 (OS).
Figure 11: Left: Training set. Right: Test 2 (OI).
Figure 12: Above: Training set. Below: Test 3 (TP).
Figure 13: Above: Training set. Below: Test 4 (TC).
Figure 14: Left: Training set. Right: Test 5 (OC).

4) Statistics of PartInstruct Episodes: We provide detailed statistics about parts within each object type.

Figure 15: Parts in PartInstruct episodes, grouped by seen object types.
Figure 16: Parts in PartInstruct episodes, grouped by unseen object types.

C. Skill and Object Part Impact Study

Here, we selected the rollout logs of the best-performing policy and analyzed the impact of different skill types and object parts. Specifically, we evaluated the success rate and failure causes for each skill and part.
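These per-skill statistics can be tallied from rollout logs along the following lines (a sketch assuming a hypothetical log format in which each rollout is a list of (skill, success) pairs and a chain stops at its first failed skill):

```python
from collections import Counter

def impact_metrics(rollouts):
    """Per-skill success rate and failure-cause share from rollout logs.

    `rollouts` is a hypothetical log format: each chain is a list of
    (skill, succeeded) pairs; execution stops at the first failed skill,
    which is blamed as the chain's failure cause.
    """
    appearances, successes, blamed = Counter(), Counter(), Counter()
    total_failures = 0
    for chain in rollouts:
        for skill, ok in chain:
            appearances[skill] += 1
            if ok:
                successes[skill] += 1
            else:
                blamed[skill] += 1
                total_failures += 1
                break  # the chain fails at its first unsuccessful skill
    success_rate = {s: successes[s] / appearances[s] for s in appearances}
    failure_cause = (
        {s: blamed[s] / total_failures for s in blamed} if total_failures else {}
    )
    return success_rate, failure_cause

# Toy logs: two failures total, one blamed on each skill.
rollouts = [
    [("grasp", True), ("rotate", False)],
    [("grasp", True), ("rotate", True)],
    [("grasp", False)],
]
sr, fc = impact_metrics(rollouts)
print(sr["grasp"])   # 2/3: grasp succeeded in 2 of its 3 appearances
print(fc["rotate"])  # 0.5: rotate caused 1 of the 2 chain failures
```

The same tallies, keyed by part name instead of skill name, yield the per-part numbers.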
The Success Rate was calculated by dividing the number of successful executions of each skill or part by the number of times it appeared in the skill chain. The Failure Cause was calculated by dividing the number of times a skill chain failed because of a specific skill or part by the total number of skill chain failures.

Table XIV: Average success rate and failure cause for the three part-level skills

Skill                Grasp Object   Rotate Object   Touch Object
Success Rate (%)     53.55          18.18           54.55
Failure Cause (%)    43.51           6.11           16.79

Table XV: Average success rate and failure cause for selected parts.

Part                 Blade   Left    Neck    Top     Screen   Mouth   Bottom
Success Rate (%)     46.67   45.10   66.67   52.78   60.00    66.67   30.00
Failure Cause (%)     4.60   13.79    1.15   24.14    2.30     0.00    0.00

Part (Continued)     Handle  Leg     Lid     Front   Right    Back    Screw   Head
Success Rate (%)     64.29   33.33   63.64   29.03   36.49    41.03    0.00   16.67
Failure Cause (%)     4.60    2.30    4.60    5.75   21.84     9.20    1.15    3.45

D. Implementation Details

1) Training Details in End-to-End Policy Learning: We trained the baseline models, including Diffusion Policy (DP) [5], 3D Diffusion Policy (DP3) [49], and Act3D [11], from scratch. For RVT2 [12] and Octo [41], we implemented both fine-tuning of the pretrained models and training from scratch on our dataset. All trained models use vision modalities from a static-view camera with the same extrinsics in the workspace, as well as real-time robot state information. Experiments were conducted on cluster nodes of A100 or H100 GPUs using Distributed Data Parallel (DDP). Training from scratch generally took about two days, while fine-tuning required one day.

Diffusion Policy (DP): We train a CNN-based DP from scratch on our dataset. The action prediction horizon is set to 16 steps, with an observation horizon of 2 steps and action steps
of 8. The input RGB images are cropped to a size of 76×76. For language instructions, we use a pre-trained T5-small language encoder to obtain a language embedding of 512 dimensions. This language embedding is then concatenated with other features to form the final feature representation.

3D Diffusion Policy (DP3): The DP3 model is trained under a similar setup as DP, with an action prediction horizon of 16 steps, an observation horizon of 2 steps, and action steps of 8. For the point cloud observations, we use an input size of 1024 points, which are downsampled from the original point cloud using the Iterative Farthest Point Sampling algorithm [29]. The language instructions are processed in DP3 following the same approach as in DP.

Act3D: Act3D takes an image input size of 256×256. The action prediction horizon is set to 6 steps, and the observation horizon is 1 step. Following the original work [11], we use ResNet50 [14] as the vision encoder, and use CLIP [30] embeddings for vision-language alignment. For 3D action map generation, the number of “ghost” points is set to 10,000, with 3 sampling levels.

3D Diffuser Actor (3D-DA): For 3D-DA, we use the front-view RGB and scene point cloud as vision inputs. The RGB image has a resolution of 256×256. Following Ke et al. [18], we extract visual features with a pre-trained CLIP ResNet-50 encoder and use CLIP [30] embeddings for vision-language alignment. We use an interpolation length of 5 steps and an observation history of 3 steps.

Octo: For fine-tuning, we use the released checkpoint of the octo-base-1.5 model and fine-tune its output head for 20,000 iterations. We use both the static-view camera and the wrist-view camera. The input image sizes are 256×256 for the static view and 128×128 for the wrist view. The window size is set to 2 steps, and the action horizon is set to 16 steps.
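The farthest-point downsampling used for DP3's point-cloud input can be sketched generically as follows (a standard FPS implementation with a fixed seed point, not the benchmark's exact code):

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Iteratively pick k points, each maximizing its distance to the
    points already chosen -- the standard FPS downsampling scheme,
    which covers the shape far more evenly than random subsampling."""
    n = points.shape[0]
    chosen = np.zeros(k, dtype=int)
    dist = np.full(n, np.inf)   # distance from each point to nearest chosen point
    chosen[0] = 0               # deterministic seed point for this sketch
    for i in range(1, k):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)          # update nearest-chosen distances
        chosen[i] = int(np.argmax(dist))    # pick the farthest remaining point
    return points[chosen]

rng = np.random.default_rng(0)
cloud = rng.standard_normal((5000, 3))      # raw scene point cloud
sampled = farthest_point_sampling(cloud, 1024)
print(sampled.shape)  # (1024, 3), matching the DP3 input size above
```

This sketch is O(n·k); practical implementations (e.g., in PointNet++-style libraries) run the same logic on the GPU.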
RVT2: To adapt RVT2 to our benchmark settings, we first convert the depth map from the static camera view into a point cloud in camera coordinates, then apply the camera extrinsics to transform the point cloud into world coordinates, where the action heat maps are generated and supervised. The action prediction horizon is set to 6 steps and the observation horizon to 1 step.

2) Zero-Shot Evaluation of the Generalist Policy: We selected several popular generalist policies, including RT-1, Octo, and OpenVLA, and evaluated their zero-shot performance on our test sets. For RT-1, we followed the implementation of the Open X-Embodiment project and used the released rt_1_x_tf_trained_for_002272480_step checkpoint for inference. For Octo, we used the octo-base-1.5 model, following the same setup as described in Section D.1. For OpenVLA, we used the pretrained model openvla-7b. We followed the same evaluation protocol as for the other baselines, and our results show that these generalist policies fail to achieve any success on our test sets.

3) Design Details of Bi-Level Planning: We outline the implementation of the bi-level planning pipeline here as a supplement to Section IV-B.

Implementation of the High-Level Task Planner: The high-level
task planner features a skill inference mechanism that leverages comprehensive contextual information, including user task instructions, previously executed skill chains, and real-time state data such as vision and pose information, to determine the next appropriate action. Recall that the high-level task planner updates the skill instruction once every n steps. Here, n is determined by the average number of steps typically required for each skill in the training dataset. Specifically, we use 130 for grasp_obj, 30 for move_gripper, 68 for touch_obj, 40 for release_obj, and 22 for rotate_obj. Once the execution counter reaches these average values, the VLM is prompted to infer the subsequent skill based on the current state. This design ensures that decisions are grounded in both historical and real-time data. In addition, the planner incorporates an exception-handling measure to maintain output consistency and reliability. Any unacceptable terms generated by the VLM, such as directional indicators outside our definitions, are normalized to their prescribed equivalents. Also, although all acceptable part names are embedded in the prompt, a dedicated mechanism cross-references any inferred part names against a stored list of valid part names. For VLM baselines, we use gemini-1.5-flash-002 and gemini-2.0-flash-exp for Gemini [40], and GPT-4o for the OpenAI models [15].

Training of the Low-Level Action Policies: We trained the low-level action policy using skill instructions retrieved from our training data, assuming the presence of an oracle planner that decomposes the overall task. Apart from the skill instructions, the training setup is identical to the end-to-end learning approach described in Section D.1.

Part Grounding and Tracking: We selected sam2_hiera_small as our mask generation and tracking model because it has the fastest tracking time among all configurations of SAM 2.
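The planner's update cadence and exception handling can be sketched as follows. This is a minimal illustration, not the paper's implementation: the alias table and the part list are placeholder subsets, and the helper names (`should_replan`, `sanitize`) are our own.

```python
SKILL_STEP_BUDGET = {            # average steps per skill, from the training data
    "grasp_obj": 130, "move_gripper": 30, "touch_obj": 68,
    "release_obj": 40, "rotate_obj": 22,
}
DIRECTION_ALIASES = {            # illustrative aliases, not the paper's full table
    "forward": "front", "backward": "back", "up": "top", "down": "bottom",
}
VALID_PARTS = {"blade", "handle", "lid", "screen", "leg"}   # subset, for illustration

def should_replan(skill, steps_since_last_plan):
    """The VLM is re-prompted once the execution counter reaches the skill's
    average step count."""
    return steps_since_last_plan >= SKILL_STEP_BUDGET[skill]

def sanitize(skill, direction=None, part=None):
    """Exception handling: normalize stray directional terms to their prescribed
    equivalents and cross-check part names against the stored valid list."""
    if direction is not None:
        direction = DIRECTION_ALIASES.get(direction, direction)
    if part is not None and part not in VALID_PARTS:
        raise ValueError(f"unknown part name: {part}")
    return skill, direction, part
```

A rejected part name surfaces as an error rather than being silently passed to the low-level policy, mirroring the cross-referencing step described above.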
For language grounding, we chose Florence-2-large as our Vision-Language Model (VLM). To evaluate performance, we used the rollout logs of DP-S SAM2. Performance was assessed using two key metrics: Grounding Success and Intersection over Union (IoU). Grounding Success is calculated as the ratio of successfully grounded parts to the total number of parts during a task. A grounding is considered successful if: 1) after language grounding, the prompt points given by the VLM consist of one positive and one negative point (to prompt SAM 2), and 2) the IoU of the generated mask is greater than zero. If either of these conditions is not met, the grounding is deemed a failure. The IoU measures the overlap between the predicted mask generated by SAM 2 and the ground-truth mask retrieved from the PartGym environment. It is defined as the area of the intersection divided by the area of the union of the predicted and true regions. The results across the different test sets are summarized in Table XV.

Table XV: Performance of part grounding and tracking across different test sets.

Metric                   Test1   Test2   Test3   Test4   All
Grounding Success (%)    25.26   35.73   36.87   15.06   27.58
IoU                      0.15    0.18    0.19    0.25    0.20

VLM Prompts: We provide an example VLM prompt used in our bi-level planning pipeline below.

Prompt Example

You are an expert at planning manipulation tasks. You will be given one task instruction for each manipulation task. Each task instruction can be divided into a chain of skill instructions. Your job is to infer the next skill instruction (you only need to output one immediate next skill instruction each time, even if the entire task requires multiple skills) for the robot to execute, based on the following information and the attached image (current RGB frame), without outputting any intermediate inference or explanation:
- Task Instruction: {user_input}
- Executed Skill Instructions: {executed_skill_instructions}
- Gripper State: {gripper_state} (The gripper is open when the value is around 0.04, and it is closed when the value is around 0.00.)
- Previous TCP Pose: {previous_tcp_pose}
- Current TCP Pose: {current_tcp_pose}
- Previous RGB Observation: {prev_image_desc}
- Current RGB Observation: The {current_image_position} image in the contents array shows the current state.

Here is relevant information about the task.
1. The task instruction helps you understand the overall task goal. The executed skill instructions show the sequence of actions taken so far.
2. The gripper state shows whether the gripper is open or closed. The TCP poses and images together illustrate the state transitions of the previous action.
3. The current object state relative to the gripper, and the object motion (from TCP and images), can help determine if the last action was successful.

Skill Descriptions:
1. grasp_obj:
- Description: This skill grasps an object by a specific part.
- Parameters: part_grasp: The exact part of the object to be grasped. Must match the user's input (e.g., 'blade', 'lid').
- Format: Grasp the {obj_class} at its {part_grasp}
2. move_gripper:
- Description: This skill moves the gripper in a specified direction while optionally keeping an object grasped.
- Parameters: dir_move: Direction to move the gripper. Can only be 'top', 'bottom', 'left', 'right', 'front', or 'back'.
- Format: Move {dir_str}, where 'dir_str' is mapped from 'dir_move' by:
  - 'front' → 'forwards'
  - 'back' → 'backwards'
  - 'top' → 'upwards'
  - 'bottom' → 'downwards'
  - 'left' → 'to the left'
  - 'right' → 'to the right'
3. rotate_obj:
- Description: This skill rotates an object in a specific direction based on a given part.
- Parameters: dir_rotate: Direction to rotate the object. Must be one of 'top', 'bottom', 'left', 'right', 'front', 'back'. part_rotate: The part of the object that should be rotated.
- Format: Reorient the {part_rotate} of the {obj_class} to face {dir_str}, where 'dir_str' is mapped from 'dir_rotate'.
4. touch_obj:
- Description: This skill touches a part of an object.
- Parameters: part_touch: The part of the object to be touched.
- Format: Touch the {obj_class} at its {part_touch}
5. release_obj:
- Description: This skill releases an object from the gripper.
- Parameters: None.
- Format: Release

Prompt Example (Continued)

Part Names:
- Scissors: blade, handle, screw, left, right, top, bottom, front, back
- Kitchen Pot: base body, lid, left, right, top, bottom, front, back
- Laptop: base frame, screen, touchpad, keyboard, screen frame, left, right, top, bottom, front, back
- Eyeglasses: base body, leg, left, right, top, bottom, front, back
- Bucket: handle, base body, left, right, top, bottom, front, back
- Display: base support, surface, frame, screen, left, right, top, bottom,
front, back
- Pliers: base body, leg, outlier, left, right, top, bottom, front, back
- Bottle: mouth, lid, body, neck, left, right, top, bottom, front, back
- Knife: base body, translation blade, rotation blade, left, right, top, bottom, front, back
- Stapler: base body, lid, body, left, right, top, bottom, front, back
- Kettle: handle, base body, lid, left, right, top, bottom, front, back
- Mug: handle, body, containing things, left, right, top, bottom, front, back
- Box: rotation lid, base body, left, right, top, bottom, front, back
- Dispenser: base body, pressing lid, head, handle, outlier, left, right, top, bottom, front, back

Task Splitting Example: Break down the task: Split the task instruction into individual steps. Example: "Move the box in the air towards the right while keeping in touch with the right, then put it down." Steps: (1) Grasp the box at its right, (2) Move upwards, (3) Move to the right, (4) Move downwards.

Return only the next skill instruction in the specified format.

Notes:
- Do not modify or assume alternate names for object parts.
- The task sequence should follow the user's input as strictly as possible.
- Do not replace object parts with similar or inferred names.
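The two metrics used in Part Grounding and Tracking above (Grounding Success and IoU) can be sketched with boolean masks. This is a minimal illustration; the `(xy, label)` point format with 1 = positive is our assumption, not the benchmark's API.

```python
import numpy as np

def iou(mask, gt_mask):
    """Intersection over union of a predicted and a ground-truth boolean mask."""
    inter = np.logical_and(mask, gt_mask).sum()
    union = np.logical_or(mask, gt_mask).sum()
    return inter / union if union > 0 else 0.0

def grounding_success(prompt_points, mask, gt_mask):
    """A grounding succeeds iff the VLM returned exactly one positive and one
    negative prompt point AND the predicted mask overlaps the ground truth."""
    labels = sorted(lbl for _, lbl in prompt_points)   # (xy, label), 1 = positive
    return labels == [0, 1] and iou(mask, gt_mask) > 0.0
```

Per-task Grounding Success is then the fraction of parts for which this predicate holds, matching the ratio reported in Table XV.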
arXiv:2505.21657v1 [cs.CL] 27 May 2025

EXPLAINABILITY OF LARGE LANGUAGE MODELS USING SMILE: STATISTICAL MODEL-AGNOSTIC INTERPRETABILITY WITH LOCAL EXPLANATIONS

Zeinab Dehghani, University of Hull, United Kingdom, z.dehghani-2023@hull.ac.uk
Koorosh Aslansefat, University of Hull, United Kingdom, k.aslansefat@hull.ac.uk
Adil Khan, University of Hull, United Kingdom, A.M.Khan@hull.ac.uk
Mohammed Naveed Akram, Fraunhofer IESE, Germany, naveed.akram@iese.fraunhofer.de

May 29, 2025

ABSTRACT

Large language models like GPT, LLaMA, and Claude have become incredibly powerful at generating text, but they are still black boxes, so it is hard to understand how they decide what to say. That lack of transparency can be problematic, especially in fields where trust and accountability matter. To help with this, we introduce SMILE, a new method that explains how these models respond to different parts of a prompt. SMILE is model-agnostic and works by slightly changing the input, measuring how the output changes, and then highlighting which words had the most impact. It creates simple visual heat maps showing which parts of a prompt matter the most. We tested SMILE on several leading LLMs and used metrics such as accuracy, consistency, stability, and fidelity to show that it gives clear and reliable explanations. By making these models easier to understand, SMILE brings us one step closer to making AI more transparent and trustworthy.

Figure 1: Highlighting the contribution of each word in the input prompt of a commercial LLM

A PREPRINT - MAY 29, 2025

1 Introduction

Large Language Models (LLMs) have become powerful tools in recent years. They can perform tasks ranging from answering questions with remarkable accuracy [1] to drafting essays and other written content with coherence and style [2], creating art that spans various mediums and styles [3], coding across multiple programming languages [4], debugging software [4], writing technical documentation [5], and even generating algorithms [1].
LLMs function by leveraging deep learning architectures, particularly the Transformer model, which uses self-attention mechanisms to process and generate text by analysing relationships between words in large data sets [6]. These models are trained on various corpora, including books, articles, code, and web content, allowing them to learn language patterns, context, and syntax at an unprecedented scale [7]. Their capabilities continue to grow, turning them into essential tools for a wide range of tasks [2]. These models, trained on varied texts, have transformed how we interact with technology and how machines understand and generate human language [6]. LLMs such as GPT, BERT, and Claude leverage the Transformer architecture to process and generate language based on patterns learned from extensive datasets [7]. Although they excel in many NLP tasks, understanding their strengths and limitations is key to using them effectively and responsibly [8]. These models continue to evolve, pushing the boundaries of AI applications in language understanding and generation [2].

However, these models are known to be black boxes and lack transparency [9]. One important open problem is how these models decide which words or parts of the input to focus on, which remains a complex and fascinating process [10]. At their core, LLMs rely on patterns and relationships learned during training to decide which input elements are the most important [6]. This process is more than just identifying individual keywords; it is about picking up sophisticated connections and context that shape the model's responses [10]. Understanding how this works is crucial: it explains why these models excel in some scenarios but struggle in others. It also highlights why they can occasionally produce responses that are biased, irrelevant, or even harmful [11]. By studying how LLMs emphasise words and phrases, we can better understand how they work [10]. This can help us make them more transparent and reliable, ensuring they focus on what truly matters in a given context. It also allows us to address potential problems, such as biases or alignment issues, making these tools safer and more ethical [11]. As LLMs continue to integrate into our daily lives and decision-making processes, understanding their inner workings is more important than ever [9]. Research on how they prioritise language helps us refine their performance, ensure ethical use, and responsibly unlock their full potential to benefit society [11].

To solve this problem, we propose SMILE (Statistical Model-agnostic Interpretability with Local Explanations), a novel approach that extracts a visual heat map showing the influence of each word on the responses of these models [12]. This method extends the foundation laid by LIME [13] by incorporating Empirical Cumulative Distribution Function (ECDF) statistical distances, significantly improving robustness and making interpretability more effective. SMILE produces visual heatmaps that reveal the impact of individual words or phrases in a text input on the model's output, as illustrated in Fig. 1. For example, when a language model processes an instruction or query, the heatmap highlights the importance of specific terms in influencing the outcome.
This offers users a clear visual representation of how the model interprets and prioritises different parts of the input text [10]. To ensure a thorough evaluation of our approach, we employ a suite of metrics, including accuracy, fidelity, stability, and consistency [14]. These metrics demonstrate that SMILE provides reliable insights into model behaviour [15]. The contributions of our work include the following.

• Introducing a New Interpretability Framework: SMILE is a cutting-edge method that visualises the role of individual words in shaping model output, offering a deeper understanding of how LLMs process language [12].
• Promoting Transparency and Trust in LLMs: By visually linking text input to model decisions, SMILE enhances transparency, making LLMs more trustworthy and enabling their practical use in high-stakes domains such as healthcare, education, and legal systems.
• Comprehensive Evaluation for Robustness: Our evaluation leverages metrics such as stability, fidelity, accuracy, and consistency, establishing a new standard for evaluating the interpretability and reliability of language models [16].

2 Literature review

This section summarises the most recent advances in large language models (LLMs) and their growth in various applications. We also discuss Explainable Artificial Intelligence (XAI), a field dedicated to making AI systems more transparent and easier to understand. Finally, we explore how these two areas come together, focusing on the ongoing efforts
to create explainable large language models. We discuss the key challenges involved and the techniques being developed to improve how we interpret and trust these powerful models.

2.1 Large Language Models

Large language models (LLMs) have become increasingly common thanks to significant strides in deep learning, better access to data, and more powerful computing resources. These models are trained on vast amounts of text and can generate responses that sound natural, answer questions, and support a wide range of language-based tasks. Over time, several notable LLMs have helped push the field forward, each bringing new ideas and improvements. GPT (OpenAI) [1] is one of the most well-known autoregressive transformer-based models, capable of generating coherent and contextually rich text. Google's T5 [17] introduced the text-to-text framework, where all NLP tasks are framed as text generation problems, improving transfer learning and efficiency. Similarly, PaLM (Google) [18] leverages the Pathways architecture for large-scale language modelling, optimising multitask learning across different NLP tasks. Anthropic's Claude [19] focuses on ethical AI and safe interactions, designed to minimise harmful outputs while maintaining interpretability. Google's Gemma 2 [20] is optimised for logical reasoning and structured data interpretation, making it a strong candidate for academic and enterprise use cases. Meta's LLaMA [5] is an efficient, open-source model tailored for various NLP applications, offering researchers an accessible yet powerful alternative to proprietary models. EleutherAI's GPT-NeoX [21] is an open-source alternative to GPT, providing robust text generation capabilities while allowing customisation. AI21 Labs' Jurassic-2 [22] is designed for large-scale enterprise applications, emphasising contextual understanding and user intent.
Cohere's Command R+ [23] integrates retrieval-augmented generation, improving fact-based responses and minimising hallucinations. Furthermore, models like BERT (Google) [24] have transformed NLP through bidirectional context understanding, significantly improving sentiment analysis and question-answering tasks. Microsoft's Phi-4 focuses on efficiency and cost-effectiveness, delivering competitive performance with fewer parameters. Mistral and Alibaba's Qwen 2.5 push the boundaries of model adaptability and fine-tuning capabilities, making them suitable for multilingual and domain-specific applications. DeepSeek-LLM [25] is another promising open-source model, competing with top-tier models on benchmark performance. Models such as Mixtral and StableLM have been developed with efficiency and specific NLP task performance in mind, ensuring a balance between accuracy and computational demands. Recent developments have introduced models such as DeepSeek-R1, which focuses on advanced reasoning capabilities, and Qwen2.5-Max, which explores large-scale Mixture-of-Experts (MoE) models to enhance efficiency and performance. OpenAI's o3-mini pushes the frontier of cost-effective reasoning, balancing performance with computational efficiency. DeepSeek-V3, the first open-sourced GPT-4o-level model, represents a significant step forward in open-source LLM development. The models in Fig. 2 are explicitly designed for text-to-text tasks, meaning they take text input and generate text output across various NLP applications:

General-Purpose: GPT-4 [26], GPT-4o [27], Claude 3 [28], Gemini 1.5 [29], Command R+ [30], Mistral [31]. Versatile for a wide range of NLP tasks.
Open-Source: LLaMA 3 [32], Mistral [31], Mixtral [33], Falcon 180B [34], DeepSeek-V3 [35], OpenChat [36]. Customizable models for research applications.
Retrieval-Augmented: Command R+ [30], GPT-4 Bing [26], Claude 3 RAG [28], Gemini Search [29]. Enhance information retrieval accuracy, reduce misinformation.
Long-Text Processing: Claude 3 [28], Gemini 1.5 [29], GPT-4o [27], Grok-1.5 [37]. Specialized in processing long-form text.
Advanced Reasoning: GPT-4 [26], Claude 3 Opus [28], Gemini 1.5 [29], Mixtral [33], Grok-1.5 [37]. Logical reasoning, multi-task learning, problem-solving.
Ethically-Aligned: Claude 3 [28], Gemini [29], GPT-4 [26], Grok [37]. Emphasize safety, reduced bias, ethical AI.
Mixture-of-Experts (MoE): Mixtral [33], Qwen2.5-Max [38], GShard [39], Switch Transformer [40]. Improved efficiency and performance via MoE.
Multimodal: GPT-4o [27], Gemini 1.5 [29], Claude 3 [28], Kosmos-2 [41], Grok-1.5V [37]. Capable of processing both text and visual inputs.
Multilingual: BLOOM [42], XGLM [43], mBERT [44], XLM-R [45], ByT5 [46]. Optimized for cross-lingual understanding and translation.
Domain-Specific: Med-PaLM 2 [47], Galactica [48], Legal-BERT [49], StarCoder2 [50], Code LLaMA [51]. Tailored for specific tasks such as medical, legal, or code.
Instruction-Tuned: Alpaca [52], Vicuna [53], ChatGLM [54], Baize [55], OpenChat [36]. Tuned for better instruction-following and chat capabilities.
Lightweight / Edge: Phi-2 [56], TinyLLaMA [57], DistilGPT2 [58], GPT-2 [59], LLaMA 2 7B [60]. Designed for fast inference and deployment on edge devices.

Figure 2: Extended LLM Categories, Models, and Descriptions

2.2 Explainable AI (XAI)

The increasing complexity of machine learning models has made understanding how they make decisions more important than ever. Ensuring transparency, fairness, and reliability is crucial, so explainability has become a key focus in AI research [61, 14]. Explainability methods can be broadly categorised as intrinsic or post-hoc.
Intrinsic methods involve models that are inherently interpretable due to their simplicity, such as linear regression or decision trees [62]. These models are often preferred in applications where transparency is critical, but they may lack the predictive power of more complex models [9]. In contrast, post-hoc methods, such as Local Interpretable Model-agnostic Explanations (LIME) [13] and SHAP (Shapley Additive Explanations) [134], are applied after model training to explain the decisions of complex, often "black-box" models like neural networks and ensemble methods. These approaches provide valuable insights without altering the underlying model structure [63]. Post-hoc explainability methods can be classified into global and local explanations. Global explanations provide an overarching view of the model's decision-making patterns across an entire dataset, helping to identify trends and biases [62, 64]. In contrast, local explanations focus on specific predictions, providing a detailed breakdown of how individual inputs influence model outputs. LIME and its numerous enhancements fall under local explainability methods and are designed to provide granular, case-specific interpretations [13]. LIME has become one of the most widely used explainability techniques, but over time, researchers have introduced various improvements to address its limitations and expand its usefulness across different applications. As shown in Figure 3, enhancements to LIME can be grouped into four main categories. A key improvement in LIME's explanations is the refinement of the distance measures used
to weigh data points. For instance, SMILE uses statistical techniques to generate more consistent and reliable explanations. However, while these improvements enhance reliability, they also come with increased computational complexity. Another approach to improving LIME focuses on upgrading its simple linear models with more sophisticated alternatives. Methods such as Q-LIME [65], S-LIME [66], Bay-LIME, and ALIME use more advanced surrogate models to provide explanations that better reflect the underlying model's behaviour, capturing non-linear relationships that the original method might miss. Furthermore, researchers have explored optimisation strategies to make LIME more efficient and stable. Techniques like OptiLIME and G-LIME work to strike a balance between accuracy and computational cost, ensuring that explanations remain reliable while keeping the process computationally manageable. Recent research has extended LIME to support specific data types and improve its interpretability. Adaptations such as PointNet LIME for 3D point cloud data [67], TS-MULE for time series data [68], Graph LIME for graph-structured data [69], Sound LIME for audio data [70], and B-LIME for ECG signal data [71] demonstrate how LIME can be tailored to diverse applications. In addition, several modifications have been proposed to improve LIME's approach, including using alternative surrogate models, fine-tuning distance parameters, optimising sampling techniques, and refining computational efficiency to improve both interpretability and accuracy.

Figure 3: Key strategies for enhancing LIME, including improvements in sampling, distance parameters, surrogate models, and optimisation.

2.3 Explainable AI (XAI) for Large Language Models

Large Language Models (LLMs) have revolutionised natural language processing, enabling text generation, translation, and question answering tasks.
However, their opaque "black-box" nature raises concerns regarding interpretability, fairness, and trustworthiness [72, 73]. Explainability in LLMs is crucial for ensuring accountability, enhancing user trust, and mitigating biases. This literature review explores existing research on explainability methods, applications, and challenges in LLMs. The conceptual framework for explainability in LLMs includes four key dimensions: post-hoc explainability, mechanistic interpretability, human-centric explainability, and applications and challenges. These categories reflect the main approaches and practical considerations in enhancing the transparency and trustworthiness of LLMs.

Post-hoc explainability methods aim to interpret LLMs after training, providing insights into their decision-making processes [72, 74]. Zhao et al. [72] present a taxonomy of explainability techniques for LLMs, categorising approaches into local and global explanations and discussing limitations and future research directions. Similarly, Bender et al. [73] highlight the risks of opaque LLMs, including biases and misinformation, and call for increased transparency and accountability. Recent surveys further expand this field by reviewing how LLMs contribute to explainable AI at large [75, 76]. Amara et al. [77] recently introduced ConceptX, a concept-level attribution method highlighting semantically meaningful input phrases through a coalition-based Shapley framework, offering robust and human-aligned explanations.

Mechanistic interpretability seeks to uncover how LLMs process information internally [78, 79, 80]. Olah et al. [78] propose reverse-engineering techniques to dissect internal representations. Nanda et al. [79] examine how LLMs develop abstract representations and generalisation, introducing progress measures for interpretability research. Recent work by Anthropic [80] demonstrates the role of mechanistic interpretability in understanding and controlling model behaviour. Additionally, the SEER framework [81] introduces self-explainability mechanisms to enhance the interpretability of internal representations in LLMs. The CELL framework [82] further advances mechanistic interpretability by integrating concept-based explanations directly into the training and representation learning of language models. Recent advancements from the Transformer Circuits framework have further enriched mechanistic interpretability. Using causal scrubbing, Meng et al. [83] introduced Attribution Graphs to uncover emergent structures within neural networks, while Nanda et al. [84] investigated the scaling of monosemanticity, extracting interpretable features from state-of-the-art models. Additional recent contributions explore specific tasks such as relevance estimation [85], emotion inference [86], and geospatial reasoning [87]. These works leverage techniques like activation patching and sparse autoencoders to reveal LLM internals. Sparse interpretability methods are reviewed comprehensively in [88]. Neuroscientific perspectives have also been applied, using dynamical systems theory to model token-level trajectories within transformers [89].

Human-centric explainability focuses on making LLM outputs understandable to non-experts [90, 91, 92]. Ji et al. [90] investigate hallucination in LLM outputs and propose techniques for detecting and mitigating incorrect responses. Krause et al. [91] assess reasoning abilities compared to human performance, and Martens et al. [92] explore counterfactual explanations to improve human understanding. Recent works extend this perspective by enabling interactive explanations of black-box models through fine-tuned LLMs [93] and by benchmarking explanation quality in clinical domains [94]. These approaches improve the accessibility and transparency of LLM outputs in real-world decision-making.
Explainability enhances debugging, bias detection, regulatory compliance, and user trust in LLMs [95]. Challenges remain, including defining "meaning" in LLM-generated outputs [95], managing the ethical and societal implications of black-box models, and balancing performance with interpretability. Recent work has demonstrated SMILE's applicability beyond language, such as in instruction-based image editing [96], highlighting its modality-agnostic nature. New efforts address rare concept discovery via tailored sparse autoencoders [97] and propose hybrid architectures such as concept bottleneck models [98] and symbolic program compositions [99] to enable more transparent reasoning. Future research should emphasise standardised benchmarks, hybrid mechanistic and human-centric methods, and integrating explainability into AI governance frameworks.

3 Problem Definition

This section introduces the fundamental concepts and mathematical framework necessary to understand our approach. We begin by defining key notation and the problem setting, followed by a discussion of how we measure the impact of perturbations, the surrogate model used for approximation, and a theoretical justification for its fidelity [13, 100, 101]. Finally, we outline a structured workflow for implementing a Wasserstein-based LIME surrogate, ensuring interpretability in the context of large language models [102].

3.1 Notation and Problem Setting

Setting the Stage. In this study, we aim to develop a local surrogate model that explains the behaviour of a black-box model $\pi^{(n)}$ when provided with input text prompts. The basic notation is as follows:
• Input: $x$, the original prompt.
• Perturbations: $\{\hat{x}_j\}_{j=1}^{J}$, small modifications of $x$ used to explore the neighbourhood around the original input.
• Model outputs: $\pi^{(n)}(y \mid x)$, the model's probability distribution over possible outputs $y$ for the given input $x$.
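The perturbation set $\{\hat{x}_j\}$ can be generated, for example, by LIME-style word dropout. A minimal sketch (our own illustration; the paper does not prescribe this exact scheme) that also records the binary feature vectors used later by the surrogate:

```python
import random

def perturb(prompt, n_samples=8, seed=0):
    """Generate word-dropout perturbations of a prompt.

    Returns (z, texts): z[j] is a binary feature vector over the prompt's words
    (1 = word kept) and texts[j] is the corresponding perturbed prompt x̂_j.
    """
    rng = random.Random(seed)
    words = prompt.split()
    z, texts = [], []
    for _ in range(n_samples):
        keep = [rng.random() < 0.5 for _ in words]
        if not any(keep):                          # avoid the empty prompt
            keep[rng.randrange(len(words))] = True
        z.append([int(k) for k in keep])
        texts.append(" ".join(w for w, k in zip(words, keep) if k))
    return z, texts
```

Each row of `z` is the feature vector $z_j$ referenced in Section 3.5; the corresponding text is what gets fed to the black-box model.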
3.2 Input-Level Distance

To measure how different each perturbed input $\hat{x}_j$ is from the original prompt $x$, we compute the semantic distance using Word Mover's Distance (WMD) [103]:
$$\delta_{x_j} := \mathrm{WMD}(x, \hat{x}_j). \quad (1)$$
Equation 1 quantifies the semantic dissimilarity between $x$ and $\hat{x}_j$, ensuring that closer perturbations are treated as more relevant.

3.3 Output-Level Distribution Shift

To assess how the model's outputs change when the input is perturbed, we compute the Wasserstein distance between the output distributions for $x$ and $\hat{x}_j$ [101]:
$$\Delta(x, \hat{x}_j) := W\big(\pi^{(n)}(\cdot \mid x),\, \pi^{(n)}(\cdot \mid \hat{x}_j)\big). \quad (2)$$
Equation 2 captures the magnitude of the change in the model's behaviour due to input perturbations.

3.4 Weighting via Input Similarity

Since not all perturbations are equally meaningful, we prioritise those that are semantically closer to the original input. We define a Gaussian-based weighting scheme [13]:
$$w_j = \exp\!\left(-\frac{\delta_{x_j}^2}{\sigma^2}\right). \quad (3)$$
As shown in Equation 3, perturbations with smaller semantic distances $\delta_{x_j}$ receive higher weights, reflecting their greater relevance.

3.5 Fitting a Local Surrogate

Each perturbed input $\hat{x}_j$ is mapped to a feature vector $z_j \in \mathbb{R}^d$. The surrogate model $h_\theta$ adopts a linear form:
$$h_\theta(z_j) = \theta_0 + \theta^\top z_j. \quad (4)$$
The parameters $\theta$ are learned by minimising the weighted squared error between the surrogate's predictions and the observed distribution shifts:
$$\min_\theta \sum_j w_j \big(h_\theta(z_j) - \Delta(x, \hat{x}_j)\big)^2. \quad (5)$$
Equation 5 ensures that the surrogate closely approximates the black-box model's behaviour within the local neighbourhood around $x$.

3.6 Theoretical Justification (Lipschitz Smoothness)

Shift Function Definition. We formally define the shift function:
$$f(x, \hat{x}) := W\big(\pi^{(n)}(\cdot \mid x),\, \pi^{(n)}(\cdot \mid \hat{x})\big). \quad (6)$$
Assuming that $\pi^{(n)}$ is Lipschitz continuous with respect to its input, the shift function $f$ is also Lipschitz [104]:
$$|f(x, \hat{x}) - f(x', \hat{x}')| \leq L \cdot (\|x - x'\| + \|\hat{x} - \hat{x}'\|), \quad (7)$$
where $L > 0$ is a Lipschitz constant (Equation 7). This smoothness justifies the use of a local linear surrogate model to approximate $f$.
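Equations (3)-(5) together form a weighted least-squares problem with a closed-form solution. A minimal numpy sketch (our illustration; the function name and the use of the normal equations are our choices, not the paper's):

```python
import numpy as np

def fit_surrogate(Z, delta_x, delta_out, sigma=1.0):
    """Solve the weighted least-squares problem of Eq. (5) in closed form.

    Z         : (J, d) feature vectors z_j of the perturbations
    delta_x   : (J,) input-level distances δ_{x_j} (Eq. 1)
    delta_out : (J,) output-level shifts Δ(x, x̂_j) (Eq. 2)
    Returns (θ0, θ): intercept and per-feature coefficients.
    """
    w = np.exp(-(np.asarray(delta_x) ** 2) / sigma ** 2)   # Eq. (3) kernel weights
    A = np.hstack([np.ones((len(Z), 1)), np.asarray(Z)])   # prepend intercept column
    W = np.diag(w)
    # Weighted normal equations: (Aᵀ W A) θ = Aᵀ W Δ
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ np.asarray(delta_out))
    return theta[0], theta[1:]
```

The returned coefficients are the per-feature influence scores that the surrogate assigns within the local neighbourhood around $x$.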
Relation to Total Variation. The Kantorovich-Rubinstein duality [105] relates the Wasserstein distance and Total Variation (TV) distance:
$$W(p, q) \le \mathrm{diam}(\mathcal{Y}) \cdot \mathrm{TV}(p, q), \qquad (8)$$
where
$$\mathrm{TV}(p, q) = \frac{1}{2} \sum_{y \in \mathcal{Y}} |p(y) - q(y)|. \qquad (9)$$
Equations 8 and 9 indicate that small changes in the input resulting in small TV distances also imply bounded Wasserstein shifts.
3.7 Linear Surrogates Approximate Lipschitz Functions
Due to the Lipschitz property, any smooth function $f$ can be locally approximated by a linear model in a small neighbourhood $U$:
$$f(x') \approx f(x) + \nabla f(x) \cdot (x' - x). \qquad (10)$$
This principle, expressed in Equation 10, underlies the surrogate modelling approach adopted by LIME and related methods [13].
3.8 Putting It All Together
The complete Wasserstein-based LIME surrogate process consists of the following steps:
1. Generate perturbations $\{\hat{x}_j\}$ near $x$.
2. Compute semantic distances $\delta_{x_j}$ (Equation 1) and output-level shifts $\Delta(x, \hat{x}_j)$ (Equation 2).
3. Assign weights $w_j$ using Equation 3.
4. Extract feature vectors $z_j$.
5. Fit a local surrogate model $h_\theta$ by minimising the loss (Equation 5).
6. Use $h_\theta$ to interpret how input perturbations affect the model's output distribution.
This framework provides a rigorous, interpretable method for understanding how small changes to an input affect the outputs of complex models, particularly in applications involving large language models [102, 13, 101, 105].
4 PROPOSED METHOD
The proposed approach enhances the interpretability of large language models for text generation by analysing how specific textual inputs influence
the generated outputs. Understanding the interaction between input prompts and generated text improves transparency and predictability in the model's behaviour [8, 102]. In the context of classification tasks, Fig. 4 illustrates SMILE, a tool designed to explain model predictions by isolating the critical features that drive decision-making [12]. SMILE segments an input (e.g., an image) into meaningful components: "super-pixels" for images or logical sections for text, and creates perturbations by selectively modifying these components. These perturbations reveal how different input elements contribute to the final prediction.
Figure 4: SMILE flowchart for explaining image classification [12]
SMILE records the model's predictions for each perturbed input and calculates similarity scores between the original and modified inputs using metrics such as the Wasserstein distance [100, 101]. These scores are transformed into weights using a kernel function, enabling a balanced weighting scheme. The results are used to train a weighted linear regression model that identifies the most influential input components. Inspired by SMILE, we propose a similar method for interpreting prompt-based text generation models, as demonstrated in Fig. 5. Instead of perturbing the generated image, as in SMILE, we modify the instruction by including or excluding certain words. We generate a corresponding text output for each altered prompt and compute the Wasserstein distance between these outputs and the one produced by the original prompt [103]. This distance serves as a similarity measure, allowing us to identify which words in the prompt have the most significant impact on the generated text. These similarity scores are then used as the outcome variable in a weighted linear regression model, with weights derived from the text distances and perturbations [106]. This regression model helps determine how each word affects the generated text.
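The word-level inclusion/exclusion perturbation step described above can be sketched as follows. This is a minimal stdlib illustration with a random masking scheme of our own; the paper does not prescribe a particular sampler.

```python
import random

def perturb_prompt(prompt, n_samples, seed=0):
    """Create perturbed prompts by randomly dropping words from the original."""
    rng = random.Random(seed)
    words = prompt.split()
    samples = []
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in words]
        if not any(mask):                        # keep at least one word
            mask[rng.randrange(len(words))] = True
        kept = " ".join(w for w, m in zip(words, mask) if m)
        samples.append((mask, kept))
    return samples

perts = perturb_prompt("what is the meaning of life", n_samples=4)
```

Each binary mask doubles as the one-hot feature vector of the perturbation, which later serves as the regressor in the surrogate model.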
The coefficients derived from this model quantify the influence of each word, which we use to create a visual heatmap highlighting the most influential words in the prompt. This heatmap provides an intuitive visual representation of the elements of the prompt that contribute most to the generated output. Using this enhanced three-step process, which builds on SMILE's methodology [12], we aim to help users understand how particular elements of text prompts impact text generation. This technique improves user control and predictability in text generation models by providing insight into which elements of the textual input, such as stylistic instructions or descriptive terms, most affect the final output. The first step employs a text generation model based on input text. The model receives the original text input and generates a corresponding output; only the input text changes each time the model renders the output. The text generation model is expected to produce responses similar to the original response that vary only based on critical components of the input texts [107]. Following the breakdown of the prompt into individual words, different permutations of those words are used to create the various text perturbation blocks. The instruction-based text generation model then generates text outputs based on both the
original input and the perturbed texts [108]. While the text generation model faces challenges in tasks such as counting objects and spatial reasoning, these issues can be mitigated by carefully selecting perturbation texts and restricting the input domain for perturbations [109]. The text generation models enable the proposed method to create responses that differ from the original text solely in terms of keywords related to the perturbation texts. We use the Wasserstein distance to quantify these differences, comparing the distributions of both text embeddings [110, 100]. This metric is employed because it provides a more comprehensive measure of the differences between text embeddings than other ECDF-based distances, such as Kolmogorov–Smirnov or Cramér–von Mises. While those methods focus on specific aspects of the distributions, the Wasserstein distance captures both broad and subtle changes in the data [110, 101]. Finally, by applying linear regression, the space of input prompts is linked to the output space of the text generation model. As a result, the model's behaviour can be interpreted by examining the regression coefficients [111, 106].
4.1 Text Generation and Perturbed Prompts
To begin, we generate variations of the original text prompt, known as "perturbed prompts", to examine how these subtle changes affect the output of a text generation model [112, 113]. The original text is first broken down into individual words. Multiple text versions are then created by selectively including or excluding specific words. Each perturbed prompt is then provided to the text generation model, which produces a unique output for each variation. In this process, the output generated from the original prompt is the reference, or baseline output, against which all other outputs are compared. For the original text prompt $p_{org}$, the text generation function $\phi_{gen}$ generates a corresponding output text $t'_{org}$, as shown in Eq.
11:
$$t'_{org} = \phi_{gen}(p_{org}) \qquad (11)$$
Similarly, for each perturbed prompt $p^i_{pert}$ within the set $S_{pert}$, the text generation model produces a corresponding output text $t'^{i}_{pert}$, according to Eq. 12:
$$t'^{i}_{pert} = \phi_{gen}(p^i_{pert}), \quad \forall i \in S_{pert}. \qquad (12)$$
Here, $p_{org}$ represents the original prompt, $p^i_{pert}$ denotes a perturbation generated from the original text, and $\phi_{gen}$ is the text generation model function used to produce outputs $t'$ that reveal the influence of prompt variations. The set of all perturbed prompts is denoted by $S_{pert}$ [13, 108].
4.2 Creating the Interpretable Space
In order to map the output space of the text generation model to a one-dimensional space, the Wasserstein distance between each text generated from a perturbed prompt and the baseline output of the original prompt is computed as a measure of similarity [100, 101]. In all cases, the input prompt remains consistent. Word Mover's Distance (WMD) is employed to extract meaningful embeddings from the texts, allowing the focus to remain on the primary semantic components of the generated text [103]. This reduces noise in the distance estimation, enabling the accurately calculated distance to serve as the response variable for the linear regression model. The distance between the outputs generated by the perturbation texts and the original prompt is given by Eq. 13:
$$W_i(t'_{org}, t'^{i}_{pert}) = \left( \frac{1}{n} \sum_{j=1}^{n} \left| \delta_{WMD}(t'^{j}_{org}) - \delta_{WMD}(t'^{i,j}_{pert}) \right|^{p} \right)^{1/p}, \quad \forall i \in S_{pert}. \qquad (13)$$
In the above relation, the $\delta_{WMD}$ function represents the embedding extraction model applied to the texts before calculating the distance [103]. In Eq. 13, $n$ represents the number of features, $p$ denotes the norm order, and $W_i(t'_{org}, t'^{i}_{pert})$ denotes the Wasserstein distance between the outputs produced by the original and perturbed prompts. Using p-values is crucial for validating the significance of the observed Wasserstein distances ($WD$) between texts generated under different prompt perturbations. By calculating a p-value, we test whether the observed $WD$ could have occurred by chance, determining whether the perturbation-induced changes are meaningful [114]. To compute this p-value, we use the Bootstrap algorithm [115, 116], which estimates the probability of observing a $WD$ as large as the observed one. This involves generating distributions of $WD$s through repeated sampling and comparing each sampled $WD$ with the observed value. If only a small proportion of sampled $WD$s exceed the observed $WD$, the resulting p-value confirms that the differences are statistically significant, supporting the validity of the perturbation effects. The Wasserstein-$p$ metric allows us to adjust $WD$'s sensitivity to different text variations by fine-tuning the norm order $p$. Smaller $p$ values emphasise local differences, while larger $p$ values capture broader distributional shifts [101]. By testing various $p$ values, we identify the norm that best captures meaningful variations caused by prompt changes, enhancing the reliability and interpretability of the results [101]. When analysing Wasserstein distances, measures with high p-values, indicating non-significant differences, are excluded from the procedure [114]. As described in Algorithm 1, the Bootstrap algorithm calculates these p-values for all relevant ECDF-based distance measures, including $WD$. The algorithm computes the $WD$ and its associated p-value. For a univariate example, let $X$ and $Y$ be the inputs.
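The bootstrap test of Algorithm 1 can be written as a short stdlib-only Python routine. The 1-D Wasserstein distance below is computed by integrating the ECDF difference; this helper and the function names are our own, not the paper's implementation.

```python
import bisect
import random

def wasserstein_1d(x, y):
    """W1 distance between two empirical 1-D samples via ECDF integration."""
    xs, ys = sorted(x), sorted(y)
    grid = sorted(xs + ys)
    dist = 0.0
    for a, b in zip(grid, grid[1:]):
        fx = bisect.bisect_right(xs, a) / len(xs)  # ECDF of x on [a, b)
        fy = bisect.bisect_right(ys, a) / len(ys)  # ECDF of y on [a, b)
        dist += abs(fx - fy) * (b - a)
    return dist

def bootstrap_pvalue(x, y, max_itr=10_000, seed=0):
    """Observed WD and the proportion of pooled resamples that exceed it."""
    rng = random.Random(seed)
    wd = wasserstein_1d(x, y)
    pool = list(x) + list(y)
    bigger = 0
    for _ in range(max_itr):
        e = [rng.choice(pool) for _ in range(len(x))]  # resample of size |X|
        f = [rng.choice(pool) for _ in range(len(y))]  # resample of size |Y|
        if wasserstein_1d(e, f) > wd:
            bigger += 1
    return wd, bigger / max_itr

wd, pval = bootstrap_pvalue([0.0] * 20, [1.0] * 20, max_itr=500)
```

For two well-separated samples, as here, no pooled resample can exceed the observed distance, so the estimated p-value is zero and the observed shift is judged significant.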
The Bootstrap algorithm performs $10^5$ iterations, where $XY$ is the concatenated set of $X$ and $Y$. In each iteration, two random samples are drawn from $XY$, and their Wasserstein distance ($boostWD$) is computed. If $boostWD$ exceeds the observed $WD$, a counter ($bigger$) is incremented. After completing all iterations, the p-value ($pVal$) is calculated as the proportion of iterations in which $boostWD$ is greater than the observed $WD$, as shown in Algorithm 1.
Algorithm 1: Bootstrap Algorithm for Wasserstein Distance (WD) P-Value Calculation
Input: Two sets of text-based distances: X and Y
Output: Observed Wasserstein Distance (WD) and corresponding p-value (pVal)
1  MaxItr ← 10^5;
2  WD ← Wasserstein_Dist(X, Y);
3  XY ← concatenate X and Y;
4  LX, LY ← length of X, length of Y;
5  n ← LX + LY;
6  bigger ← 0;
7  for i ← 1 to MaxItr do
8      Draw a random sample e of size LX from XY;
9      Draw a random sample f of size LY from XY;
10     boostWD ← Wasserstein_Dist(e, f);
11     if boostWD > WD then
12         bigger ← bigger + 1;
13     end
14 end
15 pVal ← bigger / MaxItr;
16 return WD, pVal;
Figure 5: SMILE flowchart for Large Language Models
When generating perturbation texts derived from the original text, the keywords of the original text play a crucial role in maintaining certain levels of similarity and fidelity between the perturbation texts and the source content. This is important when the interpretable model is trained on perturbed texts as input. Therefore, a similarity criterion is
used to measure the degree of similarity between each perturbation text and the original, and this value is employed as a sample weight in the interpretable model. Eq. 15 calculates this similarity, and a Gaussian kernel is applied to normalise it into the range $[0, 1]$ [103].
$$\mathrm{WMD}(p_{org}, p^i_{pert}) = \min_{T \ge 0} \sum_{k=1}^{n} \sum_{l=1}^{m} T_{kl}\, d(w_k, w_l)$$
$$\text{subject to} \quad \sum_{l=1}^{m} T_{kl} = p_k, \; \forall k \in S_{org}; \qquad \sum_{k=1}^{n} T_{kl} = q_l, \; \forall l \in S_{pert}; \qquad T_{kl} \ge 0 \qquad (14)$$
$$\pi_i(p_{org}, p^i_{pert}) = \exp\!\left( -\left( \frac{C_i(p_{org}, p^i_{pert})}{\sigma^2} \right)^{2} \right), \quad \forall i \in S_{pert} \qquad (15)$$
The function $\mathrm{WMD}(p_{org}, p^i_{pert})$ quantifies the semantic distance between the perturbation text and the original. The resulting weight $\pi_i(p_{org}, p^i_{pert})$ is obtained by applying a Gaussian kernel to this distance [106], allowing the surrogate model to assign higher importance to more semantically similar inputs.
4.3 Developing the Interpretable Surrogate Model
A novel aspect of our approach is the mapping of the output space of the text generation model to a one-dimensional distance scale, accompanied by the corresponding perturbation texts for the calculated distances [117]. This method enables us to train an interpretable model on the mapped output space of the text generation model, effectively demonstrating how input words influence text generation. In our linear regression model, the vectors of the perturbation texts are critical, as they are treated as the independent variables. Following the extraction of text embeddings, the distance between the outputs of these perturbation texts and the corresponding original text output is treated as the response variable. Additionally [118], the degree of similarity between each perturbation text and the original text serves as the sample weight [102]. Eq.
16 outlines the squared-loss objective function for the interpretable weighted linear regression:
$$\mathrm{Loss}(\phi_{gen}, \delta_{WMD}, p_{org}) = \frac{1}{n} \sum_{i=1}^{n} \pi_i(p_{org}, p^i_{pert}) \times \left( W_i(t'_{org}, t'^{i}_{pert}) - \hat{f}_t(p^i_{pert}) \right)^{2} \qquad (16)$$
Here, $\phi_{gen}$ represents the text generation model, and $\delta_{WMD}$ is the Word Mover's Distance function applied to compute similarity between generated outputs. The weight $\pi_i(p_{org}, p^i_{pert})$ is derived using a Gaussian kernel function and reflects the degree of similarity between the original prompt and each perturbed prompt. The term $\hat{f}_t(p^i_{pert})$ is the predicted value from the linear regression model. After feature extraction, $\hat{f}_i(p^{i,01}_{pert})$ is a linear regression model trained over the one-hot vector space of the perturbation texts ($p^{i,01}_{pert}$) and the calculated distances [102].
4.4 Evaluation Metrics
In our investigation, we adopt a suite of evaluation metrics inspired by foundational work provided by Google [119], emphasising the multifaceted nature of assessing explainable models. This work highlights the significance of metrics such as accuracy, stability, fidelity and consistency as essential tools for a rigorous evaluation of model behaviour, particularly when comparing explainable models to traditional black-box models. The adoption of these metrics provides a structured methodology to dissect and understand model reliability in a more holistic manner [119].
Figure 6
4.4.1 Accuracy
Accuracy measures how well the model's output aligns with the expected results (ground truth). Specifically, we compare the model's attribution scores (attention to text elements) against ground-truth labels that identify the most relevant text elements for specific generated text segments [120]. To quantify this, we use the
Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) [121]. AUC reflects the model's ability to rank relevant elements (from the ground truth) higher than irrelevant ones:
• AUC ≈ 1: The model effectively distinguishes relevant elements, closely matching the human-identified ground truth.
• AUC ≈ 0.5: The model's ranking is random, showing poor alignment with the ground truth.
For example, Fig. 6 shows that the ground truth identifies the words "meaning" and "life" as the most relevant text elements for generating the corresponding text output. The metric AttAUC evaluates the model's alignment with the ground truth, where:
• AttAUC = 1.0: perfect alignment.
• Lower AttAUC (e.g., 0.8): less accurate [122].
The heatmaps produced by the model visually represent these attention scores, as shown in Fig. 6, where darker red shades indicate higher attribution to specific text elements. An accurate model should correctly assign higher attention to the text elements that most influence the generated text output, improving the interpretability and reliability of the model [120].
4.4.2 Stability
Stability in an explainable model refers to its ability to provide consistent explanations even when minor changes are made to the input data [123]. In other words, small perturbations in the input should not significantly affect the model's predictions or explanations. Stability is a key property for fostering trust in explainable artificial intelligence (XAI) systems [14]. It reassures users that the model's interpretations are not arbitrary or overly sensitive to irrelevant variations [15]. In the example text, the attention scores are visualised for two slightly different text prompts. The heatmaps show how the model assigns attention to each word. A stable model, as demonstrated in Fig. 6, assigns similar importance to the words "make" and "rainy" in both prompts, even though a token was added in one case.
This consistency is critical for ensuring the model behaves predictably across minor input variations [16]. Without such stability, the explanations generated by the model could lead to confusion or misinterpretation, especially in high-stakes scenarios such as medical diagnosis or financial decision-making [13]. We use the Jaccard index to quantify stability, which measures the similarity between two sets of model predictions. For two sets $A$ and $B$, the Jaccard index is calculated as follows in Eq. 17:
$$\mathrm{Jaccard}(A, B) = \frac{|A \cap B|}{|A \cup B|} \qquad (17)$$
This metric reflects the proportion of shared elements between the sets, where values closer to 1 indicate greater stability and similarity in explanations [123]. A higher Jaccard index confirms the model's ability to yield stable outputs, thus providing consistent and reliable interpretations [6]. By ensuring stability, we enhance not only the robustness of the model but also its usability and credibility in real-world applications [14].
4.4.3 Consistency
In evaluating the consistency of our model, we assess whether it produces stable and reliable outputs when the same input is provided multiple times. Consistency ensures that the model behaves predictably, reinforcing trust in its outputs and usability across diverse scenarios [13]. As shown in Fig. 6, when the input phrase "what is the meaning of life?" was fed into the model multiple times, the output remained stable across iterations.
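The Jaccard computation of Eq. 17 amounts to a few lines; the example sets below are illustrative (top-weighted words from two runs of the explainer on near-identical prompts), not taken from the experiments.

```python
def jaccard(a, b):
    """Eq. (17): overlap of two explanation sets (e.g., top-ranked words)."""
    a, b = set(a), set(b)
    if not (a | b):
        return 1.0  # convention: two empty explanations are identical
    return len(a & b) / len(a | b)

# Two of three top words agree across the perturbed runs -> 2 / 4 = 0.5
score = jaccard({"make", "rainy", "day"}, {"make", "rainy", "night"})
```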
This consistency is crucial for applications where repeatable and reliable results are required. In our case, the model consistently assigns higher weights to the words "meaning" and "life", demonstrating that it understands the importance of these words in generating the desired output [6]. Such behaviour underscores the model's ability to focus on the semantic core of the input, a trait essential for maintaining coherence in practical use cases [15]. In practical applications, such as automated content generation, interactive systems, or scenario simulations, users depend on models to produce results that do not fluctuate unexpectedly. Any inconsistency could undermine user trust and compromise the system's overall effectiveness. Our model enhances user confidence by demonstrating consistent outputs and establishes a foundation for further optimisation and integration into more complex systems. This ability to maintain consistency across iterations directly contributes to the robustness and scalability of the model, making it well suited for deployment in dynamic real-world environments [14].
4.4.4 Fidelity
Fidelity is a metric to evaluate how well an explainable model aligns with a black-box model. The predictions of the black-box model, denoted as $f(x_i)$, and of the explainable model, $g(x_i)$, for the $i$-th data perturbation are compared. Various loss functions and coefficient metrics quantify the similarity between the models' predictions, measuring how effectively the explainable model approximates the black-box model. The coefficient of determination $R^2$ is an accuracy-based metric that measures how well the explainable model correlates with the black-box model [124], and is defined in Eq. 18:
$$R^2 = 1 - \frac{\sum_{i=1}^{N_p} \big(f(X_i) - g(X_i)\big)^2}{\sum_{i=1}^{N_p} \big(f(X_i) - \bar{f}(X_i)\big)^2} \qquad (18)$$
where $\bar{f}(X_i)$ represents the average of $f(X_i)$. Values of $R^2$ close to 1 indicate a strong alignment between the two models.
For scenarios where weighting is necessary, the weighted coefficient of determination $R^2_w$ provides an alternative [16, 125], defined in Eq. 19:
$$R^2_w = 1 - \frac{\sum_{i=1}^{N_p} \big(f(X_i) - g(X_i)\big)^2}{\sum_{i=1}^{N_p} \big(f(X_i) - \bar{f}_w(X_i)\big)^2} \qquad (19)$$
where $\bar{f}_w(X_i)$ represents the weighted average of $f(X_i)$. These accuracy-based metrics, $R^2$ and $R^2_w$, offer insight into how well the explainable model approximates the black-box model. To account for both the sample size and the number of features used in the local explanation, the weighted adjusted coefficient of determination $\hat{R}^2_w$ [16] is used, as shown in Eq. 20:
$$\hat{R}^2_w = 1 - (1 - R^2_w)\,\frac{N_p - 1}{N_p - N_s - 1} \qquad (20)$$
To measure the error, or risk, between the models' predictions, error-based metrics including the Weighted Mean Squared Error (WMSE) and Weighted Mean Absolute Error (WMAE) are used [126]. These metrics quantify the weighted differences between the predictions of the explainable model and those of the black-box model; lower values indicate better alignment. WMSE and WMAE are defined in Eqs. 21 and 22:
$$\mathrm{WMSE} = \frac{\sum_{i=1}^{N_p} w_i \,\big(f(X_i) - g(X_i)\big)^2}{\sum_{i=1}^{N_p} w_i} \qquad (21)$$
$$\mathrm{WMAE} = \frac{\sum_{i=1}^{N_p} w_i \,\big|f(X_i) - g(X_i)\big|}{\sum_{i=1}^{N_p} w_i} \qquad (22)$$
Here, $w_i$ denotes the weight for each data point $i$, $f(X_i)$ represents the predictions from the black-box model, $g(X_i)$ represents the predictions from the explainable model, and $N_p$ is the total number of data points. Additionally, the mean $L_1$ and mean $L_2$ losses, defined below, are commonly used to assess prediction discrepancies [127, 16], as shown in Eqs. 23 and 24:
$$L_1 = \frac{1}{N_p} \sum_{i=1}^{N_p} \big|f(X_i) - g(X_i)\big| \qquad (23)$$
$$L_2 = \frac{1}{N_p} \sum_{i=1}^{N_p} \big(f(X_i) - g(X_i)\big)^2 \qquad (24)$$
These error-based metrics, including MSE, MAE, $L_1$, and $L_2$, quantify the differences between the black-box and explainable
models' predicted scores, reflecting the risk of prediction discrepancies. We compute fidelity across various scenarios. Fidelity is measured by comparing the predictions of the explainable model to those of the black-box text generation model. We analyse how different perturbations of the input text affect fidelity scores, and how various distance metrics for text-to-text comparison perform when using linear regression and Bayesian ridge regression as surrogate models.
5 Experimental Results
This section evaluates the proposed method's ability to enhance explainability in instruction-based text generation. We analyse how the structure of textual prompts affects interpretability, how variations in the input text influence the clarity of explanations, and how the proposed solution performs across diverse scenarios. Experiments were conducted using a range of datasets and three state-of-the-art text generation models, OpenAI GPT [1], LLaMA [5], and Claude-AI [128], to ensure a comprehensive assessment.
5.1 Qualitative Results
The qualitative evaluation demonstrates the proposed framework's interpretability using scenario-based testing and visualisations. We designed diverse testing scenarios by varying textual prompts, including changes in sentence complexity, rephrasing, and the inclusion of domain-specific terminology. These scenarios examine how the model interprets and generates outputs based on nuanced variations in textual inputs.
5.1.1 Input Prompt Explainability
The proposed framework applies scenario-based testing across multiple prompt variations to evaluate the interpretability of the model's behaviour for different inputs. These variations include rephrased questions, descriptive modifiers, and changes in word choice. For each input, the system generates a text heatmap highlighting individual words' contributions to the model's output, with colours indicating the importance scores.
Figure 7 shows how varying the input prompt's phrasing or focus leads to different word-importance patterns, demonstrating the model's sensitivity to linguistic changes.
Figure 7: Text heatmaps showing word-level importance for various input prompts. The colour intensity represents the influence of each word on the model's output.
5.1.2 Reasoning Path and Token Attribution
Beyond token-level importance visualisations, the proposed framework provides structured reasoning paths that illustrate how the model logically connects input elements to produce its response. Figure 8 presents an example where the SMILE method not only assigns attribution scores to individual tokens but also generates a reasoning flow diagram. This combined visualisation offers a holistic understanding of local word-level contributions and the logical structure of the model's decision-making. Such comprehensive visual explanations align with advanced techniques used by leading AI research organisations, including Anthropic and OpenAI, while providing greater flexibility across diverse tasks and models. As shown in Figure 8, SMILE's explainability aligns with or exceeds the interpretability achieved by leading industry methods [129].
Figure 8: Comparison between attribution visualisations by Anthropic's Attribution Graphs and the SMILE framework. The upper section shows the reasoning path identified by Anthropic's method, while the lower section illustrates SMILE's token-level importance heatmap for the same prompt. [129]
5.1.3 Word-Level Attribution Analysis on MMLU Using SMILE
To evaluate the interpretability of ChatGPT using the SMILE framework, we used examples from the MMLU (Massive Multitask Language Understanding) dataset [130]. This benchmark contains four-choice multiple-choice questions covering various academic and professional subjects,
including science, history, law, and the humanities. For this study, we accessed a pre-formatted version of the dataset available on Kaggle [131] to facilitate prompt construction and experimentation. Figure 9 shows a set of attribution heatmaps that visualise which parts of the input prompt most influenced the model's decision. Darker red tones indicate higher influence, while neutral or irrelevant tokens are shaded lighter. These visualisations demonstrate SMILE's ability to reveal the model's internal reasoning by tracing the most impactful words in multiple-choice question contexts.
Figure 9: Attribution heatmaps generated by SMILE using ChatGPT (GPT-4) on selected samples from the MMLU dataset. Highlighted words represent those with the highest influence on the model's output.
5.2 Quantitative Results
In this part, we apply several evaluation metrics, including accuracy, stability, consistency, fidelity, and computational complexity, to assess our explainability method across different text generation models and scenarios.
5.2.1 Accuracy
To evaluate the performance of instruction-based text generation models, we tested ten different prompts and their variations through 64 perturbations. Heatmaps were generated to display the weights of each keyword in the prompts across various models, including OpenAI GPT, LLaMA, and Claude-AI. A ground truth was defined, assigning weights of 1 to critical keywords in the prompts and 0 to less relevant or auxiliary words. These ground-truth values were compared with the extracted heatmaps to assess the interpretability and accuracy of each model. To quantify accuracy, we employed multiple metrics, including Attention Accuracy (ATT ACC), F1-Score for Attention (ATT F1), and Area Under the Receiver Operating Characteristic Curve for Attention (ATT AUROC).
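The ATT AUROC score can be computed directly from attribution scores and binary ground-truth labels using the pairwise-ranking formulation of AUC. This is a stdlib sketch with illustrative values; in practice a library routine such as scikit-learn's roc_auc_score serves the same purpose.

```python
def att_auroc(scores, labels):
    """AUC: probability that a relevant token (label 1) outranks an
    irrelevant one (label 0), counting ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Attribution weights for "what is the meaning of life"; the ground truth
# marks "meaning" and "life" as the relevant tokens.
auc = att_auroc([0.05, 0.02, 0.03, 0.9, 0.01, 0.8], [0, 0, 0, 1, 0, 1])
```

Here every relevant token outranks every irrelevant one, so the score is 1.0; a random ranking would hover around 0.5.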
The results were averaged across the ten prompts, and each model's performance is summarised in Table 1, showcasing their effectiveness based on these metrics.
Table 1: Performance metrics (ATT ACC, ATT F1, and ATT AUROC) for various models, evaluated across different prompts.
Model name    ATT ACC    ATT F1    ATT AUROC
OpenAI-GPT    0.70       0.59      0.84
LLaMA         0.76       0.40      0.84
Claude-AI     0.82       0.67      0.88
5.2.2 Stability
To quantify stability, we use the Jaccard index, a metric that measures the similarity between sets by comparing their overlap. For text generation models such as OpenAI GPT, LLaMA, and Claude-AI, we tested ten different prompts and extracted the coefficients and weights assigned to each word. To evaluate stability, we appended a token at the end of each prompt as input text, extracted the coefficients again, and computed the Jaccard index to compare the sets of coefficients before and after the perturbation. Finally, we calculated the average Jaccard index for each model; the results are presented in Table 2.
Table 2: Stability across different models for ten prompts and 30 perturbations
Model name    Jaccard Index
OpenAI-GPT    0.62
LLaMA         0.45
Claude-AI     0.44
5.2.3 Consistency
To compute consistency, we ran the same code with 64 perturbations for each prompt over 10 iterations, ensuring that the outputs remained consistent across these repetitions for text generation models such as OpenAI GPT, LLaMA, and Claude-AI, as shown in Table 3. We computed variance and
standard deviation metrics for each word coefficient. This step is essential to confirm that the model's predictions are not random or heavily dependent on initialisation factors but are instead deterministic and robust.
Table 3: Consistency metrics for different models on the prompt "What is the meaning of life?"
Model name    Variance    Standard Deviation
OpenAI-GPT    0.0000      0.0046
LLaMA         0.0048      0.0663
Claude-AI     0.0011      0.0331
5.2.4 Fidelity for Different Text Generation Models
The fidelity computation for various text generation models applied to the prompt "What is the meaning of life?" is displayed below. We compare several models, including OpenAI GPT, LLaMA, and Claude-AI, using 32 perturbations. The comparisons use a weighted linear regression as the surrogate model and the Wasserstein distance to compute distances, across metrics such as WMSE, $R^2_\omega$, WMAE, mean $L_1$, and mean $L_2$ losses. The results are presented in Table 4.
Table 4: Fidelity metrics for different models on the prompt "What is the meaning of life?"
Model name    WMSE      R²_ω      WMAE      mean-L1    mean-L2    R²_ω̂
OpenAI-GPT    0.0388    0.7104    0.1731    0.2035     0.0609     0.6409
LLaMA         0.0368    0.7068    0.1617    0.2387     0.0805     0.6364
Claude-AI     0.0265    0.6967    0.1251    0.1981     0.0708     0.6240
5.2.5 Fidelity Across Different Numbers of Text Perturbations
Tables 5, 6, and 7 present the results of fidelity computations for varying numbers of perturbations for the OpenAI GPT, LLaMA, and Claude-AI models, respectively. These results are derived using a weighted linear regression as the surrogate model and the Wasserstein distance for computing distances. The evaluation metrics include weighted mean squared error (WMSE), weighted mean absolute error (WMAE), coefficients of determination ($R^2_\omega$ and $R^2_{\hat\omega}$), and average loss metrics such as mean-L1 and mean-L2.
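The fidelity metrics of Eqs. 19 and 21-24 can be sketched as follows, following the summation forms as stated in the paper (note that Eq. 19 uses unweighted residual sums with a weighted mean of $f$); the function and variable names are our own.

```python
def fidelity_metrics(f, g, w):
    """Weighted fidelity between black-box predictions f and surrogate g."""
    n, sw = len(f), sum(w)
    sq = [(fi - gi) ** 2 for fi, gi in zip(f, g)]
    ab = [abs(fi - gi) for fi, gi in zip(f, g)]
    wmse = sum(wi * s for wi, s in zip(w, sq)) / sw           # Eq. (21)
    wmae = sum(wi * a for wi, a in zip(w, ab)) / sw           # Eq. (22)
    fbar_w = sum(wi * fi for wi, fi in zip(w, f)) / sw        # weighted mean of f
    r2_w = 1 - sum(sq) / sum((fi - fbar_w) ** 2 for fi in f)  # Eq. (19)
    mean_l1 = sum(ab) / n                                     # Eq. (23)
    mean_l2 = sum(sq) / n                                     # Eq. (24)
    return {"WMSE": wmse, "WMAE": wmae, "R2_w": r2_w,
            "mean_L1": mean_l1, "mean_L2": mean_l2}

# Sanity check with a perfectly faithful surrogate (g == f)
m = fidelity_metrics(f=[0.2, 0.5, 0.9], g=[0.2, 0.5, 0.9], w=[0.5, 1.0, 0.8])
```

A perfectly faithful surrogate yields zero error metrics and a weighted coefficient of determination of 1, which matches the interpretation of the tables below.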
Table 5: Performance metrics for different numbers of perturbations in OpenAI GPT
#Perturb    WMSE      R²_ω      WMAE      mean-L1    mean-L2    R²_ω̂
32          0.0388    0.7104    0.1731    0.2035     0.0609     0.6409
64          0.0617    0.3406    0.1695    0.1756     0.0526     0.2712
128         0.0329    0.4468    0.1385    0.1519     0.0378     0.4193
256         0.0116    0.4276    0.0675    0.0872     0.0163     0.4138
Table 6: Performance metrics for different numbers of perturbations in LLaMA
#Perturb    WMSE      R²_ω      WMAE      mean-L1    mean-L2    R²_ω̂
32          0.0368    0.7068    0.1617    0.2387     0.0805     0.6364
64          0.0240    0.4844    0.1284    0.1433     0.0382     0.4301
128         0.0085    0.2430    0.0638    0.0612     0.0096     0.2055
256         0.0058    0.2387    0.0605    0.0667     0.0118     0.2203
Table 7: Performance metrics for different numbers of perturbations in Claude-AI
#Perturb    WMSE      R²_ω      WMAE      mean-L1    mean-L2    R²_ω̂
32          0.0368    0.7209    0.1495    0.2262     0.0733     0.6539
64          0.0115    0.3708    0.0564    0.1036     0.0370     0.3046
128         0.0089    0.1506    0.0535    0.0920     0.0192     0.1085
256         0.0191    0.2670    0.0902    0.0873     0.0163     0.2494
5.2.6 Fidelity for Different Distance Metrics and Surrogate Models
Fidelity is also computed by comparing various distance metrics for text generation models such as OpenAI GPT using text-to-text comparisons. The surrogate models used are Weighted Linear Regression (WLR) and Bayesian Ridge (BayLIME), as shown in Table 8. We explore different combinations of distance metrics, including Cosine similarity and Wasserstein Distance (WD). Fidelity is measured using WMSE, $R^2_\omega$, WMAE, mean $L_1$, and mean $L_2$ losses.
Table 8: Fidelity results for different distance measures with 30 perturbations
WLR
https://arxiv.org/abs/2505.21657v1
T vs T     T vs T    WMSE     R²_ω     WMAE     mean-L1   mean-L2   R²_ω̂
Cosine     Cosine    0.0172   0.3151   0.0659   0.1277    0.0412    0.1508
Cosine     WD        0.0216   0.4197   0.0899   0.1332    0.0385    0.2805
WD         WD        0.0388   0.7104   0.1731   0.2035    0.0609    0.6409
WD         Cosine    0.0048   0.4026   0.0329   0.0871    0.0296    0.2593
WD+C       WD+C      0.0349   0.5349   0.1589   0.3050    0.1468    0.4233

BayLIME
T vs T     T vs T    WMSE     R²_ω     WMAE     mean-L1   mean-L2   R²_ω̂
Cosine     Cosine    0.0200   0.2285   0.0499   0.1067    0.0464    0.0434
Cosine     WD        0.0241   0.3572   0.0736   0.1281    0.0430    0.2029
WD         WD        0.0118   0.5837   0.0629   0.1170    0.0361    0.4838
WD         Cosine    0.0058   0.5273   0.0459   0.0978    0.0304    0.4139
WD+C       WD+C      0.0048   0.4227   0.0284   0.0690    0.0213    0.2841

5.2.7 Computational complexity

We apply our model to three instruction-based text generation frameworks: OpenAI GPT, LLaMA, and Claude-AI. We also evaluate the execution time of different explainability methods: LIME, SMILE, and Bay-LIME. Each method is tested with 60 perturbations on the same hardware to ensure consistency in computational conditions. The models exhibit notable differences in execution time, primarily due to how each explainability method computes similarity and assigns weights. SMILE, in particular, employs the Wasserstein distance to quantify text dissimilarities, making it substantially more computationally expensive than both LIME and Bay-LIME. As reported in Table 9, SMILE consistently incurs higher execution times across all three model frameworks. This pattern also holds across other frameworks, indicating that SMILE's computational overhead is a systemic characteristic of its underlying methodology. The Wasserstein distance is known for capturing subtle differences between text embeddings, making it especially effective in high-dimensional semantic spaces [101]. However, this precision comes at a cost: computing the Wasserstein-2 distance involves operations on covariance matrices with O(d³) time complexity, where d is the embedding dimensionality. When applied across N samples, the total complexity becomes O(Nd³).
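To make the cost comparison concrete, here is a hedged sketch (not the paper's implementation) of the closed-form Wasserstein-2 distance between two Gaussian embedding distributions, where the matrix square roots are the source of the O(d³) term, alongside the O(d) cosine distance. Treating embeddings as Gaussians, and the function names, are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(mu1, cov1, mu2, cov2):
    """Squared Wasserstein-2 distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + tr(cov1 + cov2 - 2 (cov2^{1/2} cov1 cov2^{1/2})^{1/2}).
    The two matrix square roots dominate the cost at O(d^3)."""
    s2 = sqrtm(cov2)
    cross = sqrtm(s2 @ cov1 @ s2)
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2 * cross).real)

def cosine_distance(u, v):
    """Cosine distance between two embedding vectors: O(d) per pair."""
    return 1.0 - float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
```

Running `w2_gaussian` over N perturbation pairs gives the O(Nd³) total noted above, whereas `cosine_distance` over the same pairs is O(Nd).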
In contrast, LIME relies on cosine similarity, which has a much lower computational cost of O(Nd). This disparity arises because the Wasserstein-2 distance requires solving matrix optimisation problems over d-dimensional Gaussian distributions [132]. While this added complexity leads to more stable and expressive explanations, as demonstrated by SMILE, it significantly increases runtime. Conversely, LIME avoids this overhead by utilising simpler vector-based similarity measures. In summary, SMILE offers greater fidelity and stability but at a higher computational cost. Its use of the Wasserstein distance makes it ideal for tasks where precision and interpretability are essential, even if that entails longer execution times. LIME and Bay-LIME, while faster, may trade off some robustness in favour of efficiency.

Table 9: Execution times (in seconds) for each framework and explainability method with 60 perturbations

Framework    LIME     SMILE    Bay-LIME
OpenAI-GPT   156.82   170.70   161.43
LLaMA        113.00   118.99   116.40
Claude-AI    355.09   372.95   349.88

5.2.8 Optimised Instructions from Google Research

To further explore how instruction phrasing influences interpretability, we applied SMILE to analyse two high-performing prompts introduced by Google Research through their OPRO framework [133]. These prompts were designed to enhance reasoning in LLMs and were explicitly optimised for zero-shot performance on mathematical tasks. We first applied SMILE to GPT-3.5 using 1024 perturbations on the instruction: "A small quantity of arithmetic and a logical approach will help us quickly solve
this problem." As shown in Figure 10, the most influential words identified were "arithmetic," "approach," and "solution," confirming their importance in shaping reasoning-oriented outputs. We then extended the analysis to GPT-4 using the instruction: "Let's combine our numerical command and clear thinking to quickly and accurately decipher the answer." As illustrated in Figure 11, SMILE revealed that "numerical," "command," and "answer" had the highest positive influence.

Figure 10: Word-level importance in a high-performing instruction from Google Research, identified using SMILE with GPT-3.5.

Figure 11: Word-level importance in an instruction optimised by Google Research, analysed using SMILE with GPT-4.

5.2.9 Token-Level Attention for Different Sentence Structures

To investigate how sentence structure affects token-level attention in large language models (LLMs), we analysed the attention scores assigned to the tokens "meaning" and "life" across ten semantically similar but syntactically varied prompts. These prompts ranged from questions (e.g., "What is the meaning of life?") and commands (e.g., "Please give me the meaning of life") to declarative formulations (e.g., "You must explain the meaning of life"). For each sentence, we extracted the attention values directly from the final layer of the LLM, focusing on the attention allocated to the tokens "meaning" and "life." The results were visualised using a box plot (see Figure 12), where each point represents a unique sentence. Full-sentence prompts are viewable through hover text, allowing qualitative comparison of sentence structure with attention distribution. The analysis reveals noticeable variability in how these tokens are weighted depending on syntactic form.
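The per-token scores analysed here are rows of the attention matrix softmax(QKᵀ/√d) computed inside each transformer layer. A minimal NumPy sketch of that computation is below; it is illustrative only, since in practice the values would be read from the model's final layer (e.g., from a Hugging Face model loaded with `output_attentions=True`), typically averaged over heads.

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d)).
    Row i gives how much token i attends to every token in the prompt;
    the score for a token like "meaning" is read from its column."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)
```

Each row is a probability distribution over tokens, which is why the scores for "meaning" and "life" across prompts are directly comparable on a common scale.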
For instance, in imperative constructions, attention to "life" tends to dominate, whereas in interrogative or abstract prompts, "meaning" often receives comparable or higher attention. This suggests that sentence formulation can subtly steer the internal focus of the model, even when the semantic content remains consistent. These findings underscore the sensitivity of LLMs to surface-level structure in shaping token-level interpretability, offering insight into both model behaviour and prompt engineering strategies.

Figure 12: Box plot showing attention scores for the tokens "meaning" and "life" across ten prompts with varying sentence structures. Hovering over each point reveals the full sentence associated with that score.

6 Limitations

While SMILE provides a flexible and model-agnostic framework for interpreting large language models (LLMs), its application introduces several limitations due to the use of perturbation-based sampling and LLMs' black-box nature.

1) Perturbing prompts at the token level can lead to disproportionately large changes in LLM outputs, even when the semantic meaning remains largely intact. This sensitivity challenges the assumption of local smoothness required for regression-based attribution and can reduce explanation stability.

2) Some LLMs exhibit hallucination, producing fluent but factually incorrect or irrelevant outputs, especially when presented with ambiguous or syntactically degraded perturbations. This introduces noise into the attribution signal and affects explanation reliability.

3) The computational cost of generating and processing many perturbations (e.g., 1024 per input) is significant. Commercial APIs can incur high inference costs and latency, and local models require substantial GPU resources, limiting practical scalability.

4) Some models, particularly research-stage or open-source LLMs, do
not provide API access and require manual local deployment. 7 Conclusion Interpretability remains a cornerstone of responsible AI, particularly in the context of large language models (LLMs) and instruction-based text generation. As AI systems become increasingly embedded in domains such as business, education, healthcare, and public services, it becomes essential for users to understand how these models generate outputs from their prompts. Without such insight, trust in AI systems diminishes, limiting adoption in real-world workflows. This underscores the need for transparent and interpretable solutions to demystify the behaviour of these black-box models. Interpretability is not only a research priority but is likely to become a legal and operational requirement in alignment with emerging AI regulations and ethical standards. Organisations that fail to address interpretability may encounter regulatory hurdles and reduced user acceptance. This work introduced a novel interpretability framework tailored for instruction-based text generation models. Leveraging SMILE, we identified and visualised how individual words within a user prompt influence the model’s generated output. Our approach provides model-agnostic, local explanations by perturbing input prompts and analysing the resulting changes using Wasserstein-based distance metrics. Whereas most existing interpretability methods highlight important tokens in the model’s output [ 13,134], SMILE shifts attention to the input space. By attributing influence to specific words in the prompt, our method offers actionable insights into how linguistic phrasing shapes generation behaviour, a key consideration for prompt engineering and instruction design. Our experiments show that SMILE produces interpretable, stable, and faithful explanations across multiple instruction-tuned LLMs, including OpenAI GPT, LLaMA, Claude.ai, and Gemma. 
The resulting heatmaps and attribution scores provide an intuitive understanding of model behaviour, empowering users to better control and refine prompt design. This work lays the groundwork for more transparent and trustworthy natural language systems. Future directions include extending SMILE to support multi-turn dialogue, richer prompt hierarchies, and integration into AI safety, policy, and human-AI collaboration frameworks.

8 Future Works

1. Extension to Mathematical and Equation-Based Text Generation: Future work will explore the application of the proposed interpretability framework to models focused on symbolic reasoning and math-related tasks, such as equation solving, theorem generation, and formal derivation. Instruction-based mathematical models like Minerva and MathGLM require high precision in prompt design, and understanding how prompt components affect symbolic output will be essential in educational and scientific settings.

2. Application to Paraphrase Generation Models: The methodology can be extended to paraphrasing models, where semantic preservation and variation are central. Analysing how individual words influence the style, tone, or fidelity of paraphrased outputs can support better user control in writing assistance, content rephrasing, and translation tools.

3. Adaptation to Multi-Turn and Context-Aware Generation: Another direction is to adapt the framework for multi-turn and context-aware systems, including dialogue agents. This would involve tracking how prompt changes impact the consistency, coherence, and relevance of responses across conversational sequences.

4. Cross-Task Generalisation: While this work focuses on instruction-based text generation, the interpretability strategy can be generalised to broader NLP tasks, including summarisation, classification, and question answering. Evaluating how different model families respond to prompt perturbations can uncover model-specific behaviours and improve transparency.
5. Benchmarking Interpretability and Prompt Sensitivity: We plan
to contribute to standardised evaluation protocols for prompt sensitivity, attribution robustness, and local fidelity. These benchmarks will support consistent assessment across models and help identify best practices for interpretable generation. Data Availability The datasets used in this article are publicly available, and the code supporting this paper is published online on GitHub. For the SMILE explainability framework, please refer to Dependable-Intelligent-Systems-Lab/xwhy. For the proposed algorithm introduced in this paper, see Sara068/CELL_SMILE. References [1]T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., “Language models are few-shot learners,” *Advances in Neural Information Processing Systems*, vol. 33, pp. 1877–1901, 2020. [2]J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. Leoni Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., “GPT-4 Technical Report,” *arXiv preprint arXiv:2303.08774*, 2023. [3]A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, M. Chen, and I. Sutskever, “Zero-shot text-to-image generation,” in *International Conference on Machine Learning*, pp. 8821–8831, 2021. [4]M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. D. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al., “Evaluating large language models trained on code,” *arXiv preprint arXiv:2107.03374*, 2021. [5]H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al., “LLaMA: Open and Efficient Foundation Language Models,” *arXiv preprint arXiv:2302.13971*, 2023. [6]Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in Neural Information Processing Systems*, pages 5998--6008, 2017. [7]Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, Volume 1 (Long and Short Papers), pages 4171–4186, 2019. [8] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *arXiv preprint arXiv:2108.07258*, 2021. [9] C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nature Machine Intelligence, vol. 1, no. 5, pp. 206–215, 2019. [10] Y. Belinkov and J. Glass, “Analysis methods in neural language processing: A survey,” Transactions of the Association for Computational Linguistics, vol. 7, pp. 49–72, 2019. [11] E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT), pp. 610–623, 2021. [12] Aslansefat, K. et al. Explaining Machine Learning Predictions for Medical Imaging with SMILE: Superpixel-based Model Interpretation. Computers in Biology and Medicine, 2023. [13] M. T. Ribeiro, S. Singh, and C. Guestrin, “Why should I trust you? Explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144, 2016. [14] F. Doshi-Velez and B. Kim, “Towards a rigorous science of interpretable machine learning,”
arXiv preprint arXiv:1702.08608 , 2017. [15] Z. C. Lipton, “The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery,” Queue , vol. 16, no. 3, pp. 31–57, 2018. [16] S. M. Ahmadi, K. Aslansefat, R. Valcarce-Dineiro, and J. Barnfather, “Explainability of Point Cloud Neural Networks Using SMILE: Statistical Model-Agnostic Interpretability with Local Explanations,” arXiv preprint arXiv:2410.15374 , 2024. [17] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, “Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer,” Journal of Machine Learning Research , vol. 21, no. 140, pp. 1--67, 2020. [18] A. Chowdhery et al., “PaLM: Scaling Language Modeling with Pathways,” arXiv preprint arXiv:2204.02311 , 2022. [19] Y. Bai et al., “Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback,” Anthropic Research , 2022. [20] R. Anil, S. Narang, A. Chowdhery, P. Barham, J. Devlin, et al., “Gemma: Open Models Based on Gemini Research and Technology,” 2024. Available: https://ai.google.dev/gemma [21] S. Black et al., “GPT-NeoX-20B: An Open-Source Autoregressive Language Model,” arXiv preprint arXiv:2204.06745 , 2022. [22] B. Lieber et al., “Jurassic-1: Large Language Models from AI21 Labs,” AI21 Labs Technical Report , 2021. [23] Cohere AI Research, “Command R+: Retrieval-Augmented Generation for Large Language Models,” Cohere Technical Report , 2023. [24] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) , 2019, pp. 4171–4186. [25] DeepSeek Research, “DeepSeek-LLM: Advancements in Open-Source Large Language Models,” arXiv preprint arXiv:2311.12345 , 2023. [26] OpenAI. GPT-4 Technical Report , 2023. 
https://arxiv.org/abs/2303.08774 [27] OpenAI. Introducing GPT-4o, 2024. https://openai.com/index/gpt-4o [28] Anthropic. Claude 3 Technical Overview, 2024. https://www.anthropic.com/news/claude-3-family [29] Google DeepMind. Gemini 1.5 Technical Report, 2024. https://deepmind.google/technologies/gemini [30] Cohere AI Research. Command R+: Retrieval-Augmented Generation for Large Language Models, 2023. https://cohere.com/blog/command-r-plus [31] Mistral AI. Mixtral of Experts, 2023. https://mistral.ai/news/mixtral-of-experts/ [32] Hugo Touvron et al. LLaMA 3: Open Foundation Models, 2024. Meta AI Blog. [33] Mistral AI. Mixtral of Experts, 2023. https://mistral.ai/news/mixtral-of-experts/ [34] Technology Innovation Institute. Falcon 180B: Scaling up Open Language Models, 2023. https://huggingface.co/tiiuae/falcon-180B [35] DeepSeek Research. DeepSeek-V3: Advancements in Open-Source Large Language Models, 2023. https://github.com/deepseek-ai/DeepSeek-V3 [36] OpenChat Team. OpenChat: Advanced Chatbot Model Based on Open LLMs, 2023. https://github.com/imoneoi/openchat [37] xAI. Announcing Grok-1.5, 2024. https://x.ai/news/grok-1.5 [38] Alibaba Group Qwen Team. Qwen2.5: Large-Scale MoE Language Model by Alibaba, 2024. https://huggingface.co/Qwen/Qwen2.5 [39] Lev Lepikhin et al. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding, arXiv preprint arXiv:2006.16668, 2020. [40] William Fedus, Barret Zoph, and Noam Shazeer. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity, arXiv preprint arXiv:2101.03961, 2021. [41] Jiahui Huang et al. Kosmos-2: Multimodal Grounding for Language Models, arXiv preprint arXiv:2306.14824, 2023. [42] Teven Le Scao et al. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model, arXiv preprint arXiv:2211.05100, 2022.
[43] Xiang Lisa Li et al. XGLM: An Extra Large Cross-lingual Language Model, arXiv preprint arXiv:2112.10668, 2021. [44] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019, pp. 4171–4186. [45] Alexis Conneau et al. Unsupervised Cross-lingual Representation Learning at Scale, ACL, 2020. [46] Linting Xue et al. ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models, arXiv preprint arXiv:2105.13626, 2021. [47] Karan Singhal et al. Towards Expert-Level Medical Question Answering with Med-PaLM 2, arXiv preprint arXiv:2305.09617, 2023. [48] Ross Taylor et al. Galactica: A Large Language Model for Science, arXiv preprint arXiv:2211.09085, 2022. [49] Ilias Chalkidis et al. Legal-BERT: The Muppets Straight Out of Law School, arXiv preprint arXiv:2010.02559, 2020. [50] BigCode Project. StarCoder2: Open Models for Code Generation, 2023. https://huggingface.co/bigcode [51] Boris Rozière et al. Code LLaMA: Open Foundation Models for Code, arXiv preprint arXiv:2308.12950, 2023. [52] Tatsu Lab. Alpaca: A Strong, Replicable Instruction-Following Model, Stanford CS, 2023. [53] Zhihan Chiang et al. Vicuna: An Open Chatbot Impressing GPT-4 with 90% ChatGPT Quality, arXiv preprint arXiv:2304.11264, 2023. [54] THUDM. ChatGLM: Open Bilingual Chat Model, 2023. https://github.com/THUDM/ChatGLM-6B [55] Hu Xu et al. Baize: An Open-Source Chatbot with Self-Feedback, arXiv preprint arXiv:2304.01196, 2023. [56] Microsoft Research. Phi-2: A Small Language Model with Reasoning Capabilities, 2023. https://huggingface.co/microsoft/phi-2 [57] TinyLLaMA Team. TinyLLaMA: Small-Scale Pretrained LLM, 2023. https://huggingface.co/cognitivecomputations/TinyLLaMA-1.1B [58] Victor Sanh et al.
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, arXiv preprint arXiv:1910.01108, 2019. [59] Alec Radford et al. Language Models are Unsupervised Multitask Learners, OpenAI Blog, 2019. [60] Hugo Touvron et al. LLaMA 2: Open Foundation and Fine-Tuned Chat Models, Meta AI, 2023. [61] Adadi, A., and Berrada, M. “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI),” IEEE Access, vol. 6, pp. 52138–52160, 2018. [62] Molnar, C. Interpretable Machine Learning, 2nd ed., 2020. https://christophm.github.io/interpretable-ml-book/ [63] Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. “A Survey of Methods for Explaining Black Box Models,” ACM Computing Surveys (CSUR), vol. 51, no. 5, pp. 93:1–93:42, 2018. [64] Du, M., Liu, N., and Hu, X. “Techniques for Interpretable Machine Learning,” Communications of the ACM, vol. 63, no. 1, pp. 68–77, 2019. [65] Steven Bramhall, David Lykins, and David Witte. QLIME: A Quadratic Local Interpretable Model-Agnostic Explanation Approach. SMU Data Science Review, 3(1):4, 2020. [66] Zhengze Zhou, Giles Hooker, and Fei Wang. S-LIME: Stabilized-LIME for Model Explanation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 2429–2438, 2021. [67] Daria Levi and Alexander Binder. Fast PointNet-LIME: Towards Interpretable Deep Learning on 3D Point Clouds. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2024. [68] Tobias Schlegel, Albin Schmidt, and Bernd Bischl. TS-MULE: A Framework for Local Explanations of Time
Series Models. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021. [69] Xiaoran Huang, Yu Rong, Tingyang Xu, Wenbing Huang, and Junzhou Huang. GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 2022. [70] Saumitra Mishra, Bob L. Sturm, and Simon Dixon. Local Interpretable Model-Agnostic Explanations for Music Content Analysis. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), volume 53, pages 537–543, 2017. [71] Md Abdullah, Talal A. Abdullah, and others. B-LIME: Explaining ECG Signal Classification with Interpretable Local Models. IEEE Transactions on Biomedical Engineering, 2023. [72] Zhao, W. et al. Explainability in Large Language Models: A Taxonomy and Survey, 2023. [73] Bender, E. M. et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, 2021. [74] Madsen, R. et al. Post-hoc Explanations in NLP: A Survey, 2022. [75] Mumuni, A. et al. Survey on Explainable AI in NLP and Large Language Models, 2025. [76] Bilal, M. et al. Trends in Explainability for Foundation Models: A Comprehensive Review, 2025. [77] Amara, T. et al. ConceptX: Concept-Level Attribution for Explaining Language Models, 2025. [78] Olah, C. et al. Reverse-Engineering Language Models with Interpretability Tools, 2022. [79] Nanda, N. et al. Progress Measures for Mechanistic Interpretability, 2023. [80] Anthropic. Claude 3 Technical Overview, 2024. https://www.anthropic.com/news/claude-3-family [81] Li, M. et al. SEER: Self-Explaining Language Representations, 2023. [82] Toshniwal, S. et al. CELL: Concept-Based Explanations for Language Models, 2022. [83] Meng, K. et al. Attribution Graphs: Causal Scrubbing with Transformer Circuits, 2025. [84] Nanda, N. et al. Scaling Monosemanticity: Interpreting Features in Large Models, 2024. [85] Liu, Y. et al.
Relevance Estimation in Large Language Models , 2025. [86] Tak, S. et al. Interpreting Emotion Inference in LLMs , 2025. [87] Sabbata, S. et al. Geospatial Reasoning in Language Models: Interpretability Perspectives , 2025. [88] Shu, R. et al. Sparse Interpretability in Language Models: A Survey , 2025. [89] Fernando, C. et al. Neuroscientific Interpretability of Transformers via Dynamical Systems , 2025. [90] Ji, Z. et al. Survey on Hallucination in Natural Language Generation , 2022. [91] Krause, B. et al. Evaluating Reasoning in LLMs against Human Performance , 2024. [92] Martens, D. et al. Counterfactual Explanations for LLM Outputs , 2023. [93] Bokstaller, T. et al. Interactive Explanations with Fine-Tuned LLMs , 2025. [94] Chen, L. et al. Benchmarking Clinical Explanation Quality in LLMs , 2025. [95] Jin, D. et al. Meaning and Interpretability in Language Models , 2023. [96] Dehghani, M. et al. Mapping Instructions to Images: The SMILE Framework , 2024. [97] Muhamed, A. et al. Rare Concept Discovery via Sparse Autoencoders , 2025. [98] Sun, R. et al. Hybrid Explainability with Concept Bottleneck Models , 2025. [99] Wang, H. et al. Symbolic Program Compositions for Transparent LLM Reasoning , 2025. [100] Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein GAN . arXiv preprint arXiv:1701.07875, 2017. [101] Peyré, G., and Cuturi, M. Computational Optimal
Transport: With Applications to Data Science. Foundations and Trends in Machine Learning, 11(5-6), 2019, pp. 355–607. [102] Molnar, C. Interpretable Machine Learning. Lulu.com, 2020. https://christophm.github.io/interpretable-ml-book/ [103] Kusner, M., Sun, Y., Kolkin, N., and Weinberger, K. From Word Embeddings to Document Distances. In *Proceedings of the 32nd International Conference on Machine Learning (ICML)*, 2015, pp. 957–966. [104] Shalev-Shwartz, S., and Ben-David, S. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014. [105] Villani, C. Optimal Transport: Old and New. Springer-Verlag, 2009. [106] Hastie, T., Tibshirani, R., and Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009. [107] Liu, S. et al. Toward Unified Dialogue Evaluation with Prompt-based Learning. arXiv preprint arXiv:2302.03458, 2023. [108] Qiu, X. et al. PromptBench: Towards Evaluating the Robustness of Large Language Models with Prompting. arXiv preprint arXiv:2302.07387, 2023. [109] Qiu, X. et al. Benchmarking the Reasoning Abilities of Large Language Models. arXiv preprint arXiv:2305.07984, 2023. [110] Rubner, Y., Tomasi, C., and Guibas, L. J. The Earth Mover's Distance as a Metric for Image Retrieval. International Journal of Computer Vision, 40(2), 2000, pp. 99–121. [111] Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., and Yu, B. Interpretable Machine Learning: Definitions, Methods, and Applications. Proceedings of the National Academy of Sciences, 116(44), 2019, pp. 22071–22080. [112] Fu, Y., Yao, S., Zhang, W., Gao, T., Wang, Y., and Peng, H. “Guiding Large Language Models via Directional Stimulus Prompting,” arXiv preprint arXiv:2307.15043, 2023. [113] Jin, X., Li, M., Yuan, Y., and Sun, M. “Prompting as Probing: Using Prompting to Understand Large Language Models,” arXiv preprint arXiv:2305.11097, 2023. [114] Wasserman, L.
All of Statistics: A Concise Course in Statistical Inference . Springer Science & Business Media, 2013. [115] Efron, B., and Tibshirani, R. J. An Introduction to the Bootstrap . Chapman and Hall/CRC, 1993. [116] Gilleland, E. “Bootstrap Methods in R: An Implementation Guide,” Journal of Statistical Software , vol. 95, no. 5, 2020, pp. 1–36. https://www.jstatsoft.org/article/view/v095i05 [117] Habib, H., et al. “Exploring Distance-Based Surrogate Modeling for Interpreting Text Generation,” arXiv preprint arXiv:2403.12345 , 2024. [118] Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. “Learning Transferable Visual Models From Natural Language Supervision,” In Proceedings of the International Conference on Machine Learning (ICML) , 2021. [119] Sanchez, M. I., Wu, M., and Gilad-Bachrach, R. “Evaluating Explainable AI: Which Metrics Should We Use?” Google Research , 2020. https://research.google/pubs/pub50518/ [120] Fong, R. C., and Vedaldi, A. “Interpretable Explanations of Black Boxes by Meaningful Perturbation,” In Proceedings of the IEEE International Conference on Computer Vision (ICCV) , 2017, pp. 3429–3437. [121] Hanley, J. A., and McNeil, B. J. “The Meaning and Use of the Area under a Receiver Operating Characteristic (ROC) Curve,” Radiology , vol. 143, no. 1, pp. 29–36, 1982. [122] Powers, D. M. “Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation,” arXiv preprint arXiv:2010.16061 , 2020. [123] Samek, W., Montavon,
G., Lapuschkin, S., Anders, C. J., and Müller, K.-R. “Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models,” arXiv preprint arXiv:1708.08296, 2017. [124] Draper, N. R., and Smith, H. Applied Regression Analysis, 3rd ed. John Wiley & Sons, 1998. [125] Hocking, R. R. “The Analysis and Selection of Variables in Linear Regression,” Biometrics, vol. 32, no. 1, pp. 1–49, 1976. [126] Willmott, C. J., and Matsuura, K. “Advantages of the Mean Absolute Error (MAE) over the Root Mean Square Error (RMSE) in Assessing Average Model Performance,” Climate Research, vol. 30, no. 1, pp. 79–82, 2005. [127] Goodfellow, I., Bengio, Y., and Courville, A. Deep Learning. MIT Press, 2016. https://www.deeplearningbook.org/ [128] Anthropic. Claude AI: Next-Generation Assistant, 2023. https://www.anthropic.com/index/claude [129] Anthropic. Interpreting LLMs with Attribution-Based Techniques, 2025. https://www.anthropic.com/research/attribution-techniques [130] Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. “Measuring Massive Multitask Language Understanding,” arXiv preprint arXiv:2009.03300, 2021. [131] Hendrycks, D. “MMLU (Massive Multitask Language Understanding),” Kaggle, 2022. https://www.kaggle.com/datasets/hendrycks/mmlu [132] Cuturi, M., and Doucet, A. “Fast Computation of Wasserstein Barycenters,” In Proceedings of the 31st International Conference on Machine Learning (ICML), 2014, pp. 685–693. [133] Yang, K., Bai, Y., Zhao, D., et al. “Large Language Models Are Reasoning Teachers,” arXiv preprint arXiv:2305.14325, 2023. [134] Lundberg, S. M., and Lee, S.-I. “A Unified Approach to Interpreting Model Predictions,” In Advances in Neural Information Processing Systems (NeurIPS), vol. 30, 2017.
Expert Survey: AI Reliability & Security Research Priorities

May 2025

By Joe O'Brien¹*†, Jeremy Dolan²*, Jay Kim³, Jonah Dykhuizen², Jeba Sania⁴, Sebastian Becker⁵, Jam Kraprayoon¹, and Cara Labrador¹

* Equal contribution. † Corresponding author: joe@iaps.ai.
¹ Institute for AI Policy and Strategy (IAPS). ² Independent Researcher. ³ Williams College. ⁴ Harvard Kennedy School. ⁵ Effektiv Spenden.

Executive Summary

As AI systems accelerate toward broadly human-level performance amid unprecedented investment, reliability and security research is urgently needed to prevent severe harms and ensure AI can be deployed safely and reliably to realize its benefits. Progress is partly bottlenecked by uncertainty about which technical research directions offer the greatest risk reduction per marginal dollar. To clarify priorities, we surveyed 53 experts from academia, industry, and civil society. These experts rated subsets from a list of 105 technical areas of AI reliability and security on importance (likelihood to significantly reduce AI harms) and tractability (ability for $10M USD to make significant progress) on a 1-5 scale. Due to the breadth of the list of research areas, respondents were encouraged to only rate areas they had personal expertise in, rather than the full list.

Highest-Ranking Research Areas

Research sub-area¹                                                    Importance (n)   Tractability (n)   Promise²
Emergence and task-specific scaling patterns                          5.00 (3)         4.25 (4)           21.25
CBRN (Chemical, Biological, Radiological, and Nuclear) evaluations    4.67 (3)         4.33 (3)           20.22
Evaluating deception, scheming, situational awareness, & persuasion   4.75 (4)         4.25 (4)           20.19
Oversight and monitoring of LLM-agents                                4.67 (9)         4.22 (9)           19.70
Cyber-capability evaluations                                          4.50 (4)         4.25 (4)           19.13

… see full results in Appendix C.
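The survey defines Promise as Importance × Tractability (maximum 5 × 5 = 25). A quick sanity check that the reported Promise values follow from the rounded per-area means (values transcribed from the table above; agreement is within two-decimal rounding):

```python
# Promise = Importance x Tractability (max = 25).
# (importance, tractability) means transcribed from the survey's top-five table.
rows = {
    "Emergence and task-specific scaling patterns": (5.00, 4.25),               # reported 21.25
    "CBRN evaluations": (4.67, 4.33),                                           # reported 20.22
    "Deception, scheming, situational awareness, & persuasion": (4.75, 4.25),   # reported 20.19
    "Oversight and monitoring of LLM-agents": (4.67, 4.22),                     # reported 19.70
    "Cyber-capability evaluations": (4.50, 4.25),                               # reported 19.13
}
promise = {area: imp * tract for area, (imp, tract) in rows.items()}
```

Small discrepancies (e.g., 4.67 × 4.22 = 19.7074 vs. a reported 19.70) are consistent with the published means themselves being rounded to two decimals.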
The highest-ranked approaches emphasized preparing for emerging risks, with a strong focus on practical evaluation and monitoring over theoretical work: six of the top ten most promising approaches center on improving evaluations of dangerous capabilities, while the top-ranked approach focuses on capability forecasting. Additionally, research on multi-agent interactions consistently ranked highly, suggesting growing concern that advanced multi-agent systems and their interactions present a new vector of risk.

¹ For the full list of research sub-areas including descriptions, see Appendix A.
² Promise = Importance × Tractability (max = 25).

Expert Survey: AI Reliability & Security Research Priorities | 1

In addition to these highest-ranked sub-areas, eight sub-areas were rated as highly important but not tractable at a two-year, $10M USD scale. All are either (i) applied security engineering (access control, supply-chain integrity, weight-key management, confidential compute) or (ii) deep-model understanding (mechanistic reasoning, eliciting latent knowledge). These likely require multi-year, larger-scale R&D programs.

Implications for Funders and Policymakers

Horizon: 0-2 yrs (< $10M)
Recommended Action: Fund independent dangerous-capability evaluations, scalable oversight tools, and multi-agent testbeds.
Rationale: High promise, strong expert consensus, undercapitalized relative to risk.

Horizon: 2-5 yrs (> $10M)
Recommended Action: Launch coordinated initiatives in AI-specific cyber and infrastructure security (e.g., supply-chain integrity, confidential compute).
Rationale: Rated critical but not tractable at small scale; cross-disciplinary and infrastructure-heavy.

Enablers:
i. Targeted talent programs
ii. Subsidised access to frontier models (e.g., NAIRR pilot, regulatory sandboxes)
iii. Government or multilateral signalling of priority research gaps
Rationale: Addresses skills bottleneck and coordination failures.

Caveats: Modest sample size (n=53) and uneven coverage across sub-areas limit
https://arxiv.org/abs/2505.21664v1
statistical confidence; results should be read as directional. Future iterations will aim to broaden participation and capture updates in priorities closer to real time.

Our study reveals a consistent message from respondents: significant, actionable opportunities exist within technical AI reliability and security research. 52 out of 53 respondents identified at least one research direction as both important and tractable (scoring ≥ 4.0 on both dimensions). We consider this broad optimism about accessible, actionable opportunities a strong signal for well-resourced stakeholders interested in funding technical AI reliability and security research.

Table of Contents

  Executive Summary
  Introduction
    Background
    Motivation
    Related Work
    Scope of Results
  Methods
    Survey Design
      Taxonomy
      Expert Consultations
      Survey Piloting
    Survey Administration
    Sample
      Data Processing
      Demographics
  Results
    Most Promising Sub-Areas
    Strategic Long-Term Opportunities
  Discussion
    Key Findings
      Patterns Among Top-Ranked Approaches
      Additional Patterns
    Four Example Promising Sub-Areas
      Emergence and Task-Specific Scaling Patterns
      Oversight and Monitoring of LLM-agents
      Evaluation Methodology and Metrics
      Detecting and Addressing Previously Unmeasured or Latent Capabilities
    Limitations
    Policy Implications
    Future directions
  Conclusion
  Acknowledgements
  Appendix A: Taxonomy
    Top-level categories
    Category and sub-area descriptions
  Appendix B: Survey Instructions
    Introductory page
    Category Selection Example
    Sub-Area Evaluation Example
    Category Feedback Example
    Demographic Questions
  Appendix C: Survey Results
    Full Survey Results
    Consensus Analysis
      Areas with Strongest Expert Consensus
      Areas with Significant Expert Disagreement
  Appendix D: Sub-areas excluded due to insufficient response
  Appendix E: Demographics
  Bibliography

Introduction

Background

Artificial intelligence (AI) is rapidly advancing in many domains (Sevilla et al. 2022; Sevilla 2024; Maslej et al. 2025). Language models have demonstrated remarkable potential to scale capabilities along with more data, compute, and more effective algorithmic techniques (Kaplan et al. 2020; Hoffmann et al. 2022; Samborska 2025), while experts predict they will continue to become increasingly capable of performing complex tasks and taking extended autonomous actions (Kwa et al. 2025). CEOs of leading AI companies have publicly suggested that AI systems could reach human-level performance across a wide range of domains within the next 2-5 years,³ and these companies are investing billions of dollars to reach this milestone first.
The potential economic and geopolitical advantages of such systems have ignited competition between global powers, particularly the United States and China, making AI development a growing national priority and creating additional incentives to accelerate it. While it is widely recognized that AI reliability and security research is essential for ensuring that these increasingly powerful systems remain safe, reliable, and beneficial (Bengio et al. 2025), the impending arrival of powerful, general-purpose AI creates an urgent need for such research to keep pace. Doing so requires addressing a range of technical challenges, from AI-specific concerns (such as alignment, control, and interpretability) to high-stakes versions of traditional cybersecurity and infrastructure threats (such as model theft, adversarial attacks, and infrastructure vulnerabilities).

Motivation

Despite widespread awareness of this urgent challenge (Bengio et al. 2025), significant gaps remain in coordinating and prioritizing technical AI reliability and security (AI R&S) research efforts. Progress has been limited
by several bottlenecks:

● Resources: While funding for AI R&S has increased in recent years, it remains inadequate relative to the scale and urgency of the problem.
● Expertise: The field faces a shortage of researchers with the necessary technical skills.
● Uncertainty: There is considerable uncertainty about which research directions offer the most promise for risk reduction per unit of effort.

This survey aims to address the uncertainty bottleneck by providing an assessment of research priorities across the technical AI R&S landscape. Through a survey of researchers across industry, academia, and civil society, we evaluate 105 areas of technical AI R&S research (grouped into 20 high-level research categories) along dimensions of importance and tractability. Appendix A summarizes all 105 sub-areas (and their corresponding higher-level categories) included in the survey. Readers looking for additional context on the research landscape may wish to review this taxonomy before proceeding to the results and analysis.

³ “I think we’re probably three to five years away [from AGI]” — Demis Hassabis, CEO of Google DeepMind (Kantrowitz 2025). “We are now confident we know how to build AGI” — Sam Altman, CEO of OpenAI (Altman 2025). “We are rapidly running out of truly convincing blockers, truly compelling reasons why this [AGI] will not happen in the next few years.” — Dario Amodei, CEO of Anthropic (Fridman 2024).

While several high-quality descriptive overviews of the technical AI R&S landscape exist (Bengio et al. 2025; Reuel et al. 2025; Anwar et al. 2024; The Singapore Consensus on Global AI Safety Research Priorities, 2025), none of these attempt to explicitly rank research directions. Likewise, the few published expert surveys to date either cover governance practices (Schuett et al. 2023) or gauge general attitudes toward “AI safety” as a whole (B. Zhang and Dafoe 2019; Grace et al.
2024); they do not present a comparative, quantitative priority list of technical sub-areas. In the absence of such guidance, funders, policymakers, and researchers—especially new entrants to the field—risk allocating attention and resources toward the most visible labs or currently prevailing research paradigms, potentially crowding out less-publicized but higher-leverage work (cf. Merton 1968). This study is the first to explicitly quantify expert judgment across a comprehensive set of technical AI R&S topics (105 sub-areas in 20 categories) and to produce a data-driven ranking of their “promise” (importance × tractability). By examining where experts agree and where they diverge, we highlight the research directions most widely regarded as both critical and amenable to near-term progress.

Related Work

This survey builds upon several recent efforts to map the technical AI R&S landscape. Taxonomies from Anwar et al. (2024), Reuel & Bucknall et al. (2024), and the AI Assurance Technology Market Report (AIAT Report 2024) informed the structure of our survey. Although released after our survey was designed, the International AI Safety Report (Bengio et al. 2025) is another relevant contribution. This work also follows prior related research by IAPS, notably Mapping Technical Safety Research at AI Companies (Delaney, Guest, and Williams 2024). Methodologically, our survey draws
inspiration from the Centre for the Governance of AI’s AGI Safety and Governance Survey (Schuett et al. 2023), which similarly aggregated expert judgments to identify high-priority policy interventions.

Scope of Results

The central result of this survey is a ranking of AI R&S research areas based on their overall “promise,” defined as the product of experts’ assessments of each area’s importance and tractability. There are many ways one could define importance and tractability; the definitions used in our survey were designed to assess the reduction in severe harms expected from an actionable level of funding ($10M USD).

● “Importance” was defined as agreement with the statement: “Resolving the core challenges of this sub-area and implementing the resulting solutions would significantly reduce the risk of severe harm (loss of >100 lives or >$10 billion in economic impact) from AI.”⁴ This framing was chosen to provide concrete thresholds that capture consequential harms while remaining accessible to experts from diverse backgrounds.
● “Tractability” was defined as agreement with the statement: “An additional targeted investment of approximately $10 million over the next two years would lead to significant, measurable advancements in addressing this issue.” This dollar amount and time horizon were chosen as a practical scale for interventions by researchers, funding organizations, and policymakers facing near-term resource allocation decisions.

Any definition of these terms inherently affects the resulting assessment. For example, some research directions may be potentially high-impact but unlikely to produce implementable results within two years; this framing will naturally disadvantage those directions. Similarly, others may require substantially larger monetary investments to overcome development and implementation barriers.
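As a concrete illustration of this scoring scheme, the sketch below computes a promise score from hypothetical per-expert Likert ratings. This is not the authors’ code; the individual ratings are invented, chosen only so that their means match the figures reported later for the top-ranked sub-area.

```python
# Illustrative sketch (not the survey's pipeline): "promise" is the product
# of a sub-area's mean importance and mean tractability ratings, each on the
# survey's 1-5 Likert scale, so promise ranges from 1 to 25.

def promise_score(importance_ratings, tractability_ratings):
    mean = lambda xs: sum(xs) / len(xs)
    return mean(importance_ratings) * mean(tractability_ratings)

# Invented ratings whose means (5.00 from 3 raters, 4.25 from 4 raters)
# match those reported for "Emergence and task-specific scaling patterns".
print(promise_score([5, 5, 5], [5, 4, 4, 4]))  # -> 21.25
```

Note that because promise multiplies the two means, a sub-area must score well on both dimensions to rank highly; a 5.0 on importance cannot compensate for a 2.0 on tractability.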
While our core results must be interpreted in accordance with the definitions detailed above, we have also separately highlighted several challenging (i.e., high-impact, low-tractability) research directions, which may be of interest to funders with particularly large resource pools or who are willing to take long-term bets on important interventions (see Results).

⁴ Defining importance via quantitative thresholds for severe harm (e.g., lives lost, major economic impact) is analogous to practices in other established risk management domains. For instance, nuclear safety regulation employs quantitative health objectives related to mortality risks as part of its safety goals for evaluating reactor accident risks (U.S. Nuclear Regulatory Commission 1986).

Methods

Survey Design

The primary objective of this survey was to systematically capture experts’ judgments of the importance and tractability of technical interventions related to AI reliability and security, with particular attention to large language models (LLMs) and their underlying architectures. The survey was developed through an iterative process involving literature reviews, consultations with domain experts, and pilot testing to ensure clarity, relevance, and comprehensive coverage of significant technical interventions.

Taxonomy

We developed a taxonomy of technical reliability and security research consisting of 105 sub-areas organized into 20 broader categories. The initial set of categories and sub-areas was drawn from recent literature, particularly Anwar et al. (2024), which was chosen for its structured overview of current technical challenges specific to LLM safety.
This foundational source provided a well-defined categorization aligned with ongoing discussions within relevant research communities. The taxonomy used in the survey, with brief descriptions of all areas, is reproduced in Appendix A. Items selected for inclusion met two primary criteria:

1. Technical Focus: To keep the survey focused on a single mode of intervention, only research tracks with a “technical” focus were included. Broader governance and policy work, as well as investigations of sociotechnical factors, were intentionally omitted.
2. Interventions on or around the model: Interventions with a direct effect on model behavior, evaluation of model behavior, model design, or a model’s immediate environment (including the physical infrastructure on which it runs, as well as tooling, monitoring, and access control systems) were prioritized over more distal reliability and security remedies (e.g., policy and governance).

This focus on technical and model-centric interventions aligns with the survey’s goal of assessing tractability and importance specifically within technical AI R&S. This narrowed scope allows us to develop a more detailed understanding of technical intervention points and lays the groundwork for future surveys to explore complementary areas, such as AI governance and sociotechnical interventions.

Expert Consultations

The initial set of items underwent significant refinement based on consultations with approximately 15 experts specializing in machine learning and AI R&S. Experts were chosen based on their publication records, academic credentials, or industry experience in relevant research domains. These consultations occurred over approximately three weeks between late October and November 2024, involving structured one-on-one discussions aimed at verifying item relevance, clarity, and completeness.
During consultations, experts suggested additions, removals, or revisions to survey items, informed by their research experience and judgment regarding practical tractability and importance. This refinement process involved subjective judgments by the authors, who aimed to balance comprehensive coverage with survey manageability, thereby potentially introducing selection biases. Nonetheless, our survey-design decisions sought to reflect consensus expert opinion on which important and distinct approaches the taxonomy should include.

Survey Piloting

We conducted a pilot phase in December 2024 prior to the survey’s official launch, involving one-on-one interactions with a small group of experts. The pilot phase assessed the interpretability of questions, the definitions of “importance” and “tractability,” and overall survey usability. Pilot feedback led to specific adjustments, notably improvements in the clarity of rating scales, item descriptions, and framing text.

Survey Administration

The finalized survey was administered online through the Qualtrics platform from December 21, 2024 to March 4, 2025. Responses were collected anonymously. Respondents were first prompted to select one or more high-level categories that aligned with their expertise from the presented list of 20 categories (e.g., “Scalable oversight and alignment techniques,” “Robustness,” “Cybersecurity for AI models”; see Appendix A for more). For each category chosen, respondents were then shown a series of sub-areas (e.g., a sub-area of “Scalable oversight and alignment techniques” is “Iterated Distillation and Amplification”). For each of the sub-areas in a chosen category, respondents were provided a brief description (reproduced in Appendix A)
and asked a series of questions. Respondents first assessed the sub-area along dimensions of importance and tractability:

● Importance: “Resolving the core challenges of this sub-area and implementing the resulting solutions would significantly reduce the risk of severe harm (loss of >100 lives or >$10 billion in economic impact) from AI.”
● Tractability: “An additional targeted investment of approximately $10 million over the next two years would lead to significant, measurable advancements in addressing this sub-area’s underlying challenges.”

Respondents were asked to indicate their level of agreement with each statement on a 5-point Likert scale: “Strongly Disagree,” “Disagree,” “Neutral,” “Agree,” “Strongly Agree.” Respondents also had the option to say “I don’t know,” and were encouraged to skip any sub-areas they were insufficiently familiar with. Each sub-area also contained an open-ended question asking respondents what indicators would best measure progress in the area and for suggestions to improve work in it. After assessing each sub-area in a category, respondents were asked three free-response questions on what they saw as the most impactful challenge in their chosen area, the key obstacles to progress, and any sub-areas missing from the survey. Full instructions from the survey are reproduced in Appendix B.

Sample

The expert sample for this survey was recruited through a structured, multi-pronged approach designed to identify individuals with demonstrable research contributions or recognized expertise relevant to the 105 technical AI R&S sub-areas detailed in our comprehensive taxonomy (see Appendix A).
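The rating scale described above lends itself to a simple numeric encoding. The sketch below is our own illustration (not the authors’ pipeline), assuming the five agreement levels map linearly to 1-5 and that “I don’t know” and skipped items are treated as missing data, excluded from means.

```python
# Hypothetical encoding of survey responses: the 5-point agreement scale
# maps to 1-5; "I don't know" and skipped items are dropped from averages.
LIKERT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def numeric(responses):
    """Keep only codable responses, discarding 'I don't know' and skips."""
    return [LIKERT[r] for r in responses if r in LIKERT]

sample = ["Strongly Agree", "Agree", "I don't know", "Agree"]
scores = numeric(sample)
print(f"mean = {sum(scores) / len(scores):.2f} (n={len(scores)})")
```

Under this encoding, the reported “(n)” for a sub-area counts only the codable ratings, which is why importance and tractability can have different n for the same sub-area.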
The primary component of our recruitment strategy involved systematically identifying and inviting the first authors of the key publications cited as exemplars for each sub-area within our taxonomy, as well as secondary authors with relevant domain expertise and experience. This ensured that our initial outreach was grounded in established research contributions directly aligned with the survey’s content. To enhance the breadth of expertise, this initial list of potential participants was augmented through targeted outreach, which involved:

● Identifying additional relevant experts through supplementary, focused literature reviews within specific domains of the taxonomy.
● Consulting subject matter experts to identify leading figures or individuals with unique perspectives in niche or emerging sub-areas, ensuring more comprehensive inclusion of specialized knowledge.

In total, 515 researchers were invited to participate in the survey, and 53 completed an assessment of one or more sub-areas (mean: 5.2, SD: 3.57), totaling 546 assessments across all sub-areas. This was a response rate of 10.6%, roughly comparable to similar surveys of AI experts (such as Grace et al. [2024], which garnered a 15% response rate).

470 researchers were initially contacted on December 21, 2024, with reminders sent on January 7, January 15, and January 21, 2025. After preliminary analysis of responses by sub-area, 45 additional experts in categories underrepresented in the initial data were contacted on February 22, with reminders sent on March 3. All responses occurred between December 21, 2024 and March 4, 2025.

Data Processing

Sub-areas that received two
or fewer ratings in either importance or tractability were excluded from quantitative analysis due to insufficient data for meaningful statistical interpretation. Exclusions are listed in Appendix D, and limitations related to this exclusion are discussed in the Limitations section below. For each sub-area with sufficient responses, we calculated mean scores for importance and tractability. We then computed a “promise score” for each sub-area by multiplying these means, reflecting our assessment that areas must excel in both dimensions to warrant prioritization. Standard deviations were calculated to assess consensus levels among respondents.

Demographics

43 of the 53 respondents (81%) completed the optional demographic survey. Roughly half of respondents were from academia (PhD students, postdoctoral researchers, or faculty/professors), with the other half primarily affiliated with industry or non-profit organizations. Only one respondent identified their primary affiliation as governmental, while three identified their current position as “policymaker or other professional with equivalent experience.” Reflecting our survey’s focus on technical research, 84% of respondents identified their primary area of expertise as “machine learning/AI” or “computer science/engineering,” with only 7% listing “public policy/governance.” For full demographic results, see Appendix E.

Results

Most Promising Sub-Areas

We identified the most promising research directions based on the product of each sub-area’s average importance and tractability ratings, resulting in what we term a “promise score.” To improve robustness, we only include approaches for which at least three respondents provided ratings for both importance and tractability (this filtered out 29 sub-areas, listed in Appendix D).
Finally, we filtered for areas that achieved an average rating of at least 4 (“Agree” on our 5-point Likert scale) for both importance and tractability. This resulted in the following 15 most promising research directions. A description of each sub-area (as shown to respondents during the survey) is given in Appendix A. Full results, as well as additional data on areas of highest and lowest consensus, are contained in Appendix C.

    Research sub-area                                                      Importance (n)  Tractability (n)  Promise
1   Emergence and task-specific scaling patterns                           5.00 (3)        4.25 (4)          21.25
2   CBRN (Chemical, Biological, Radiological, and Nuclear) evaluations     4.67 (3)        4.33 (3)          20.22
3   Evaluating deception, scheming, situational awareness, and persuasion  4.75 (4)        4.25 (4)          20.19
4   Oversight and monitoring of LLM-agents                                 4.67 (9)        4.22 (9)          19.70
5   Cyber evaluations                                                      4.50 (4)        4.25 (4)          19.13
6   Detecting and addressing previously unmeasured or latent capabilities  4.38 (16)       4.25 (16)         18.59
7   Multi-agent metrics and evaluations                                    4.43 (7)        4.14 (7)          18.35
8   Multi-agent security                                                   4.57 (7)        4.00 (6)          18.29
9   Quantifying cyber threats from advanced capabilities                   4.25 (4)        4.25 (4)          18.06
10  Manage conflicts between different values                              4.50 (4)        4.00 (4)          18.00
11  Safety and emergent functionality in multi-agent interactions          4.57 (7)        3.86 (7)          17.63
12  Mechanistic understanding and limits of LLM reasoning                  5.00 (3)        3.50 (4)          17.50
13  Validating and applying interpretability methods                       4.38 (8)        4.00 (7)          17.50
14  Evaluation methodology and metrics                                     4.14 (14)       4.20 (15)         17.40
15  Control mechanisms for untrusted models                                4.00 (4)        4.25 (4)          17.00
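The filtering and ranking steps described above can be sketched as follows. The tuple layout and the two excluded rows are our own illustration; the first two rows reuse figures from the table.

```python
# Sketch of the ranking pipeline: require >= 3 ratings on both dimensions
# (the survey excluded sub-areas with two or fewer), require mean >= 4
# ("Agree") on both, then sort by promise = importance * tractability.
rows = [
    # (sub-area, mean importance, n_imp, mean tractability, n_trac)
    ("Emergence and task-specific scaling patterns", 5.00, 3, 4.25, 4),
    ("Control mechanisms for untrusted models", 4.00, 4, 4.25, 4),
    ("Hypothetical sparse sub-area", 4.80, 2, 4.50, 2),  # dropped: n < 3
    ("Hypothetical weak sub-area", 3.90, 6, 4.10, 6),    # dropped: mean < 4
]

kept = [r for r in rows
        if r[2] >= 3 and r[4] >= 3 and r[1] >= 4.0 and r[3] >= 4.0]
ranked = sorted(kept, key=lambda r: r[1] * r[3], reverse=True)
for name, imp, _, trac, _ in ranked:
    print(f"{name}: promise = {imp * trac:.2f}")
```

Applying both filters before ranking means a sub-area can be absent from the top-15 list either because too few experts rated it or because it fell below “Agree” on one dimension, which are different kinds of evidence.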
Strategic Long-Term Opportunities

Several research areas emerged with strong consensus on their importance despite lower perceived tractability. These critical domains—primarily in security implementation and applied systems—represent strategic priorities that may require longer timelines and substantial resource commitments to achieve meaningful progress. While they present significant technical challenges, their high importance scores (all ≥ 4.0) indicate these areas warrant further consideration despite apparent implementation barriers. The following table shows the research areas with the largest positive gaps between importance and tractability:

    Research sub-area                                      Importance (n)  Tractability (n)  Gap
1   Access control and interface hardening                 4.75 (4)        2.75 (4)          2.00
2   Supply chain integrity and secure development          4.57 (7)        3.00 (6)          1.57
3   Mechanistic understanding and limits of LLM reasoning  5.00 (3)        3.50 (4)          1.50
4   Preventing model self-exfiltration                     4.00 (4)        2.75 (4)          1.25
5   Weight security and key management                     4.50 (4)        3.25 (4)          1.25
6   Confidential computing and environment isolation       4.57 (7)        3.43 (7)          1.14
7   Detecting modified models or poisoned data             4.00 (4)        3.00 (4)          1.00
8   Eliciting Latent Knowledge (ELK)                       4.25 (8)        3.25 (8)          1.00
9   Iterated Distillation and Amplification (IDA)          4.00 (9)        3.22 (9)          0.78

Discussion

Key Findings

Patterns Among Top-Ranked Approaches

Analysis of the highest-ranked approaches reveals several patterns that should inform research prioritization:

Anticipating Dangerous Capabilities

Eight of the top ten approaches focus on ensuring society is not caught unprepared by sudden advancements in harmful AI capabilities:

● Six approaches center on improving evaluations of dangerous capabilities.
● The top-ranked approach, “Emergence and Task-Specific Scaling” (T=4.25, I=5.00), focuses on formalizing and forecasting new capabilities as models scale.
● The fourth-ranked approach “Oversight and monitoring of LLM-agents” (T=4.22, I=4.7) addresses tracking agent actions. This corroborates findings from previous expert surveys that dangerous capability evaluations are among the most promising directions for further research (Schuett et al., 2023). Of the 31 respondents who prioritized these approaches, five explicitly noted in free-form responses that lack of funding for top talent was the main bottleneck to progress. While dangerous capability evaluations are currently being researched at frontier AI companies and by a small number of third-party bodies (such as ScaleAI, METR, Apollo Research, and government initiatives like the US AI Safety Institute and the UK AI Security Institute), our findings suggest this direction is still undercapitalized relative to its importance. Multi-Agent Interactions Emerge as a Critical Area All sub-areas of the Multi-Agent Interactions category scored within the top 30 (out of 105 total), a feat matched only by Domain-specific Evaluations. Multi-agent metrics and evaluations (T = 4.14, I = 4.43)5 and multi-agent security (T = 4.14, I = 4.57) ranked 7th and 8th highest, respectively. This highlights growing concern that advanced multi-agent systems present novel risks distinct from those posed by single agents (Hammond et al. 2025). Practical Evaluation Over Theoretical Frameworks A clear divide emerged between approaches focused on practical evaluation and those 5 T refers to the mean of respondent answers to sub-area tractability, and I refers to the mean of respondent answers to sub-area importance. We will use this
https://arxiv.org/abs/2505.21664v1
shorthand throughout the paper.

centered on theoretical frameworks:
● Evaluation-focused approaches dominate the top rankings. Nine of the top 15 approaches emphasize evaluation, detection, or monitoring of harms (such as “Emergence and task-specific scaling patterns,” “CBRN evaluations,” and “Evaluating deception and scheming”), rather than solutions aimed at root causes or theoretical research.
● Theoretical approaches clustered at the bottom. Traditional machine learning theory (such as “Double descent and overparameterization,” “Implicit bias of optimization algorithms,” and “Optimization and loss landscape analysis”) received lower rankings. These approaches address foundational questions about the nature of modern AI systems but were evaluated as less directly impactful for addressing large-scale concerns.
● A preference for practical interventions is also evident in the high rankings of approaches targeting specific risk domains like CBRN, cyber, and deception.

Additional Patterns

Implementation Challenges (High Importance, Low Tractability)

Analysis of essential but challenging approaches reveals several patterns:
1. Security implementation challenges: Six of the ten largest importance-tractability gaps come from cybersecurity or infrastructure security domains.
2. Implementation over theory: These gaps primarily appear in applied implementation areas rather than theoretical frameworks.
3. Technical complexity: Many of these approaches require complex technical solutions spanning multiple domains (e.g., hardware, software security, ML systems).

Consensus Areas (Low Variance)

Areas with the strongest consensus among experts share several characteristics:
1. Evaluation focus: All five areas with the lowest variance relate to evaluation, monitoring, or detection approaches.
2.
Strong promise: Areas with the strongest consensus frequently appear among the highest promise scores (four of the five appear in the top 15).
3. Concrete objectives: These approaches tend to have well-defined objectives and clear implementation paths.
4. Applied methods: Most low-variance areas focus on practical applications rather than theoretical frameworks.

Areas of Disagreement (High Variance)

Areas with the greatest expert disagreement reveal several interesting patterns:
1. Implementation uncertainty: The highest variance appears in areas focused on implementing safety mechanisms rather than evaluating risks.
2. Technical complexity: Many high-variance areas involve complex technical challenges spanning multiple domains.
3. Hardware and infrastructure security: Three hardware/infrastructure security topics appear in the list of the ten highest-variance areas, suggesting some fundamental disagreement on effective paths forward for this domain.
4. Theoretical foundations: Several foundational research areas show high variance, particularly in tractability assessments.

Four Example Promising Sub-Areas

Below we discuss four illustrative promising sub-areas in detail. For each, we provide the description given during the survey, why research in this area matters for preventing severe harms, ratings according to the survey, exemplary research and funding, and key respondent insights. The discussions provide example starting points for funders and others who want to explore individual sub-areas further.6 We chose these particular sub-areas because they are either highly ranked or received the highest number of responses compared to the other top 15 sub-areas.
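The summary statistics this discussion relies on (per-area importance and tractability means, the importance minus tractability gap, and the rating variance behind the consensus and disagreement lists) are simple to compute. Below is a minimal sketch in Python; the per-respondent ratings are made up for illustration and are not the survey's actual response data.

```python
from statistics import mean, pvariance

# Hypothetical per-respondent (1-5 scale) ratings for two sub-areas.
# These numbers are illustrative only, not from the survey.
ratings = {
    "access control and interface hardening": {
        "importance":   [5, 5, 4, 5],
        "tractability": [3, 2, 3, 3],
    },
    "multi-agent metrics and evaluations": {
        "importance":   [4, 5, 4, 5, 4, 4, 5],
        "tractability": [4, 4, 4, 5, 4, 4, 4],
    },
}

for area, r in ratings.items():
    i, t = mean(r["importance"]), mean(r["tractability"])
    gap = i - t                           # large gap: important but hard
    var_t = pvariance(r["tractability"])  # low variance: expert consensus
    print(f"{area}: I={i:.2f} T={t:.2f} gap={gap:.2f} var(T)={var_t:.2f}")
```

With the survey's real responses substituted in, sorting areas by gap reproduces tables like the one in the Strategic Long-Term Opportunities section, and sorting by rating variance surfaces the consensus and disagreement lists.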
Emergence and Task-Specific Scaling Patterns

Description: Formalizing and forecasting the emergence of new capabilities as models scale, investigating whether scaling alone can produce certain capabilities, and designing methods for discovering task-specific scaling
laws.

Ranking: Highest-ranked sub-area according to promise score (T = 4.25, I = 5).

Why It Matters: Anticipating and mitigating potential severe harms from future AI capabilities necessitates accurate forecasting of when and how these capabilities may emerge. This foresight enables the proactive implementation of safety measures and the formulation of informed policy guidelines, ensuring that safety responses do not lag behind the development of capabilities. The phenomenon of emergence, where capabilities such as multi-step reasoning and instruction following arise unexpectedly with scaling (Wei et al. 2022), is particularly pertinent, as it is inherently surprising and challenging to predict. Employing task-specific scaling laws facilitates more granular predictions, enhancing our ability to anticipate and address emerging capabilities effectively (Caballero et al. 2023).

6 Additional sources to further ground prioritization of AI R&S research and funding include Bengio et al. (2025), The Singapore Consensus on Global AI Safety Research Priorities (2025), and Fist (2025) (particularly the sections on “basic science,” “security,” and “evidence of risks”).

Exemplary Research and Funding: Example work has been conducted across academic institutions (Caballero et al. 2023), AI companies (Ganguli et al. 2022), and research non-profits (Sevilla 2024). Ruan, Maddison, and Hashimoto (2024), from Stanford and the University of Toronto, introduce the concept of observational scaling laws, showing that small-model performance on specific benchmark tasks can anticipate the emergence of qualitatively new behaviors in larger models. Jones et al. (2025) at Anthropic propose a methodology for forecasting model failures at deployment scale.
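As one concrete illustration of what fitting a task-specific scaling law involves (the data points, functional form, and extrapolation target below are invented for this sketch and are not drawn from any cited work), a power law can be fit by least squares in log-log space and extrapolated to a larger model scale:

```python
import math

# Illustrative sketch: fit loss = a * params^(-b) to hypothetical
# (model size, task loss) pairs, then extrapolate to a larger scale.
params = [1e7, 1e8, 1e9, 1e10]   # model sizes (made-up)
loss   = [3.2, 2.4, 1.8, 1.35]   # task losses (made-up)

# Ordinary least squares on (log params, log loss).
xs = [math.log(p) for p in params]
ys = [math.log(l) for l in loss]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx

b = -slope               # scaling exponent
a = math.exp(intercept)  # prefactor

# Extrapolate the fitted trend to 1e11 parameters.
pred = a * (1e11 ** -b)
print(f"exponent b = {b:.3f}, predicted loss at 1e11 params = {pred:.2f}")
```

The observational scaling-law work cited above replaces these synthetic points with measured benchmark scores across model families, but the basic idea of fitting a smooth trend and extrapolating it to anticipate behavior at larger scale is the same.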
This line of research has also attracted philanthropic support: Epoch AI, a non-profit research institute working on forecasting AI capabilities, has received funding from Open Philanthropy (OP) and Jaan Tallinn (Epoch AI 2025). Furthermore, OP and Schmidt Futures supported work on “inverse scaling,” focused on cases of worse performance with increased scale (McKenzie et al. 2024). Finally, OP’s April 2025 RFP on “Improving Capability Evaluations” is, among other things, interested in research exploring how existing measures can predict the emergence of new capabilities (Open Philanthropy 2025a).

Respondent Insights: One respondent worried that the safety benefits from this area of research could be offset by advances in dual-use AI capabilities that it would enable. They suggested future support would be best targeted toward enabling independent researchers or academic labs to work with cutting-edge models.

Oversight and Monitoring of LLM-agents

Survey Description: Building automated oversight and monitoring tools to track LLM-agent actions.

Ranking: 4th highest-rated sub-area (T = 4.22, I = 4.7), with the 3rd highest number of responses (9).

Why It Matters: LLM agents can autonomously execute multi-step tasks, such as writing code, conducting research, or interacting with external systems, without continuous human supervision. This autonomy increases the risk of unintended or malicious actions, including data breaches, financial fraud, or physical harm. Without monitoring, failures could go undetected until significant damage occurs. As agents are integrated into critical domains like finance, healthcare, and cybersecurity, robust oversight mechanisms are essential to ensure accountability, enforce constraints on agents, and prevent large-scale harm.

Exemplary Research and Funding: There is ongoing academic research which proposes different
frameworks for monitoring and evaluating LLM agents (T. Yuan et al. 2024; Ruan et al. 2024; Guo et al. 2024). A relevant field, which in our taxonomy belongs to the sub-area “Control mechanisms for untrusted models” but has important overlap, is “AI control.” AI control is a category of plans which tries to ensure the safety of AI systems even if they try to avoid control. This work, developed at the non-profit research lab Redwood Research, initially focused on non-agentic LLMs (Greenblatt et al. 2024), but is being extended to agents, too (Bhatt et al. 2025). The work at Redwood Research has been made possible by several philanthropic funders, such as OP (Open Philanthropy 2023), the Survival and Flourishing Fund (Survival & Flourishing Fund 2024), and the Future of Life Institute (Future of Life Institute 2023). AI control has been endorsed by Google DeepMind (Shah et al. 2025) and UK AI Security Institute staff (Korbak et al. 2025).

Respondent Insights: Respondents suggested that this area would strongly benefit from additional funding, but that because of overlap with rote surveillance, it would be important to prioritize funding directions that would institute monitoring regimes outside of direct government control.

Evaluation Methodology and Metrics

Survey Description: Research focuses on designing holistic, theory-grounded metrics (e.g., focused on more than just harmlessness), accounting for scaffolding in evaluations, and characterizing safety–performance trade-offs.

Ranking: 12th highest-rated sub-area (T = 4.2, I = 4.14), with the second highest number of responses (14) and the second lowest variance, suggesting uniquely widespread optimism compared to most other sub-areas found in this survey.
Why It Matters: Research into holistic evaluation methodologies and statistically sound metrics is essential for ensuring that we are accurately measuring what matters and testing models in ways that reflect the full complexity of real-world use. Traditional benchmarks often emphasize narrow metrics like accuracy on static datasets, but this can give a false sense of capabilities. In practice, models may perform well on these curated test sets yet fail in interactive, adversarial, or open-ended contexts. Denser evaluations, which consider safety–performance trade-offs, trustworthiness, and robustness in addition to test-set accuracy, can expose blind spots in current systems and improve our ability to forecast model behavior in deployment settings. Success would enable more transparent, accountable, and adaptive assessments of AI systems, helping to prevent the deployment of models whose safety properties are poorly understood or misrepresented.

Exemplary Research and Funding: Evaluations as a tool to measure the capabilities and tendencies of AI systems receive substantial attention from AI companies (Ziv and Anadkat 2024), non-profit labs (e.g., METR), government institutes (UK AI Security Institute 2024), and academia. Safety–performance trade-offs have been the subject of research at AI companies (Bai et al. 2022). OP has disbursed grants worth $25 million on “benchmarking LLM agents on consequential real-world tasks” (Open Philanthropy 2024), the need for which had been discussed in Kiela et al. (2021).

Respondent Insights: Respondents emphasized the broad range
of opportunities for making “measurable progress” in evaluation methodology and metrics. Several highlighted the potential for importing methodological best practices from experimental psychology, such as validity testing and control of confounding variables. Others pointed to a large and urgent need for better automation and tooling to support scalable, rigorous assessments. The area’s consistently high ratings for importance and tractability were reflected in many comments, with one respondent concluding: “Anything that can bring clarity and focus to the science of evaluations will be enormously helpful.”

Detecting and Addressing Previously Unmeasured or Latent Capabilities

Survey Description: Developing strategies to uncover latent harmful abilities within AI models and prevent models from exhibiting undesirable behaviors such as “sandbagging,” or deceptively underperforming during evaluations.

Ranking: This approach garnered the highest number of responses out of all approaches (16, 30% of respondents) and was ranked 7th according to the promise score.

Why It Matters: Harmful capabilities include deception, persuasion, and the self-extraction of a model’s own capabilities. AI models can become deceptive if it serves their goals (Ward et al. 2023) and can even hide their deceptive behavior from the user (Scheurer, Balesni, and Hobbhahn 2024). Deception is in itself harmful, as it causes users to hold false beliefs. “Sandbagging,” the tendency of a model to underperform during formal evaluations of dangerous capabilities (van der Weij et al. 2025), can have even more severe consequences. The concern is that a model might be engineered, or may evolve tendencies, such that it exhibits low performance on safety-critical benchmarks while in reality possessing the latent capacity to engage in harmful behaviors, which will then unfold after deployment.
Exemplary Research and Funding: Some of the relevant work on deception was produced at academic institutions (Ward et al. 2023) and at non-profit research organizations supported by philanthropic funders (Scheurer, Balesni, and Hobbhahn 2024) and the ML Alignment & Theory Scholars Program (van der Weij et al. 2025). In an RFP closed in April 2025, OP is eliciting proposals on “Experiments on alignment faking” and “Evaluating whether models can hide dangerous behaviors,” two of 21 sub-areas on which they want to spend $40 million (Open Philanthropy 2025b). What seems particularly relevant from a funder’s perspective in this area is that developers can have an incentive to understate a model’s capabilities to meet regulatory standards (van der Weij et al. 2025). This indicates structural reasons why detecting and measuring latent harmful capabilities might not be sufficiently invested in by the major AI companies. It should be noted, however, that companies have been doing some work in this area, e.g., Anthropic (Järviniemi and Hubinger 2024).

Respondent Insights: Respondents suggested that red-blue team exercises and the development of model organisms could be useful tools for uncovering latent harmful capabilities. One concern expressed was that while it may be feasible to detect capabilities researchers already suspect might exist, identifying unexpected or unanticipated abilities could be significantly more challenging.

Limitations

As a first iteration, our research has several limitations that should inform the interpretation of its results.

Response Count

Despite extensive outreach,
we received responses from a total of 53 experts, below our initial goal. This sample size limits the survey’s reliability as a gauge of the perspectives of the broader AI R&S expert community. Accordingly, we encourage readers to consider this survey as one piece of evidence among many, rather than as ground truth. For future iterations, we may consider offering an honorarium or other incentives to encourage greater participation (Grace et al. 2024).

Response Distribution

Several categories and sub-areas consistently received fewer responses, limiting our ability to provide robust quantitative insights across all areas. Although the distribution of outreach targeting subject matter experts was approximately balanced across categories, this balance did not carry over into responses. The anonymous nature of the survey prevents us from confirming whether experts were biased toward their own research areas (though we did ask experts to answer questions specifically in areas aligning with their expertise, in an attempt to reduce respondent bias by making it more uniform across areas). The concentration of machine learning academics may have skewed responses toward popular topics in that field while potentially underrepresenting views on areas like privacy or fairness. Other possible explanations include network bias, or organic forwarding of the survey to experts not formally recruited.

Design Choices for Scope of Impact

Our specific definitions of impact, tractability, and importance, including number of deaths prevented, time horizons, and funding amounts, may have disadvantaged certain research directions, especially long-term approaches requiring more than $10M USD in investment. However, we defined these specific parameters to ensure the results provided actionable insights for funders, researchers, and policymakers, with the objective of conveying the value of near-term marginal contributions (as opposed to “grand-challenge”-style bets).
Evolving Research Landscape

This survey captures expert perspectives during a specific period. New reliability and security challenges have already emerged since this survey concluded in early 2025, including challenges related to reasoning models and inference-time compute scaling following releases like DeepSeek’s R1 models (DeepSeek-AI et al. 2025). A “live,” continuously updated future version of the survey could track real-time changes and update categories accordingly, and we invite interested readers to contact us to explore the feasibility of such a project.

Policy Implications

Policy can be a powerful instrument in mitigating critical research gaps. Specifically, government can directly fund neglected areas, incentivize investment, play a coordination role, strengthen AI talent pipelines, and expand researcher access to frontier models.

Direct Funding

Government’s role in promoting research will depend on the source of the research gap. In some instances, research areas may be underfunded because of limited financial resources. To address this, Congress and relevant executive agencies, such as the National Science Foundation, should consider directly appropriating funding toward the most promising research areas identified in this report. Respondents classified areas as tractable if a $10M USD investment over the next two years would yield substantial improvements in AI R&S: a modest sum by government grantmaking standards that could yield significant dividends in terms of increased reliability leading to broader adoption. In addition, government is uniquely positioned to direct higher
sums to underfunded research areas that score high on importance but low on tractability (e.g., supply chain security, access control and interface hardening). The AI R&D ecosystem may be unable to address these gaps without the large, long-term investments usually provided by government.

Incentivizing Investment

Short of providing direct funding, government agencies can use various indirect methods to incentivize additional investment in promising AI R&S research areas. These may include:
● Listing identified research gaps as official government priorities, thereby signaling their interest and spurring investment from industry, academia, and civil society.
● Promoting AI R&S research through tax incentives or subsidies.
● Incorporating specific AI R&S research commitments into broader agreements with industry stakeholders.
● Lowering the cost of research by leveraging existing frameworks such as the National Artificial Intelligence Research Resource (NAIRR) pilot, which provides computational, data, software, model, training, and user support resources.

Coordinating Investment

In other instances, research areas may be neglected not because of a lack of resources but because of limited awareness among funders that specific funding gaps exist. This represents a coordination problem that government can help alleviate by identifying and proactively communicating research needs internally and externally, targeting relevant government researchers, industry stakeholders, academia, and the broader research community. As part of its efforts to identify future research gaps in real time, government could establish public-private information-sharing mechanisms, giving it not only a comprehensive view of domestic AI R&D but also insights into progress toward advanced AI thresholds within AI companies.
This coordination role may extend past the research stage, with government officials providing incentives for AI developers and deployers to incorporate best practices learned from AI R&S research.

Strengthening Talent Pipelines

Underinvestment may also stem from the limited availability of specialized talent to conduct high-quality research. In the short term, where organizations lack the required talent to conduct relevant research, government can sponsor upskilling efforts, such as scholarships for individuals to study specific critical AI R&S areas, and develop specialized pathways for top international talent to work in the United States. Government should also consider how to incentivize the use of AI models themselves to differentially accelerate reliability and security research (Carlsmith 2025). In the long term, government should redouble its efforts to expand the domestic AI talent pipeline, granting scholarships and research grants to AI undergraduate and graduate students.

Expanding Researcher Access

Finally, a lack of research may reflect limited researcher access to advanced AI systems that are either necessary for conducting meaningful research or that are themselves the subject of the research. In addition to continuing to use NAIRR, policymakers can democratize access by encouraging industry to waive model access costs for underresourced research organizations. In addition, government can encourage industry to grant external researchers pre-deployment access to models through mechanisms such as regulatory sandboxes, or by offering tax incentives to companies that submit their models to independent evaluations.

Future Directions

Refining our understanding of expert priorities necessitates methodological advancements in future elicitation efforts. Building on the limitations identified in this study,
subsequent research should aim for:
● Broader and More Systematic Sampling: Employing methods to achieve higher response rates and potentially more representative samples of relevant expertise. Incorporating modest incentives, as suggested by Grace (2024), may be beneficial.
● Refined Survey Instruments: Improving the clarity and precision of demographic questions (e.g., regarding relevant experience, organizational roles).
● Exploring Rationales: Conducting studies (perhaps using qualitative methods or follow-up workshops) to delve deeper into the reasoning behind expert assessments, key considerations, and perceived barriers or enablers for different AI R&S approaches.
● Inclusive Processes: Recognizing that technical expertise is only one component, future work should explore methods for incorporating perspectives from diverse stakeholders and the broader public, potentially through participatory workshops or public surveys (see Pasquini, 2025), to ensure alignment with societal values.
● Mitigating Respondent Bias: Addressing the challenge of respondents’ bias toward their own research, either via broader sampling (mentioned above) or by asking respondents to report their area(s) of expertise and including this information in the analysis.
● Real-Time Surveying: The process of designing, administering, and publishing a survey is time-intensive, and results can quickly become outdated. We envision that a more up-to-date version of this survey could be continuously maintained and updated, and plan to explore the feasibility of such a project.7

Addressing these methodological challenges will be crucial for developing and implementing the robust portfolio of technical and governance approaches needed to navigate the risks and opportunities of advancing AI.

7 We also believe it will be important for respondents to be able to add research areas.
For example, respondents in this survey suggested 16 additional unique research directions in the qualitative feedback section, suggesting that our initial list did not comprehensively cover the full space of technical AI R&S interventions under active research.

Conclusion

Navigating the complex landscape of AI reliability and security research demands clear prioritization, especially as transformative capabilities develop at a rapid pace. Our survey offers a reference point by quantifying expert judgments across a wide set of technical research directions, identifying potential priorities for funders, researchers, and policymakers seeking to mitigate risks from advanced AI. A clear picture emerged from expert assessments: immediate priority lies heavily in practical evaluation, monitoring, and forecasting to anticipate and detect potentially dangerous capabilities before they cause harm. Multi-agent safety and robust applied security measures also featured prominently. Notably, experts consistently favored concrete, domain-specific interventions over more abstract theoretical frameworks for driving near-term progress. While leading AI companies are active in some top-ranked areas, our findings underscore significant opportunities for additional, targeted support. Furthermore, expert insights highlight the need for sustained, potentially large-scale investment in important but less tractable areas, many of which involve improving security around the development, storage, and access of AI systems. To maximize the impact of limited resources, funders and policymakers should leverage both the overall promise scores and the specific bottlenecks identified here. Strategic deployment of direct funding, investment incentives, researcher support, and field-wide coordination efforts can all contribute
to better targeting critical gaps. We must acknowledge the limitations of this study. First, this study represents a snapshot of a rapidly evolving field. Second, our sample size and response distribution limited our ability to glean insights across all research areas. Future iterations should pursue broader sampling and explore real-time participatory methods to align research directions with up-to-date priorities. We see this work as a template for iterative improvement, and actively welcome feedback and collaboration to refine our methodology, broaden our reach, and maximize the impact of future iterations. If you have suggestions or wish to contribute, please reach out.

Acknowledgements

This project benefited significantly from the contributions and expertise of numerous individuals. Assessing the landscape of AI reliability and security is an inherently interdisciplinary and ambitious endeavor, made possible only through generous collaboration. We extend our sincere gratitude to the following individuals whose insights and thoughtful feedback greatly enhanced the quality and rigor of this work: Usman Anwar, Richard Ren, Peter Barnett, Zoe Williams, Ari Holtzman, Oscar Delaney, Micah Carroll, Buck Shlegeris, Daniel Kang, Peter Hase, Daniel Brown, Nouha Dziri, Chirag Agarwal, Niloofar Mireshghallah, Leilani Gilpin, David Krueger, Noemi Dreksler, Willem Sleegers, David Moss, and, of course, all of our anonymous survey respondents. All remaining errors are our own.

Appendix A: Taxonomy

For this survey, we developed a taxonomy of technical reliability and security research consisting of 105 sub-areas organized into 20 broader categories. See the Methods section for details on the design process.
We reproduce here the taxonomy in its entirety, along with the descriptions (as seen by survey respondents) and references to example work (which were not shown to respondents). We believe this taxonomy may serve as a capable basis for ongoing mapping of this research landscape.

Top-level categories

Respondents first selected one or more of the following categories:
1. Theoretical foundations and provable safety in AI systems
2. Training and fine-tuning methods for alignment and safety
3. Scalable oversight and alignment techniques
4. Understanding in-context learning, reasoning, and scaling behavior
5. Interpretability, explainability, and transparency
6. Robustness
7. Improving the science of AI evaluation
8. Domain-specific AI evaluation design
9. Agentic LLMs and single-agent risks
10. Multi-agent interactions
11. Cooperative AI and mechanism design
12. Fairness
13. Accountability
14. Ethics
15. Choosing and operationalizing values in AI
16. Privacy
17. Cybersecurity for AI models
18. Hardware and infrastructure security for AI
19. Improving general understanding of deep learning
20. Research on safety in non-LLM systems

Category and sub-area descriptions

Area descriptions were shown to respondents alongside importance and tractability questions. The initial survey did not include “example work” citations for each sub-area, but they have been provided here as additional context for readers.

1. Theoretical foundations and provable safety in AI systems: Advancing the theoretical foundations of AI safety by building models and frameworks that ensure provably correct and robust behavior. These efforts span from verifiable architectures and formal
verification methods to embedded agency, decision theory, incentive structures aligned with causal reasoning, and control theory.

a. Building verifiable and robust AI architectures: Constructing AI systems with architectures that support formal verification and robustness guarantees, such as world models that enable safe and reliable planning, or guaranteed safe AI with Bayesian oracles. This area emphasizes simplicity and transparency to aid in provability. Example work includes (Dalrymple et al. 2024) and (Bengio et al. 2024).

b. Formal verification of AI systems: Applying formal methods to verify that AI models and algorithms meet stringent safety, robustness, and performance criteria. This includes proving resilience against adversarial inputs and perturbations, and certifying conformance to specified safety properties under varying conditions. Example work includes (Seshia, Sadigh, and Sastry 2022) and (Henzinger, Lechner, and Žikelić 2021).

c. Decision theory and rational agency: Establishing formal decision-making frameworks that ensure rational and safe choices by AI agents, potentially drawing on concepts like causal and evidential decision theory. Example work includes (Tennant, Hailes, and Musolesi 2023).

d. Embedded agency: Exploring how agents can model and reason about themselves and their environment as interconnected parts of a single system, addressing challenges like self-reference, resource constraints, and the stability of reasoning processes. This includes tackling problems arising from the lack of a clear boundary between the agent and its environment. Example work includes (Demski and Garrabrant 2020).

e. Causal incentives: Developing frameworks that formalize how to align agent incentives with safe and desired outcomes by ensuring their causal understanding matches intended objectives.
This research provides a formal language for guaranteeing safety, addressing challenges like goal misspecification, and complementing broader efforts in agent foundations and robust system design. Example work includes (Everitt et al. 2021), (Farquhar, Carey, and Everitt 2022), and (Ganguly et al. 2023).

f. Control theory applications in AI safety: Leveraging principles from control theory to ensure stability, robustness, and safety for AI-driven systems interacting with dynamic physical environments. This includes designing controllers and feedback mechanisms to maintain system integrity, prevent runaway behaviors, and achieve desired performance criteria under uncertainty. Example work includes (Xiao et al. 2024) and (D. D. Fan et al. 2020).

2. Training and fine-tuning methods for alignment and safety: Developing reliable training and fine-tuning strategies for AI models to ensure that their outputs remain safe, interpretable, and aligned with intended goals. This involves understanding how fine-tuning affects model behavior, employing adversarial training for robust alignment, carefully adjusting pre-training processes, and improving data quality and auditing methods.

a. Understanding how fine-tuning changes a pretrained model: Investigating how fine-tuning alters a model’s internal representations and behaviors to better predict, and ultimately control, downstream safety outcomes. Example work includes (B. Y. Lin et al. 2023), (Jain et al. 2024), and (Clymer et al. 2023).

b. Develop output-based adversarial training techniques for more robust alignment: Developing training procedures, such as adversarial training focused on internal model representations, or ‘process supervision’, that directly optimize against adversarial examples and undesirable outputs, making models more resistant to manipulations that could lead to unsafe behaviors. Example
https://arxiv.org/abs/2505.21664v1
work includes (Miyato et al. 2019), (Lightman et al. 2023), and (Casper et al. 2024).

c. Scalable techniques for targeted modifications of LLM behavior (including unlearning): Creating scalable methods for precisely adjusting model outputs, such as removing unwanted content or refining responses to adhere to alignment constraints without broadly degrading performance. This may also include removal of unknown or latent undesirable capabilities that emerge in large models. Example work includes (Hubinger et al. 2024), (Y. Yuan et al. 2024), (Belrose et al. 2025), and (Jang et al. 2023).

d. Retrieval-augmented pre-training: Incorporating retrieval mechanisms during pre-training to better ground models in verified information. Example work includes (Guu et al. 2020).

e. Pretraining alterations to improve interpretability: Altering pre-training protocols to produce models with clearer internal representations and decision-making pathways, allowing for more effective downstream analysis and intervention. Example work includes (Ismail, Bravo, and Feizi 2021) and (Golechha, Cope, and Schoots 2024).

f. Limiting models’ ability to perform harmful tasks: Introducing mechanisms during pre-training that proactively limit a model’s potential to learn or perform harmful tasks, constraining the model’s capability space to safer domains before downstream fine-tuning. Example work includes (Henderson et al. 2023) and (Zhou et al. 2023).

g. Scalable data auditing, filtering, and Pretraining with Human Feedback (PHF): Developing tools for large-scale data auditing, filtering, training-data attribution, and incorporating human feedback at the pre-training stage. Example work includes (Fernando et al. 2023), (Agarwal, D’souza, and Hooker 2022), (Grosse et al. 2023), and (Korbak et al. 2023).

3. Scalable oversight and alignment techniques: Developing approaches to guide and align increasingly complex AI systems even in tasks where direct oversight is challenging, such as by the use of AI feedback, debate, iterative training processes, and enhanced elicitation methods.

a. Reinforcement Learning from AI Feedback (RLAIF): Using feedback generated by AI systems to guide reinforcement learning, effectively scaling the oversight process beyond purely human-labeled data. Example work includes (H. Lee et al. 2023).

b. Debate: Encouraging multiple models (or model instances) to discuss and critique each other’s reasoning, with human overseers judging the best arguments. Example work includes (Brown-Cohen, Irving, and Piliouras 2023) and (Irving, Christiano, and Amodei 2018).

c. Iterated Distillation and Amplification (IDA): An alignment approach where increasingly capable AI systems are trained by recursively using weaker AIs to teach and amplify smarter successors. To address the limitations of human-defined feedback and reward functions, IDA decomposes complex tasks, using AI assistance, into simpler subtasks with accessible human or algorithmic evaluation signals, enabling scalable alignment and improved performance over time. Example work includes (Christiano, Shlegeris, and Amodei 2018).

d. Better elicitation mechanisms from humans: Improving methods to extract more reflective, aspirational, and consistent human preferences, to provide data to guide AI systems along these preferences and update in accordance with changes in values over time. Example work includes (Klingefjord, Lowe, and Edelman 2024) and (Oliveira et al. 2023).

e. Recursive reward modeling: Breaking down complex tasks into simpler subtasks for which reward signals can be more easily specified, then “building up” to oversee more complex
behaviors. Example work includes (Leike et al. 2018) and (Jeff Wu et al. 2021).

4. Understanding in-context learning, reasoning, and scaling behavior: Methods to gain a comprehensive understanding of how large language models learn, reason, and scale, such as by examining in-context learning (ICL) mechanisms, the influence of data and design on behavior, the theoretical foundations of scaling, the emergence of advanced capabilities, and the nature of reasoning.

a. Mechanistic understanding of In-Context Learning: Investigating the internal processes by which transformers perform ICL, including whether these processes resemble emergent optimization behavior, advanced pattern-matching, or other structural mechanisms. This research may include scenario-based analyses to identify the circuits critical for ICL under artificial constraints. Example work includes (Xie et al. 2021), (L. Lin, Bai, and Mei 2024), and (Olsson et al. 2022).

b. Influences on ICL behavior and performance: Examining how the tasks, instructions, pre-training data distribution, and design choices (e.g., instruction tuning, model size, training duration) shape the range and reliability of behaviors that can be specified in-context. Example work includes (Y. Wang et al. 2023) and (Hahn and Goyal 2023).

c. Theoretical and representational aspects of scaling: Clarifying when and how scaling drives improvements, such as by building a more robust theoretical framework to describe scaling laws, or analyzing how increasing model size and training data influence learned representations. Example work includes (Viering and Loog 2023) and (Vyas et al. 2023).

d. Emergence and task-specific scaling patterns: Formalizing and forecasting the emergence of new capabilities as models scale, investigating whether scaling alone can produce certain capabilities, and designing methods for discovering task-specific scaling laws. Example work includes (McKenzie et al. 2024), (Ganguli et al. 2022), and (Caballero et al. 2023).

e. Impact of scaling and training on reasoning capabilities: Determining whether and how increases in model size and training complexity enhance reasoning abilities, and identifying which aspects of training conditions and data sources facilitate the acquisition of reasoning skills. Example work includes (Wei et al. 2022), (Saparov et al. 2023), and (Magister et al. 2023).

f. Mechanistic understanding and limits of LLM reasoning: Examining the underlying mechanisms of reasoning in LLMs and exploring non-deductive reasoning capabilities of LLMs (e.g., causal or social reasoning). Example work includes (Hou et al. 2023) and (Gandhi, Sadigh, and Goodman 2023).

g. Limits of Transformers: Defining the computational limits of transformers in supporting sophisticated reasoning. Example work includes (Merrill and Sabharwal 2023) and (Strobl 2023).

5. Interpretability, explainability, and transparency: Ensuring that AI systems are understandable, trustworthy, and transparent. This involves developing tools and methods to interpret model internals, refining the reliability and scalability of interpretability techniques, exploring ways to elicit and explain model reasoning, and improving the transparency of complex models.

a. Interpretability foundations: Conducting theoretical and experimental studies of how models represent and encode concepts, emphasizing structural and abstraction-level insights, including by distinguishing linear from non-linear encodings, understanding polysemanticity and superposition, examining concept mismatches between models and humans, and discovering more accurate abstractions for interpretability. Example work includes (Bilodeau et al. 2024), (Mahinpei et