\[
\begin{aligned}
&= -\frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(N\ell_N(\theta^{\star}_{i}) + g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta' + O\!\left(\tfrac{1}{\sqrt{N}}\right)\right) p'(\theta')\,d\theta'}{\int_{\Theta'} \exp\!\left(N\ell_N(\theta^{\star}_{i}) + g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta' + O\!\left(\tfrac{1}{\sqrt{N}}\right)\right) p'(\theta')\,d\theta'} \cdot O\!\left(\frac{1}{N}\right), \\
&= -\frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta' + O\!\left(\tfrac{1}{\sqrt{N}}\right)\right) p'(\theta')\,d\theta'}{\int_{\Theta'} \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta' + O\!\left(\tfrac{1}{\sqrt{N}}\right)\right) p'(\theta')\,d\theta'} \cdot O\!\left(\frac{1}{N}\right), \\
&= -\frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) \exp\!\left(O\!\left(\tfrac{1}{\sqrt{N}}\right)\right) p'(\theta')\,d\theta'}{\int_{\Theta'} \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) \exp\!\left(O\!\left(\tfrac{1}{\sqrt{N}}\right)\right) p'(\theta')\,d\theta'} \cdot O\!\left(\frac{1}{N}\right), \\
&= O\!\left(-\frac{1}{N}\,\frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) p'(\theta')\,d\theta'}{\int_{\Theta'} \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) p'(\theta')\,d\theta'}\right), \qquad (21)
\end{aligned}
\]
where we have multiplied top and bottom by \(\exp(-N\ell_N(\theta^{\star}_{i}))\) to derive the second equality and used the fact that \(0 < \exp(O(\tfrac{1}{\sqrt{N}})) < \infty\) to derive the final line. Now, as the prior is Lipschitz, we make a Taylor series expansion about \(\theta^{\star}_{i}\): \(p(\theta) = p(\theta^{\star}_{i}) + O(\|\theta - \theta^{\star}_{i}\|)\), hence:
\[
p'(\theta') = p\!\left(\theta^{\star}_{i} + \tfrac{1}{\sqrt{N}}\theta'\right) = p(\theta^{\star}_{i}) + O\!\left(\tfrac{1}{\sqrt{N}}\right).
\]
This allows us to find an asymptotic expression for Eq. (21):
\[
\begin{aligned}
\frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) p'(\theta')\,d\theta'}{\int_{\Theta'} \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) p'(\theta')\,d\theta'}
&= \frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) p(\theta^{\star}_{i})\,d\theta'\left(1 + O\!\left(\tfrac{1}{\sqrt{N}}\right)\right)}{\int_{\Theta'} \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) p(\theta^{\star}_{i})\,d\theta'\left(1 + O\!\left(\tfrac{1}{\sqrt{N}}\right)\right)} \\
&= \frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) d\theta'}{\int_{\Theta'} \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) d\theta'}\left(1 + O\!\left(\tfrac{1}{\sqrt{N}}\right)\right).
\end{aligned}
\]
We re-write the exponential term to recover a quadratic form:
\[
\exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right)
= \exp\!\left(\left(\tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N} + \theta'\right)^{\top} H^{\star}_{N} \left(\tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N} + \theta'\right) - \tfrac{1}{4}\, g^{\star\top}_{i,N} H^{\star-1}_{N} g^{\star}_{i,N}\right).
\]
Substituting yields:
\[
\frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) d\theta'}{\int_{\Theta'} \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) d\theta'}
= \frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(\left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)^{\top} H^{\star}_{N} \left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)\right) d\theta'}{\int_{\Theta'} \exp\!\left(\left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)^{\top} H^{\star}_{N} \left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)\right) d\theta'}.
\]
In this form, we notice the expectation is that of a Gaussian \(\mathcal{N}\!\left(\mu = -\tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N},\ \Sigma = -H^{\star-1}_{N}\right)\) restricted to \(\Theta'\). Noting that \(\Theta' \xrightarrow{N\to\infty} \mathbb{R}^{d}\), hence:
\[
\begin{aligned}
\frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(\left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)^{\top} H^{\star}_{N} \left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)\right) d\theta'}{\int_{\Theta'} \exp\!\left(\left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)^{\top} H^{\star}_{N} \left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)\right) d\theta'}
&= O\!\left(\frac{\int_{\mathbb{R}^d} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(\left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)^{\top} H^{\star}_{N} \left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)\right) d\theta'}{\int_{\mathbb{R}^d} \exp\!\left(\left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)^{\top} H^{\star}_{N} \left(\theta' + \tfrac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N}\right)\right) d\theta'}\right), \\
&= O\!\left(\mathbb{E}_{\theta' \sim \mathcal{N}\left(-\frac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N},\, -H^{\star-1}_{N}\right)}\!\left[\theta'^{\top} H^{\star}_{i}\theta'\right]\right). \qquad (22)
\end{aligned}
\]
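The completion-of-squares identity used to recover the quadratic form is easy to sanity-check numerically. The following sketch (plain NumPy, our own illustration with symbols mirroring \(g^{\star}_{i,N}\) and \(H^{\star}_{N}\)) verifies that both sides of the exponent agree for a random negative-definite \(H\):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.normal(size=(d, d))
H = -(A @ A.T + np.eye(d))           # negative definite, like H_N at a maximum
g = rng.normal(size=d)
theta = rng.normal(size=d)

# Left-hand side: the raw exponent g^T theta + theta^T H theta.
lhs = g @ theta + theta @ H @ theta

# Right-hand side: completed square minus the constant (1/4) g^T H^{-1} g.
shift = 0.5 * np.linalg.solve(H, g)   # (1/2) H^{-1} g
rhs = (shift + theta) @ H @ (shift + theta) - 0.25 * g @ np.linalg.solve(H, g)

assert np.isclose(lhs, rhs)
```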
Putting everything together, we have:
\[
\begin{aligned}
\frac{\int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta)) \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta^{\star}_{i}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}
&= O\!\left(-\frac{1}{N}\,\frac{\int_{\Theta'} \theta'^{\top} H^{\star}_{i}\theta' \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) p'(\theta')\,d\theta'}{\int_{\Theta'} \exp\!\left(g^{\star\top}_{i,N}\theta' + \theta'^{\top} H^{\star}_{N}\theta'\right) p'(\theta')\,d\theta'}\right), && \text{Eq. (21)} \\
&= O\!\left(-\frac{1}{N}\,\mathbb{E}_{\theta' \sim \mathcal{N}\left(-\frac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N},\, -H^{\star-1}_{N}\right)}\!\left[\theta'^{\top} H^{\star}_{i}\theta'\right]\right). && \text{Eq. (22)}
\end{aligned}
\]
Using standard results for the multivariate Gaussian [53] yields our desired result:
\[
\begin{aligned}
-\frac{1}{N}\,\mathbb{E}_{\theta' \sim \mathcal{N}\left(-\frac{1}{2} H^{\star-1}_{N} g^{\star}_{i,N},\, -H^{\star-1}_{N}\right)}\!\left[\theta'^{\top} H^{\star}_{i}\theta'\right]
&= \frac{\operatorname{Tr}\!\left(H^{\star}_{i} H^{\star-1}_{N}\right) - \tfrac{1}{4}\, g^{\star\top}_{i,N} H^{\star-1\top}_{N} H^{\star}_{i} H^{\star-1}_{N} g^{\star}_{i,N}}{N}, \\
&= O\!\left(\frac{\operatorname{Tr}(I) - g^{\star\top}_{i,N} H^{\star-1}_{i} g^{\star}_{i,N}}{N}\right), \\
&= O\!\left(\frac{d - g^{\star\top}_{i,N} H^{\star-1}_{i} g^{\star}_{i,N}}{N}\right),
\end{aligned}
\]
almost surely, where we have used the strong law of large numbers on the empirical Hessian from Assumption 1 to derive the second line. In our final Lemma, we show that regions that are not close to the maximising points diminish exponentially
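The "standard result" invoked here is \(\mathbb{E}_{\theta' \sim \mathcal{N}(\mu, \Sigma)}[\theta'^{\top} A \theta'] = \operatorname{Tr}(A\Sigma) + \mu^{\top} A \mu\). A quick Monte-Carlo check (illustrative NumPy, not the paper's code; \(A\), \(\mu\), \(\Sigma\) are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
B = rng.normal(size=(d, d))
A = B @ B.T                               # any symmetric matrix works
Sigma = 0.5 * np.eye(d) + 0.1 * np.ones((d, d))   # positive definite
mu = rng.normal(size=d)

# Closed form: Tr(A Sigma) + mu^T A mu.
closed_form = np.trace(A @ Sigma) + mu @ A @ mu

# Monte-Carlo estimate of E[x^T A x] with x ~ N(mu, Sigma).
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = np.einsum('ni,ij,nj->n', samples, A, samples).mean()

assert abs(mc - closed_form) / abs(closed_form) < 0.05
```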
https://arxiv.org/abs/2505.22442v1
in posterior probability as \(N\) grows large.

Lemma 4. Under Assumption 1, \(\mathbb{E}_{D_N \sim P^{\star}_{\mathrm{Data}}}\!\left[P(\bar{\Theta} \mid D_N)\right] = O(\exp(-N))\).

Proof. We start by splitting the posterior expectation into integrals over \(\bar{\Theta}\) and \(\Theta \setminus \bar{\Theta}\):
\[
\begin{aligned}
P(\bar{\Theta} \mid D_N) &= \frac{\int_{\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}, \\
&= \frac{\int_{\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta + \int_{\Theta\setminus\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}.
\end{aligned}
\]
Dividing top and bottom by \(\int_{\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta\):
\[
P(\bar{\Theta} \mid D_N) = \frac{1}{1 + \dfrac{\int_{\Theta\setminus\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}}.
\]
Hence if we can show there exists some \(N' < \infty\) and a function \(C\exp(cN)\) with positive constants \(c\) and \(C\) that lower-bounds the ratio:
\[
C\exp(cN) \le \frac{\int_{\Theta\setminus\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}
\]
almost surely for all \(N \ge N'\), then it follows:
\[
\mathbb{E}_{D_N \sim P^{\star}_{\mathrm{Data}}}\!\left[P(\bar{\Theta} \mid D_N)\right] = O\!\left(\frac{1}{1 + \exp(N)}\right) = O(\exp(-N)).
\]
From Assumption 1, each \(\theta^{\star}_{i}\) maximises \(\ell(\theta)\) with \(\sup_{\theta' \in \bar{\Theta}} \ell(\theta') < \ell(\theta^{\star}_{i})\). As \(\ell(\theta)\) is continuous, there thus exists a small, closed ball \(B(\theta^{\star}_{j}, r) := \{\theta \mid \|\theta^{\star}_{j} - \theta\| \le r\}\) of radius \(r > 0\) centred on some \(\theta^{\star}_{j}\) such that \(\sup_{\theta' \in \bar{\Theta}} \ell(\theta') < \min_{\theta'' \in B(\theta^{\star}_{j}, r)} \ell(\theta'')\). From Assumption 1, the uniform strong law of large numbers holds with \(\ell_N(\theta) \xrightarrow{\mathrm{Unif.\ a.s.}} \ell(\theta)\). By the definition of the limit and continuity of \(\ell_N(\theta)\), there thus exists some finite \(N'\) such that \(\sup_{\theta' \in \bar{\Theta}} \ell_N(\theta') < \min_{\theta'' \in B(\theta^{\star}_{j}, \frac{r}{2})} \ell_N(\theta'')\) for all \(N \ge N'\) almost surely, where \(B(\theta^{\star}_{j}, \frac{r}{2})\) is a ball of half radius \(\frac{r}{2}\). Noting that \(B(\theta^{\star}_{j}, \frac{r}{2}) \subset \Theta \setminus \bar{\Theta}\) and \(0 \le \exp(N\ell_N(\theta))\), this allows us to lower-bound the integral:
\[
\begin{aligned}
\int_{\Theta\setminus\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta &\ge \int_{B(\theta^{\star}_{j}, \frac{r}{2})} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta, \\
&\ge \exp\!\left(N \min_{\theta'' \in B(\theta^{\star}_{j}, \frac{r}{2})} \ell_N(\theta'')\right) \int_{B(\theta^{\star}_{j}, \frac{r}{2})} p(\theta)\,d\theta, \\
&= \exp\!\left(N \min_{\theta'' \in B(\theta^{\star}_{j}, \frac{r}{2})} \ell_N(\theta'')\right) P\!\left(B\!\left(\theta^{\star}_{j}, \tfrac{r}{2}\right)\right).
\end{aligned}
\]
We can also upper-bound the integral:
\[
\begin{aligned}
\int_{\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta &\le \exp\!\left(N \sup_{\theta' \in \bar{\Theta}} \ell_N(\theta')\right) \int_{\bar{\Theta}} p(\theta)\,d\theta, \\
&= \exp\!\left(N \sup_{\theta' \in \bar{\Theta}} \ell_N(\theta')\right) P(\bar{\Theta}).
\end{aligned}
\]
Using these results, we lower-bound the ratio as:
\[
\begin{aligned}
\frac{\int_{\Theta\setminus\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}
&\ge \frac{\exp\!\left(N \min_{\theta'' \in B(\theta^{\star}_{j}, \frac{r}{2})} \ell_N(\theta'')\right) P\!\left(B\!\left(\theta^{\star}_{j}, \tfrac{r}{2}\right)\right)}{\exp\!\left(N \sup_{\theta' \in \bar{\Theta}} \ell_N(\theta')\right) P(\bar{\Theta})}, \\
&= \exp\!\left(N\!\left(\min_{\theta'' \in B(\theta^{\star}_{j}, \frac{r}{2})} \ell_N(\theta'') - \sup_{\theta' \in \bar{\Theta}} \ell_N(\theta')\right)\right) \frac{P\!\left(B\!\left(\theta^{\star}_{j}, \tfrac{r}{2}\right)\right)}{P(\bar{\Theta})}.
\end{aligned}
\]
Let \(\frac{P(B(\theta^{\star}_{j}, \frac{r}{2}))}{P(\bar{\Theta})} = C > 0\) from Assumption 1.
As there exists some \(N'\) such that \(\min_{\theta'' \in B(\theta^{\star}_{j}, \frac{r}{2})} \ell_N(\theta'') > \sup_{\theta' \in \bar{\Theta}} \ell_N(\theta')\) for all \(N > N'\), we have shown there exist some positive constants \(c > 0\) and \(C > 0\) such that
\[
C\exp(cN) \le \frac{\int_{\Theta\setminus\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\bar{\Theta}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta},
\]
for all \(N > N'\) almost surely, as required.

We now present our proof of Theorem 2. Here we split the posterior expectation up into small regions close to maximising points and regions away from maximising points. We then apply our two lemmas to each region. Our result then follows by an application of the central limit theorem under Assumption 1.

Theorem 2. Let the data be drawn from the underlying true distribution \(D_N \sim P^{\star}_{\mathrm{Data}}\). Under Assumption 1, there exists some constant \(0 < C < \infty\) such that for sufficiently large \(N\):
\[
\mathbb{E}_{D_N \sim P^{\star}_{\mathrm{Data}}}\!\left[\mathrm{Regret}(\mathcal{M}^{\star}, D_N)\right] \le 2R_{\max} \cdot \sqrt{1 - \exp\!\left(\frac{-Cd}{(1-\gamma)N}\right)}.
\]
Proof. Under the notation introduced at the start of Theorem 2, we write the PIL as:
\[
\begin{aligned}
I^{\pi}_{N} &:= \mathbb{E}_{s,a \sim \rho^{\star}_{\pi}}\!\left[\mathbb{E}_{\theta \sim P_{\Theta}(D_N)}\!\left[\mathrm{KL}\!\left(P^{\star}_{R,S}(s,a) \,\|\, P_{R,S}(s,a,\theta)\right)\right]\right], \\
&= \mathbb{E}_{\theta \sim P_{\Theta}(D_N)}\!\left[\mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\log p(r,s'|s,a,\theta^{\star}) - \log p(r,s'|s,a,\theta)\right]\right], \\
&= \mathbb{E}_{\theta \sim P_{\Theta}(D_N)}\!\left[\ell^{\star} - \ell(\theta)\right].
\end{aligned}
\]
Under this same notation, we write the posterior density as:
\[
p(\theta|D_N) = \frac{\exp(N\ell_N(\theta))\, p(\theta)}{\int_{\Theta} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}. \qquad (23)
\]
Now, under Assumption 1, we split the inner expectation into small regions \(\Theta^{\star}_{i}\) around each maximising point \(\theta^{\star}_{i}\) and the remainder of the parameter space \(\bar{\Theta} := \Theta \setminus \bigcup_{i=1}^{K} \Theta^{\star}_{i}\):
\[
\mathbb{E}_{\theta \sim P_{\Theta}(D_N)}\!\left[\ell^{\star} - \ell(\theta)\right] = \sum_{i=1}^{K} \int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta))\, p(\theta|D_N)\,d\theta + \int_{\bar{\Theta}} (\ell^{\star} - \ell(\theta))\, p(\theta|D_N)\,d\theta. \qquad (24)
\]
Using Eq. (23), we now re-write each integral in the summation term of Eq. (24) as:
\[
\begin{aligned}
\int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta))\, p(\theta|D_N)\,d\theta
&= \frac{\int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta)) \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}, \\
&= \frac{\int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta)) \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta} \cdot \frac{\int_{\Theta^{\star}_{i}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta^{\star}_{i}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}, \\
&= \frac{\int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta)) \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta^{\star}_{i}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta} \cdot \frac{\int_{\Theta^{\star}_{i}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}, \\
&= \frac{\int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta)) \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta^{\star}_{i}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta} \cdot P(\Theta^{\star}_{i}|D_N), \\
&\le \frac{\int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta)) \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta^{\star}_{i}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}, \qquad (25)
\end{aligned}
\]
where we have used \(0 \le P(\Theta^{\star}_{i}|D_N) \le 1\) from Kolmogorov's axioms to bound the final line.

For the last term in Eq. (24), we note that \(\ell^{\star} - \ell(\theta)\) is bounded \(P_{\Theta}\)-almost surely from Assumption 1, hence there exists some \(\ell^{\dagger} < \infty\) such that:
\[
\int_{\bar{\Theta}} (\ell^{\star} - \ell(\theta))\, p(\theta|D_N)\,d\theta \le \int_{\bar{\Theta}} \ell^{\dagger}\, p(\theta|D_N)\,d\theta = \ell^{\dagger} P(\bar{\Theta}|D_N). \qquad (26)
\]
Using Ineqs. (25) and (26), we bound Eq. (24) as:
\[
\mathbb{E}_{\theta \sim P_{\Theta}(D_N)}[\ell^{\star} - \ell(\theta)] \le \sum_{i=1}^{K} \frac{\int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta)) \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta^{\star}_{i}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta} + \ell^{\dagger} P(\bar{\Theta}|D_N),
\]
and hence the PIL can be bounded as:
\[
I^{\pi}_{N} \le \sum_{i=1}^{K} \frac{\int_{\Theta^{\star}_{i}} (\ell^{\star} - \ell(\theta)) \exp(N\ell_N(\theta))\, p(\theta)\,d\theta}{\int_{\Theta^{\star}_{i}} \exp(N\ell_N(\theta))\, p(\theta)\,d\theta} + \ell^{\dagger} P(\bar{\Theta}|D_N).
\]
Applying Lemma 3 and Lemma 4 under Assumption 1 yields:
\[
I^{\pi}_{N} = \sum_{i=1}^{K} O\!\left(\frac{d - g^{\star\top}_{i,N} H^{\star-1}_{i} g^{\star}_{i,N}}{N}\right) + \ell^{\dagger} O(\exp(-N)) = O\!\left(\frac{d - \sum_{i=1}^{K} g^{\star\top}_{i,N} H^{\star-1}_{i} g^{\star}_{i,N}}{N}\right), \qquad (27)
\]
almost surely. As \(f(x) := 2R_{\max} \cdot \sqrt{1 - \exp\!\left(\frac{-x}{1-\gamma}\right)}\) is monotonic in \(x\) and \(\frac{d - \sum_{i=1}^{K} g^{\star\top}_{i,N} H^{\star-1}_{i} g^{\star}_{i,N}}{N} \ge 0\), Eq. (27) implies there exists some positive \(0 < C < \infty\) such that:
\[
\mathrm{Regret}(\mathcal{M}^{\star}, D_N) \le 2R_{\max} \cdot \sqrt{1 - \exp\!\left(\frac{-C\left(d - \sum_{i=1}^{K} g^{\star\top}_{i,N} H^{\star-1}_{i} g^{\star}_{i,N}\right)}{(1-\gamma)N}\right)},
\]
almost surely for large enough \(N\). Under Assumption 1, \(g^{\star}_{i,N} \xrightarrow{d} \mathcal{N}(0, \Sigma^{g}_{i})\). As \(f(x)\) is also a bounded, continuous and concave function, we can apply the Portmanteau Theorem (see for example Bass [8, Chapter 21.7]) followed by Jensen's inequality to yield:
\[
\begin{aligned}
\mathbb{E}_{D_N \sim P^{\star}_{\mathrm{Data}}}\!\left[\mathrm{Regret}(\mathcal{M}^{\star}, D_N)\right]
&\le 2R_{\max} \cdot \mathbb{E}_{g_i \sim \mathcal{N}(0, \Sigma^{g}_{i})}\!\left[\sqrt{1 - \exp\!\left(\frac{-C\left(d - \sum_{i=1}^{K} g_i^{\top} H^{\star-1}_{i} g_i\right)}{(1-\gamma)N}\right)}\right], \\
&\le 2R_{\max} \cdot \sqrt{1 - \exp\!\left(\frac{-C\left(d - \sum_{i=1}^{K} \mathbb{E}_{g_i \sim \mathcal{N}(0, \Sigma^{g}_{i})}\!\left[g_i^{\top} H^{\star-1}_{i} g_i\right]\right)}{(1-\gamma)N}\right)}, \\
&= 2R_{\max} \cdot \sqrt{1 - \exp\!\left(\frac{-C\left(d - \sum_{i=1}^{K} \operatorname{Tr}\!\left(\Sigma^{g}_{i} H^{\star-1}_{i}\right)\right)}{(1-\gamma)N}\right)}.
\end{aligned}
\]
Now, examining the Hessian:
\[
\begin{aligned}
H(\theta) &= \nabla^{2}_{\theta}\, \mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\log p(r,s'|s,a,\theta)\right] = \nabla_{\theta}\, \mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\nabla_{\theta} \log p(r,s'|s,a,\theta)\right], \\
&= \nabla_{\theta}\, \mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\frac{\nabla_{\theta}\, p(r,s'|s,a,\theta)}{p(r,s'|s,a,\theta)}\right], \\
&= \mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\nabla_{\theta}\, \frac{\nabla_{\theta}\, p(r,s'|s,a,\theta)}{p(r,s'|s,a,\theta)}\right], \\
&= \mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\frac{\nabla^{2}_{\theta}\, p(r,s'|s,a,\theta)}{p(r,s'|s,a,\theta)}\right] - \mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\frac{\nabla_{\theta}\, p(r,s'|s,a,\theta)}{p(r,s'|s,a,\theta)} \frac{\nabla_{\theta}\, p(r,s'|s,a,\theta)^{\top}}{p(r,s'|s,a,\theta)}\right]. \qquad (28)
\end{aligned}
\]
Hence at \(\theta = \theta^{\star}_{i}\), the first term of Eq. (28) is:
\[
\begin{aligned}
\mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\frac{\nabla^{2}_{\theta}\, p(r,s'|s,a,\theta)\big|_{\theta=\theta^{\star}_{i}}}{p^{\star}(r,s'|s,a)}\right]
&= \mathbb{E}_{s,a \sim \rho^{\star}_{\pi}}\!\left[\int_{\mathcal{R} \times \mathcal{S}} \nabla^{2}_{\theta}\, p(r,s'|s,a,\theta)\big|_{\theta=\theta^{\star}_{i}}\, d(r,s')\right], \\
&= \mathbb{E}_{s,a \sim \rho^{\star}_{\pi}}\!\left[\nabla^{2}_{\theta} \int_{\mathcal{R} \times \mathcal{S}} p(r,s'|s,a,\theta)\, d(r,s')\Big|_{\theta=\theta^{\star}_{i}}\right], \\
&= \nabla^{2}_{\theta}\, 1\big|_{\theta=\theta^{\star}_{i}} = 0,
\end{aligned}
\]
hence:
\[
\begin{aligned}
H(\theta^{\star}_{i}) &= 0 - \mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\frac{\nabla_{\theta}\, p(r,s'|s,a,\theta^{\star}_{i})}{p(r,s'|s,a,\theta^{\star}_{i})} \frac{\nabla_{\theta}\, p(r,s'|s,a,\theta^{\star}_{i})^{\top}}{p(r,s'|s,a,\theta^{\star}_{i})}\right], \\
&= -\mathbb{E}_{s,a \sim \rho^{\star}_{\pi},\, r,s' \sim P^{\star}_{R,S}(s,a)}\!\left[\nabla_{\theta} \log p(r,s'|s,a,\theta^{\star}_{i})\, \nabla_{\theta} \log p(r,s'|s,a,\theta^{\star}_{i})^{\top}\right], \\
&= -\Sigma^{g}_{i}.
\end{aligned}
\]
Using this result, each \(\operatorname{Tr}\!\left(\Sigma^{g}_{i} H^{\star-1}_{i}\right) = \operatorname{Tr}(-I) = -d\). Substituting yields:
\[
\mathbb{E}_{D_N \sim P^{\star}_{\mathrm{Data}}}\!\left[\mathrm{Regret}(\mathcal{M}^{\star}, D_N)\right] \le 2R_{\max} \cdot \sqrt{1 - \exp\!\left(\frac{-C(K+1)d}{(1-\gamma)N}\right)} \le 2R_{\max} \cdot \sqrt{1 - \exp\!\left(\frac{-C'd}{(1-\gamma)N}\right)},
\]
for some \(0 < C' < \infty\) and sufficiently large \(N\), as
required.

E Further Results

E.1 SOReL

Figure 6: SOReL BAMDP hyperparameter sweeps (tuning set \(\phi_{III}\)) for 200,000 randomly sampled transitions of the brax datasets. The plots correspond to \(\phi_{III} \leftarrow \arg\min_{\phi_{III}} \mathrm{RegretMetric}(\phi_I, \phi_{II}, \phi_{III}, D_N)\) in Algorithm 1. SOReL selects the BAMDP hyperparameters that yield the lowest approximate regret (green). For Walker2d, the high approximate regret for all hyperparameter combinations (\(R_N > 0.6\)) suggests that in Algorithm 1 \(R_N > R_{\mathrm{Deploy}}\), i.e. the deployment condition has not been satisfied: the practitioner should change the model or approximate inference method to obtain a lower PIL before re-tuning the BAMDP hyperparameters. Alternatively, the practitioner could consider collecting more data, though the high approximate regret highlights the associated risk. While we report the true regret to validate the approach, in practice only the policy with the lowest approximate regret would be deployed.

Figure 7: Simplified version of SOReL on brax-halfcheetah-full-replay. The plot showing only the ensemble median as the approximate regret is given in the main body of the paper. Shaded purple shows the approximate regret range across all the metrics (with varying degrees of conservatism). Shaded red indicates where \(\mathcal{E}(D_N, \mathcal{M}^{\star}) \not\approx \mathcal{V}(D_N)\) (for a threshold of 0.25), and hence the approximate regret may be unreliable. Mean and standard deviation given over 3 seeds.

The only environment for which the world model is sufficiently accurate relative to its discount factor, resulting in a non-trivial upper bound, is pendulum-v1 (Fig. 8b). This is due to two factors: (i) the other environments use a higher discount factor (0.998 vs. 0.995), and (ii) learning accurate world models with low MSE is inherently more challenging in high-dimensional settings. We note that this is a supervised-learning problem, and orthogonal to our line of work.
(a) gymnax-cartpole-random-v1 (b) gymnax-pendulum-random-v1 (c) brax-hopper-full-replay (d) brax-walker2d-full-replay

Figure 8: Simplified version of SOReL applied to various tasks. For the gymnax environments, \(N < 700\) is shaded red because the PIL is undefined in this region (validation set < batch size): without being able to ensure \(\mathcal{E}(D_N, \mathcal{M}^{\star}) \lessapprox \mathcal{V}(D_N)\), the practitioner would have been unable to determine whether to trust the approximate regret. Mean and standard deviation given over 3 seeds.

(a) gymnax-cartpole-random-v1 (b) gymnax-pendulum-random-v1 (c) brax-hopper-full-replay (d) brax-walker2d-full-replay

Figure 9: Simplified version of SOReL applied to various tasks. Shaded purple shows the approximate regret range across all the metrics (with varying degrees of conservatism). For the gymnax environments, \(N < 700\) is shaded red because the PIL is undefined in this region (validation set < batch size): without being able to ensure \(\mathcal{E}(D_N, \mathcal{M}^{\star}) \lessapprox \mathcal{V}(D_N)\), the practitioner would have been unable to determine whether to trust the approximate regret. Mean and standard deviation given over 3 seeds.

E.2 TOReL

Task                                 Regret   IQL     ReBRAC   MOPO    MOReL
brax-halfcheetah-full-replay         True     0.203   0.312    0.403   0.751
                                     Oracle   0.186   0.089    0.133   0.641
                                     TOReL    0.186   0.089    0.133   0.648
brax-hopper-full-replay              True     0.550   0.534    0.558   0.575
                                     Oracle   0.377   0.070    0.082   0.243
                                     TOReL    0.397   0.070    0.086   0.282
brax-walker-full-replay              True     0.374   0.357    0.342   0.625
                                     Oracle   0.304   0.000    0.243   0.415
                                     TOReL    0.331   0.000    0.384   0.554
d4rl-halfcheetah-medium-expert-v2    True     0.400   0.134    0.331   0.418
                                     Oracle   0.469   0.000    0.116   0.187
                                     TOReL    0.459   0.036    0.339   0.227
d4rl-hopper-medium-v2                True     0.411   0.467    0.848   0.595
                                     Oracle   0.375   0.053    0.681   0.183
                                     TOReL    0.428   0.083    0.681   0.327
d4rl-walker2d-medium-replay-v2       True     0.450   0.625    0.952   1.000
                                     Oracle   0.339   0.204    0.724   1.000
                                     TOReL    0.358   0.206    0.724   1.000

Table 2: True, oracle and TOReL regrets across tasks. Bold indicates where TOReL identifies the oracle hyperparameters, while italic indicates where TOReL identifies hyperparameters with a regret lower than the true regret. ReBRAC+TOReL outperforms all algorithms on every dataset.

Task                                        IQL     ReBRAC   MOPO    MOReL
brax-halfcheetah-full-replay        r       0.29    0.92     0.98    0.93
                                    p       0.448   0.000    0.000   0.000
brax-hopper-full-replay             r       0.18    0.98     1.00    0.98
                                    p       0.635   0.000    0.000   0.000
brax-walker-full-replay             r       0.32    nan     -0.68    0.99
                                    p       0.406   nan      0.133   0.000
d4rl-halfcheetah-medium-expert-v2   r       0.29    0.90     0.02    0.98
                                    p       0.442   0.000    0.975   0.000
d4rl-hopper-medium-v2               r       0.29    0.98     0.98    0.94
                                    p       0.443   0.000    0.000   0.000
d4rl-walker2d-medium-replay-v2      r       0.65    0.79     1.00   -0.53
                                    p       0.057   0.000    0.000   0.008

Table 3: Pearson correlation (r) and statistical significance (p) between TOReL regret metrics and true regrets for different hyperparameter combinations. For two-thirds of the tasks there is a statistically significant (p < 0.05), strong (|r| > 0.5), positive (r > 0) correlation (bold). Even where no strong positive correlation is observed (possibly due to limited hyperparameter coverage), the TOReL regret is lower than the true regret averaged over those tasks.

Figure 10: Scatter plots to visualise the positive correlation between the TOReL regret metric and the true regret.

F Implementation Details

Our implementations of SOReL and TOReL, along with all of the code to reproduce the experiments, are made publicly available with this work.
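The Pearson correlations reported in Table 3 can be reproduced from raw (regret metric, true regret) pairs with the standard sample estimator; a minimal NumPy version follows (the data points below are made up purely for illustration):

```python
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation between paired observations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

metric = np.array([0.1, 0.3, 0.4, 0.8])   # hypothetical TOReL regret metric
true = np.array([0.2, 0.35, 0.5, 0.9])    # hypothetical true regret
r = pearson_r(metric, true)
assert r > 0.95                            # strongly positively correlated
```

In practice `scipy.stats.pearsonr` returns both r and the p-value used in Table 3.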
F.1 Diverse full-replay datasets

As mentioned in Section 7.1, without a model prior, the offline dataset must include transitions from poor, medium and expert regions of performance. For the brax environments, we collect our own full-replay datasets to ensure that this is the case. We arbitrarily choose the hyperparameters, simply requiring that the agent spends sufficient time in all three regions of performance. The training curves obtained while collecting the offline datasets are given in Figure 11. The gymnax environments are simple enough that collecting a dataset using an ensemble of randomly initialised policies leads to sufficient coverage across all three regions of performance.

Figure 11: Training curves while collecting the brax full-replay offline datasets. We ensure that the agent spends sufficient time in poor, medium and expert regions of performance such that the offline dataset captures diverse transitions.

F.2 World Model and RP Approximate Inference

To ensure compatibility with Unifloral implementations [30], our world model is a variation of the Gaussian world model presented in 5.3, but amended to predict the change in state \(\Delta := s' - s\) rather than the absolute next state \(s'\). We also allow the model to characterise its uncertainty with variance functions \(\sigma^2_{r,\theta}(s,a)\) and \(\sigma^2_{\Delta,\theta}(s,a)\).
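The Gaussian world model described in F.2 amounts to predicting a mean and a log-variance per output dimension and training with the Gaussian negative log-likelihood on state *changes*. A stripped-down NumPy sketch of the loss (our own illustration with hypothetical network outputs, not the Unifloral implementation):

```python
import numpy as np

def gaussian_nll(mean, log_var, target):
    """Negative log-likelihood of target under N(mean, exp(log_var)),
    summed over output dimensions."""
    return 0.5 * np.sum(
        log_var + (target - mean) ** 2 / np.exp(log_var) + np.log(2 * np.pi),
        axis=-1,
    )

# The model regresses the state change Delta = s' - s, not s' itself.
s = np.array([0.0, 1.0])
s_next = np.array([0.2, 0.9])
delta_target = s_next - s

# Hypothetical network outputs for this (s, a): a close mean and a poor mean,
# both with per-dimension log-variance -4.
log_var = np.array([-4.0, -4.0])
nll_good = gaussian_nll(np.array([0.19, -0.11]), log_var, delta_target)
nll_bad = gaussian_nll(np.array([0.0, 0.0]), log_var, delta_target)
assert nll_good < nll_bad   # the closer mean achieves lower NLL
```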
The Gaussian reward and state transition models then have the form:
\[
P_R(s,a,\theta) = \mathcal{N}\!\left(r_{\theta}(s,a),\, \sigma^2_{r,\theta}(s,a)\right), \qquad P_{\Delta}(s,a,\theta) = \mathcal{N}\!\left(\Delta_{\theta}(s,a),\, \sigma^2_{\Delta,\theta}(s,a)\right),
\]
with mean reward function \(r_{\theta}(s,a)\) and mean state transition function \(\Delta_{\theta}(s,a)\), as before. Let \(\bar{r}(s,a,D_N) := \mathbb{E}_{\theta \sim P_{\Theta}(D_N)}[r_{\theta}(s,a)]\) and \(\bar{\Delta}(s,a,D_N) := \mathbb{E}_{\theta \sim P_{\Theta}(D_N)}[\Delta_{\theta}(s,a)]\) denote the Bayesian mean reward and state transition functions, and \(r^{\star}(s,a)\) and \(\Delta^{\star}(s,a)\) denote the true mean reward and state transition functions. We define the normalised mean squared error between the true and Bayesian mean functions as:
\[
\mathcal{E}(D_N, \mathcal{M}^{\star}) := \mathbb{E}_{(s,a) \sim \rho^{\star}_{\pi}}\!\left[\frac{\|\bar{r}(s,a,D_N) - r^{\star}(s,a)\|_2^2}{2\sigma_r^2(D_N)} + \frac{\|\bar{\Delta}(s,a,D_N) - \Delta^{\star}(s,a)\|_2^2}{2\sigma_{\Delta}^2(D_N)}\right],
\]
and the normalised predictive variance using the law of total variance as:
\[
\mathcal{V}(D_N) := \mathbb{E}_{(s,a) \sim \rho^{\star}_{\pi}}\!\left[\mathbb{E}_{\theta \sim P_{\Theta}(D_N)}\!\left[\frac{\|\bar{r}(s,a,D_N) - r_{\theta}(s,a)\|_2^2}{2\sigma_r^2(D_N) + \|\sigma^2_{r,\theta}(s,a)\|_2^2} + \frac{\|\bar{\Delta}(s,a,D_N) - \Delta_{\theta}(s,a)\|_2^2}{2\sigma_{\Delta}^2(D_N) + \|\sigma^2_{\Delta,\theta}(s,a)\|_2^2}\right]\right],
\]
where \(\sigma_r^2(D_N)\) and \(\sigma_{\Delta}^2(D_N)\) denote the variance of the reward and change in state over the offline dataset.

When rolling out sequences of trajectories on which to train our (Bayes-Optimal) policy, we uniformly sample a model from the ensemble of elite models and then sample the transition from the corresponding Gaussian output distribution. Our ensemble consists of multilayer perceptrons (MLPs) with ReLU activation, which we train using the negative log-likelihood loss derived in Appendix C.1. Training the models in parallel allows us to simultaneously optimise maximum and minimum (log) variance parameters for each dimension across the model ensemble, which we use to soft-clamp the (log) variances output by the individual models. This prevents any individual model becoming overly confident or too uncertain in one dimension. All models in our ensemble have identical structure, but are initialised differently using LeCun [43] initialisation. The maximum and minimum log-variance terms are initialised at constants. The exact loss function and ensemble dynamics model are the same as the one implemented by Jackson et al. [30], but we use an Adam optimiser [35] with a cosine learning rate schedule rather than a constant learning rate. A percentage of the available offline dataset is used as a validation set to calculate the PIL. At the end of training, only a subset of elite models is retained, based on their validation MSE. Although the current implementation uses hard-coded reset and termination conditions during model rollouts, the dynamics model could naturally be extended to learn reset and termination heads. When sampling transitions, we conservatively clip the rewards to remain within the support of the offline dataset distribution.

Hyperparameter            cartpole   pendulum   halfcheetah   hopper    walker2d
Num. layers               3          3          3             3         3
Layer size                200        200        200           200       200
Activation                ReLU       ReLU       ReLU          ReLU      ReLU
Num. ensemble models      7          7          7             10        10
Num. elite models         5          5          5             8         8
Log var. diff. coeff.     0.01       0.01       0.01          0.01      0.01
Batch size                64         64         256           256       256
Num. epochs               400        400        400           400       400
Learning rate             0.001      0.001      0.001         0.001     0.001
Learning rate schedule    cosine     cosine     cosine        cosine    cosine
Final learning rate %     10         10         10            10        10
Weight decay              2.5e-05    2.5e-05    2.5e-05       2.5e-05   2.5e-05
Validation split          0.1        0.1        0.1           0.1       0.1

Table 4: World model ensemble dynamics hyperparameters (columns: gymnax-cartpole-random-v1, gymnax-pendulum-random-v1, brax-halfcheetah-full-replay, brax-hopper-full-replay, brax-walker2d-full-replay).
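The soft-clamping of per-dimension log-variances between learned maximum and minimum parameters is commonly implemented with a pair of softplus squashes (this is the usual ensemble-dynamics trick; the exact parameterisation in the released code may differ):

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)   # log(1 + e^x), numerically stable

def soft_clamp_log_var(log_var, min_log_var, max_log_var):
    """Smoothly restrict log_var to roughly (min_log_var, max_log_var).
    The bounds are themselves trainable parameters, discouraging any model
    from becoming over-confident (variance too small) or vacuous (too large)."""
    log_var = max_log_var - softplus(max_log_var - log_var)
    log_var = min_log_var + softplus(log_var - min_log_var)
    return log_var

lv = soft_clamp_log_var(np.array([-100.0, 0.0, 100.0]), -10.0, 0.5)
assert np.all(lv >= -10.0) and np.all(lv <= 0.5 + 1e-3)
```

Because both squashes are differentiable, gradients still flow to the raw log-variance head and to the bound parameters themselves.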
F.3 BAMDP Solver

We use the RNN-PPO implementation of Lu et al. [46], which we amend to be compatible with continuous action spaces. We sweep over the hyperparameters given in Table 5.

Hyperparameter              Value / Sweep Values
Learning rate               [0.0001, 0.0003]
Anneal learning rate        True
Number of environments      [4, 64, 128, 256, 512]
Steps per environment       [32, 64, 128]
Total timesteps             Set to 500,000, 1,000,000 or 50,000,000
Update epochs               [2, 4, 8]
Number of minibatches       [2, 4, 8, 16]
Discount factor (γ)         [0.99, 0.995, 0.998]
GAE lambda                  [0.8, 0.9, 0.95]
Clip ε                      [0.2, 0.3]
Entropy coefficient         [0.000, 0.001, 0.010]
Value function coefficient  0.5
Max gradient norm           [0.5, 1.0]
Layer size                  256
Activation function         tanh
RNN size                    Set to 64, 128 or 256
Burn-in percentage          25

Table 5: RNN-PPO hyperparameters swept over.

Hyperparameter              cartpole   pendulum    halfcheetah   hopper       walker2d
Learning rate               0.0003     0.0003      0.0003        0.0003       0.0003
Anneal learning rate        True       True        True          True         True
Number of environments      4          128         512           512          512
Steps per environment       128        64          64            32           64
Total timesteps             500,000    1,000,000   50,000,000    50,000,000   50,000,000
Update epochs               4          8           8             2            4
Number of minibatches       4          16          16            8            8
Gamma                       0.99       0.99        0.99          0.998        0.995
GAE lambda                  0.95       0.95        0.95          0.8          0.95
Clip ε                      0.2        0.2         0.2           0.3          0.2
Entropy coefficient         0.01       0.003       0.003         0.001        0.001
Value function coefficient  0.5        0.5         0.5           0.5          0.5
Max gradient norm           0.5        0.5         0.5           1.0          0.5
Layer size                  256        256         256           256          256
Activation function         tanh       tanh        tanh          tanh         tanh
RNN size                    64         128         256           256          256
Burn-in percentage          25         25          25            25           25

Table 6: RNN-PPO hyperparameters for the gymnax and brax environments (columns: gymnax-cartpole-random-v1, gymnax-pendulum-random-v1, brax-halfcheetah-full-replay, brax-hopper-full-replay, brax-walker2d-full-replay).
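Both the world-model trainer (Table 4) and RNN-PPO anneal their learning rates; the cosine schedule with a "final learning rate %" can be sketched as follows (our own reading of those table rows, not the released code, which may anneal differently for PPO):

```python
import numpy as np

def cosine_lr(step, total_steps, base_lr=0.001, final_pct=10.0):
    """Cosine decay from base_lr down to final_pct percent of base_lr
    over total_steps optimisation steps."""
    final_lr = base_lr * final_pct / 100.0
    cos = 0.5 * (1.0 + np.cos(np.pi * step / total_steps))
    return final_lr + (base_lr - final_lr) * cos

assert np.isclose(cosine_lr(0, 400), 0.001)     # starts at the base LR
assert np.isclose(cosine_lr(400, 400), 0.0001)  # ends at 10% of the base LR
```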
For computational reasons, we sweep over the hyperparameters of each task once, for a fixed dataset size (1,000 datapoints for the gymnax tasks and 200,000 datapoints for the brax tasks) and choose the hyperparameters corresponding to the lowest approximate regret, which we then use to train the policy for all other dataset sizes. Ideally, we would sweep over all hyperparameters for each dataset size.

F.4 ORL Implementations

We use Jackson et al. [30]'s implementations of the ORL algorithms. We use their default hyperparameters and sweep over their suggested hyperparameters, which we summarise in Table 7.

Algorithm   Hyperparameter                Value / Sweep Values
Generic     Discount factor γ             0.99
            Polyak averaging coefficient  0.005
IQL         Learning rate                 0.0003
            Batch size                    256
            Beta                          [0.5, 3.0, 10.0]
            τ (expectile)                 [0.5, 0.7, 0.9]
            Advantage clip                100.0
ReBRAC      Learning rate                 0.001
            Batch size                    1024
            Critic BC coefficient         [0, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1]
            Actor BC coefficient          [0.0005, 0.001, 0.002, 0.003, 0.03, 0.1, 0.3, 1.0]
            Critic layer norm             true
            Actor layer norm              false
            Observation normalization     false
            Noise clip                    0.5
            Policy noise                  0.2
            Num critic updates per step   2
MOPO        Learning rate                 0.0001
            Batch size                    256
            Model retain epochs           5
            Number of critics             10
            Rollout batch size            50000
            Rollout interval              1000
            Rollout length                [1, 3, 5]
            Dataset sample ratio          0.05
            Step penalty coefficient      [1.0, 5.0]
MOReL       Learning rate                 0.0001
            Batch size                    256
            Model retain epochs           5
            Number of critics             10
            Rollout batch size            50000
            Rollout interval              1000
            Rollout length                5
            Dataset sample ratio          0.01
            Threshold coefficient         [0, 5, 10, 15, 20, 25]
            Termination penalty offset    [-30, -50, -100, -200]

Table 7: Hyperparameters and sweep ranges for IQL, ReBRAC, MOPO, and MOReL.

For the sample efficiency experiments in Fig. 5, we use the default hyperparameters of the UCB bandit-based hyperparameter-tuning algorithm. We rescale the regret into a score to be maximised, normalised between 0 and 100 (100·(1−regret)), enabling fair comparison and compatibility with the algorithm.

F.5 Regret Normalisation

We normalise the regret using the (known) minimum and maximum returns (\(R_{\min}\) and \(R_{\max}\)) that an online-learnt policy would achieve in the true environment. In the absence of access to these values, we suggest estimating \(R_{\min}\) or \(R_{\max}\) as \(\frac{r_{\min}}{1-\gamma}\) and \(\frac{r_{\max}}{1-\gamma}\) respectively, where \(r_{\min}\) and \(r_{\max}\) denote the 2.5th and 97.5th percentiles of episode rewards in the offline dataset. These thresholds are suggested to avoid unrealistic assumptions, such as that the best possible policy consistently receives the 100th percentile reward at every time step. Such unrealistic assumptions may significantly distort results, especially in reward distributions with heavy tails, where an inflated \(R_{\max} - R_{\min}\) would compress the true and expected regrets.

Dataset                              r_min   r_max   P2.5    P97.5
gymnax-cartpole-random-v1            0       1       1       1
gymnax-pendulum-random-v1            -13.3   -0.2    -13.2   -0.16
brax-halfcheetah-full-replay         -0.50   3.50    -0.84   3.44
brax-hopper-full-replay              0.00    3.50    1.08    4.34
brax-walker2d-full-replay            0.00    3.50    0.01    5.65
d4rl-halfcheetah-medium-expert-v2    -0.28   12.14   1.99    12.39
d4rl-hopper-medium-v2                -0.20   3.23    1.25    4.73
d4rl-walker2d-medium-replay-v2       0.00    4.59    -0.19   4.92

Table 8: Columns r_min and r_max: for the gymnax and brax tasks, we define r_min and r_max using approximate known minimum and maximum returns [31], divided by the episode length; for the D4RL tasks, we divide the given D4RL minimum and maximum reference scores by the episode length. Columns P2.5 and P97.5: the suggested normalisation values if expert and random scores were unknown, determined from the 2.5th and 97.5th percentiles of the offline dataset. Based on the above datasets, for all datasets apart from cartpole-v1, normalisation using the percentile approximation would lead to a more conservative approximate regret. We note that as long as we use the same normalisation constants for both the approximate and true regrets, the absolute value of the normalisation constants is arbitrary. cartpole-v1 is an exception, as the reward is constant at each step (episode returns vary only due to early termination).

We calculate the infinite horizon discounted return from the finite horizon discounted return as follows:
\[
R_{\mathrm{inf}} = R_{\mathrm{fin}} \cdot \left(1 + \frac{\gamma^{s}}{1 - \gamma^{s}}\right),
\]
where \(s\) represents the maximum number of episode steps. The normalised regret is then calculated from the infinite horizon discounted return using:
\[
\mathrm{Regret} = \frac{R_{\max} - R_{\mathrm{inf}}}{R_{\max} - R_{\min}}.
\]

F.6 Experiment Compute Resources

All of our experiments were run within a week using four L40S GPUs.
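The two-step computation in F.5 — extend the finite-horizon return to an infinite-horizon one, then min-max normalise, with a percentile fallback for unknown reference scores — can be sketched as follows (all variable names and the example numbers are ours):

```python
import numpy as np

def infinite_horizon_return(r_fin, gamma, s):
    """Extend a finite-horizon discounted return over s steps to an infinite
    horizon, assuming the episode's reward pattern repeats."""
    return r_fin * (1.0 + gamma**s / (1.0 - gamma**s))

def normalised_regret(r_inf, r_max, r_min):
    return (r_max - r_inf) / (r_max - r_min)

# Percentile fallback when expert/random returns are unknown: estimate the
# per-step reward bounds from the 2.5th / 97.5th percentiles of the dataset.
rewards = np.random.default_rng(2).normal(1.0, 0.5, size=10_000)
r_min_hat, r_max_hat = np.percentile(rewards, [2.5, 97.5])

gamma, s = 0.99, 1000
R_min, R_max = r_min_hat / (1 - gamma), r_max_hat / (1 - gamma)
R_inf = infinite_horizon_return(100.0, gamma, s)
regret = normalised_regret(R_inf, R_max, R_min)
assert 0.0 <= regret <= 1.0
```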
NFR: Neural Feature-Guided Non-Rigid Shape Registration

Puhua Jiang1,2†, Zhangquan Chen1†, Mingze Sun1†, Ruqi Huang1*

1* Tsinghua Shenzhen International Graduate School, Shenzhen, China.
2 Pengcheng Laboratory, Shenzhen, China.

* Corresponding author(s). E-mail(s): ruqihuang@sz.tsinghua.edu.cn;
Contributing authors: jph21@mails.tsinghua.edu.cn; czq23@mails.tsinghua.edu.cn; smz22@mails.tsinghua.edu.cn;
† These authors contributed equally to this work.

Abstract

In this paper, we propose a novel learning-based framework for 3D shape registration, which overcomes the challenges of significant non-rigid deformation and partiality among input shapes and, remarkably, requires no correspondence annotation during training. Our key insight is to incorporate neural features learned by deep learning-based shape matching networks into an iterative, geometric shape registration pipeline. The advantage of our approach is two-fold: on one hand, neural features provide more accurate and semantically meaningful correspondence estimation than spatial features (e.g., coordinates), which is critical in the presence of large non-rigid deformations; on the other hand, the correspondences are dynamically updated according to the intermediate registrations and filtered by a consistency prior, which prominently robustifies the overall pipeline. Empirical results show that, with as few as dozens of training shapes of limited variability, our pipeline not only achieves state-of-the-art results on several benchmarks of non-rigid point cloud matching and partial shape matching across varying settings, but also delivers high-quality correspondences between unseen challenging shape pairs that undergo both significant extrinsic and intrinsic deformations, in which case neither traditional registration methods nor intrinsic methods work.
Keywords: Shape Registration, Deep Functional Maps, Non-rigid Shape Matching

arXiv:2505.22445v1 [cs.CV] 28 May 2025

1 Introduction

Estimating dense correspondences between 3D human shapes is pivotal for a multitude of human-centric computer vision applications, including 3D reconstruction [1], 3D pose estimation [2], and animation [3], among others. In this paper, we tackle the challenging task of estimating correspondences between unstructured point clouds sampled from surfaces undergoing significant non-rigid deformations, irrespective of whether the point clouds are full or partial. In fact, the shape matching task has garnered increasing attention within the community [4–8]. These methods predominantly adopt a data-driven approach to learn embedding schemes that project point clouds into high-dimensional spaces. By elevating the point clouds to these high-dimensional spaces, non-rigid deformations are more effectively characterized than in the ambient space, R3. The optimal transformations, and consequently the correspondences, are then estimated within these embedded spaces. Despite the great progress, such approaches are hindered by several limitations: 1) their performance on previously unseen shapes is largely unpredictable; 2) the learned high-dimensional embeddings lack intuitive geometric interpretation, making the resulting correspondences challenging to evaluate and analyze in the absence of ground-truth labels; 3) there is still large room for improvement in estimating dense correspondences between human shapes with partiality. On the other hand, shape registration itself does not serve as a direct rescue for our problem of interest, as previous approaches typically rely on the premise that the non-rigid deformation can be approximated by a set of local, small-to-moderate rigid deformations, which severely hinders their performance in the presence of large deformations (see Fig. 1) and heterogeneity.
Motivated by the above observations, we propose a neural feature-guided framework (NFR) that synergistically combines the strengths of classic shape registration and learning-based embedding techniques. In a nutshell, we leverage the estimated correspondence from the latter to guide shape deformation via the former iteratively. Our key insight is to enforce similarity between the deformed source and target in both the ambient space and the learned high-dimensional space. Intuitively, in the presence of large deformations, the correspondences learned in the high-dimensional space are more reliable than those based solely on proximity in the ambient space. Conversely, accurately deforming the source mesh to align with the target point cloud enhances spatial similarity in the ambient space, which in turn increases similarity in the embedded space, leading to more precise correspondences. Ultimately, this approach enables us to compute correspondences between raw point clouds using a shared source mesh as a central hub. As demonstrated in Sec. 4, our method allows for the selection of the source shape during inference, which can be independent of the training data.

While conceptually straightforward, we aim to enhance the performance, robustness, and practical utility of our pipeline through several tailored designs. First and foremost, the key component of our pipeline is an embedding scheme for accurately and efficiently estimating correspondences between the deformed source shape and the target point cloud. Specifically, to leverage the advantages of Deep Functional Maps (DFM) [9–11], the current state-of-the-art approach for matching triangular meshes, we pre-train an unsupervised DFM [11] on meshes as a teacher network. We then train a point-based feature extractor (i.e., an embedding scheme for points) as a student network on the corresponding vertex sets using natural self-supervision.
Unlike the approach in [8], which heavily relies on Diffusion- Net [12] to extract intricate structural details (e.g., Laplace-Beltrami operators) from both mesh and point cloud inputs, the teacher-student paradigm allows us to utilize a more streamlined back- bone—DGCNN [13]. Secondly, in contrast to prior works that either implicitly [14, 15] or explic- itly [7, 8, 16] require rigidly aligned shapes for input/initialization, or demand dense correspon- dence labels [6], we train an orientation regressor on a large set of synthetic, rigidly aligned shapes toautomatically pre-process input point clouds in arbitrary orientations. Lastly, we introduce several innovative components in our pipeline, includ- ing a dynamic correspondence updating scheme, bijectivity-based correspondence filtering, and a two-stage registration process. Morever, we supplement the above framework for estimating correspondences between partial point clouds and complete source shapes . In par- ticular, we assume to be given a set of complete meshes as training data. During training, we ran- domly pick a pair of meshes S,T, and sample on Tfor a partial point cloud Tp, without perserving any mesh structure. Recall that, in DPFM [17], one first constructs a graph-like structure on Tp and then computes spectral embedding on it, which is both slow and unstable. In contrast to that, we consider to assign the spectral embed- ding of Tto its subset Tpvia the natural inclusion map, forming a novel spatially truncated spectral embdding with
https://arxiv.org/abs/2505.22445v1
respect to Tp and T. As shown in Prop. 1 in Sec. 3, we prove that, under certain conditions, the transformation between the spectral embedding of S and that of Tp can always be set as the functional map from S to T, regardless of how Tp is generated. The above procedure leads to a simple yet effective training scheme. Namely, we can use an arbitrary point-based feature extractor to obtain per-point features on both Tp and S, compute the point-wise map T: Tp → S via proximities

Fig. 1 Shape registration methods like NDP [14] and AMM [15] estimate intermediate correspondences via extrinsic proximity, and therefore suffer from large intrinsic deformations. In contrast, our method successfully deforms a FAUST template (the left-most mesh) to another individual of a different pose (the right-most full or partial point cloud).

in the feature space, and encode T into a transformation between the spectral embeddings of Tp and S. Finally, the functional map from S to T can be used as the supervision signal on the obtained transformation. One last missing piece of the above pipeline is the source of the functional maps among the full shapes. In fact, these can either be given a priori (e.g., when we train on a set of synthetic shapes [18]) or learned (e.g., with the SOTA DFM [11]). Last but not least, to further improve performance, we incorporate our learned point feature extractor into a neural-enhanced shape registration framework, as described above, which applies a geometric optimization during inference.
Overall, our framework enjoys the following advantages: 1) Due to its hybrid nature, our framework can effectively manage point clouds that undergo significant deformation and/or exhibit heterogeneity; 2) Our feature extractor, being self-supervised by a deep functional maps network, eliminates the need for correspondence annotation throughout the process; 3) The core operation is conducted in the ambient space, allowing for more efficient, accurate, and straightforward analysis of registration/mapping results compared to purely learning-based approaches; 4) Utilizing a data-driven orientation regressor, we have developed an automatic pipeline for estimating correspondences between deformable point clouds; 5) We introduce a novel self-supervised training scheme for estimating correspondences between partial point clouds and complete source shapes, along with a robust partial-to-full shape registration pipeline.

As illustrated in Fig. 2, our framework, trained on the SCAPE [19] dataset, demonstrates excellent generalization capabilities and significantly outperforms competing methods on the challenging SHREC07-H dataset, which exhibits substantial variability in both extrinsic orientation and intrinsic geometry. Additionally, Fig. 3(d) shows that our feature extractor already provides robust and reasonable correspondences across different partiality patterns. The extra refinement step further addresses local mismatches, resulting in satisfying outcomes (Fig. 3(e)). We further emphasize the importance of generalizability in partial shape matching. Unlike full shapes, one can theoretically generate an infinite number of partial subsets from a given surface, with varying degrees of connectedness, partiality patterns, and sizes. As shown in columns (a-c) of Fig. 3, recent advances such as DPFM [17] and HCLV2S [20] fail to generalize to unseen partiality patterns.
In contrast, our pipeline exhibits strong generalizability, even in practical partial human shape-matching
tasks.

2 Related Works

Non-rigid Shape Registration
Non-rigidly aligning shapes presents greater complexity compared to the rigid counterpart due to the intricate nature of deformation models. In general, axiomatic approaches [21] assume the deformation of interest can be approximated by local, small-to-moderate, rigid deformations, and therefore suffer from large intrinsic deformations. Recent advances, including [15, 22, 23], have primarily aimed at enhancing efficiency and robustness against noise and outliers. Concurrently, there is a growing interest in integrating deep learning techniques into non-rigid shape registration, as seen in works like [24–26]. Among them, NDP [14] is similar to our method in the sense that its non-learned version follows an optimization-based approach built on a neural encoding. However, its registration process is solely driven by proximity in the ambient space, which compromises its effectiveness for our specific problem. Perhaps the most relevant approach to our method along this line is TransMatch [27], which utilizes a supervised learning framework to train a transformer for predicting 3D flows directly from input point clouds. As demonstrated in Fig. 2 and discussed further

Fig. 2 We estimate correspondences between heterogeneous shapes from SHREC'07 with four learning-based methods, all trained on the SCAPE r dataset. Our method outperforms the competing methods by a large margin. Remarkably, our method manages to deform a SCAPE template shape to heterogeneous shapes, as indicated by the blue shapes.

Fig. 3 Matching different partial shapes to a common template shape (left-most). (a) DPFM [17] trained on the Ours-S&F dataset; (b) DPFM [17] trained on CUT [17]; (c) HCLV2S [20] trained on the large-scale SURREAL [20] dataset; (d) Ours trained on the Ours-S&F dataset. DPFM [17] and HCLV2S [20] fail to handle unseen partial shapes effectively. Remarkably, DPFM significantly struggles with disconnected shapes.
Our method outperforms the competing methods by a large margin and achieves consistent results across different partiality patterns.

in Sec. 4, TransMatch generalizes poorly to unseen shapes.

Non-rigid Shape Matching
is a long-standing problem in computer vision and graphics. Unlike the rigid counterpart, non-rigidly aligning shapes is more challenging owing to the complexity inherent in deformation models. Generally, axiomatic approaches [21] assume the deformation of interest can be approximated by local, small-to-moderate, rigid deformations, and therefore suffer from large intrinsic deformations. Recent advancements [22, 23, 28] have primarily focused on enhancing the efficiency and robustness of methods in the face of noise and outliers. Simultaneously, there is a growing trend towards integrating deep learning techniques [24–26]. Contrary to the methods mentioned earlier, several recent approaches directly establish correspondences between pairs of point clouds [4, 5, 29]. Generally, these methods involve embedding point clouds into a canonical feature space and then estimating correspondences based on proximity in this space. Since intrinsic information is not explicitly formulated in these methods, they can suffer from significant intrinsic deformations and often generalize poorly to unseen shapes.

Functional Maps
Based on the spectral embedding, i.e., the truncated eigenbasis of the Laplace-Beltrami operator [30] on a shape, functional maps [31] encode point-wise maps into linear transformations between the respective spectral spaces. This approach effectively transforms the problem of finding point-wise correspondences into the simpler task of aligning spectral coefficients. Originating from the foundational work on functional maps, along with a series of follow-ups [32–36], spectral methods have made significant progress in addressing the axiomatic non-rigid shape matching problem. Unlike axiomatic functional maps frameworks, which focus on optimizing correspondences represented in a spectral basis (referred to as functional maps) with hand-crafted features as a prior, DFM [37] adopts an inverse viewpoint: it aims to search for the optimal features such that the induced functional maps satisfy certain structural priors as well as possible. However, because of its heavy dependence on the eigenbasis of Laplace-Beltrami operators, DFM is primarily designed for shapes represented by triangle meshes and can suffer a notable performance drop when applied to point clouds without adaptation [8]. In fact, inspired by the success of DFM, several approaches [7, 8, 38] have been proposed to leverage the intrinsic geometry information carried by meshes in the training of feature extractors tailored for non-structural point clouds. While significant advancements have been made in full shape matching, there remains considerable room for improvement in estimating dense correspondences between shapes with partiality. The functional maps representation [39, 40] has already been applied to partial shapes. However, the partial functional maps (PFM) formulation, as proposed in [39], has its limitations. Building on this foundation, recent learning-based frameworks such as DPFM [17] and SSMSM [8] have shown notable improvements over previous axiomatic approaches. However, both axiomatic and learning-based lines of work typically assume the input to be a connected mesh, with the exception of [8], which relies on graph Laplacian construction [41] in its preprocessing.
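To make the "aligning spectral coefficients" view concrete, here is a minimal numpy sketch of recovering a point-wise map from a functional map by nearest-neighbor search between spectral coefficient rows. The function name and the test data are our own illustration, not taken from any cited codebase.

```python
import numpy as np

def fmap_to_pointmap(C_st, phi_s, phi_t):
    """Convert a functional map C_ST (k x k) into a point-wise map T -> S.

    Rows of phi_t @ C_st approximate Pi_TS @ phi_s, so every point of T is
    matched to the source point with the closest spectral coefficients."""
    aligned_t = phi_t @ C_st                            # (n_T, k)
    # brute-force nearest neighbor between coefficient rows
    d2 = ((aligned_t[:, None, :] - phi_s[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                            # index into S per point of T
```

As a sanity check, when C_ST is the identity and the rows of Φ_T are a permutation of the rows of Φ_S, the recovered map is exactly that permutation.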
Learning-based Human Partial Shape Matching
When matching partial human scans, a popular approach is to perform template matching. Early works, such as [1, 42], explicitly modeled shape deformations and employed extrinsic distance metrics for matching purposes. While some methods [43] utilize deep learning to predict optical flow for guiding registration, these methods are still only effective in cases of minor deformations. More recent methods [44, 45] have shifted towards learning models of shapes: they formulate template matching as an optimization of parameters within a learned latent space. The authors of [20] suggest a different approach, formulating the computation of dense correspondences as the initialization and synchronization of local transformations between the scan and the template model. However, these methods often suffer from poor generalizability to unseen shapes.

3 Methodology

Given a pair of shapes S, T, our goal is to deform S to non-rigidly align with T. We assume that S is represented as a triangle mesh so that we can effectively regularize the deformed shape by preserving local intrinsic geometry. On the other hand, we require no structural information on T and generally assume it to be a point cloud. Our pipeline can also be extended to compute correspondences between two raw point clouds T1, T2. To this end, we fix a template mesh S, perform respective
shape registration between S and the target point clouds, and finally compute the map by composition T_12 = T_S2 ∘ T_1S. Specifically, the non-rigid registration approach developed to tackle the full-2-full problem is named DFR, and the methodology designed for the partial-2-full problem is named Partial-DFR.

3.1 DFR

Our DFR full-2-full registration pipeline is shown in Fig. 4, which consists of three main components: 1) an orientation regressor, A, for extrinsically aligning input shapes, either mesh or point cloud; 2) a point feature extractor, F, trained under a deep functional maps scheme; 3) a registration module that iteratively optimizes for deformations non-rigidly aligning S with T. In particular, it takes the rigidly aligned shapes from 1) as input, and leverages 2) to update correspondences during optimization.

Fig. 4 The schematic illustration of our pipeline. A is a pre-trained orientation regressor for aligning input shapes. Then a pre-trained feature extractor F embeds them into a high-dimensional canonical space. During the iterative optimization procedure of registration, correspondences are dynamically updated according to learned features (Stage-I) and coordinates (Stage-II) of the intermediate shapes. See more details in the text.

Though our pipeline leverages a pre-trained feature extractor as a registration prior, we highlight that neither the source nor the target is necessarily within or close to the respective training set. We provide thorough experimental evidence in Sec. 4 showing the generalizability of our pipeline.

3.1.1 Orientation Regressor

Our first objective is to align input point clouds with arbitrary orientations into a canonical frame. We take a data-driven approach by training an orientation regressor [46] on 5000 synthetic SURREAL shapes from [47], which are implicitly aligned to a canonical frame by the corresponding generative codes.
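The hub-based composition T_12 = T_S2 ∘ T_1S described above reduces to a single indexing step once each registration yields per-point correspondence indices. A minimal sketch (array names are our own, hypothetical):

```python
import numpy as np

def compose_via_hub(t1_to_s, s_to_t2):
    """Compose two correspondence maps through the common template hub S.

    t1_to_s[i] : index of the template vertex matched to point i of T1
    s_to_t2[v] : index of the T2 point matched to template vertex v
    Returns t1_to_t2 with t1_to_t2[i] = s_to_t2[t1_to_s[i]]."""
    return np.asarray(s_to_t2)[np.asarray(t1_to_s)]
```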
3.1.2 Point Feature Extractor

In order to efficiently and accurately estimate correspondences between the deformed source shape and the target point cloud during registration, our next goal is to train a feature extractor for point clouds which is intrinsic-geometry aware. The authors of [8] propose a multi-modal feature extractor based on DiffusionNet [12], which can process point clouds with an extra step of graph Laplacian construction [41]. Though it has demonstrated satisfying performance accommodating the point cloud representation in [8], the explicit graph Laplacian construction is computationally heavy. Hence, we adopt the modified DGCNN proposed in [7] as our backbone, which is lightweight and robust with regard to the sampling density and sizes of point clouds.

Fig. 5 Illustration of the partial spectral embedding construction and partial-to-full functional map processing during training.

In particular, our training follows a teacher-student paradigm. Namely, we first train a deep functional maps (DFM) network, G, on a collection of meshes. Then we train a DFM network F on the corresponding vertex sets, with an extra self-supervision according to the inclusion map between meshes and their vertex sets. In other words, the features produced by F are point-wise aligned with those produced by G.

3.1.3 Total cost function

The total cost function E_total combines the terms with the weighting factors λ_cd, λ_corr, and λ_arap to balance them:

E_total = λ_cd E_cd + λ_corr E_corr + λ_arap E_arap.    (1)

We describe our algorithm for minimizing E_total, which is
shown in Alg. 1.

3.1.4 Two-stage Registration

Finally, we observe that solely depending on learned features to infer correspondences is suboptimal. At the converging point, the deformed source shape is often in the right pose but has deficiencies in shape style. To this end, we perform a second-stage registration based on the coordinates of the deforming source and target. As shown in Sec. 4.2.5, such a design is clearly beneficial.

Algorithm 1: Shape registration pipeline.
Input: Source mesh S = {V, E} and target point cloud T; trained point feature extractor F.
Output: X* converging to a local minimum of E_total; deformed source model {V*, E}; correspondences Π*_ST, Π*_TS between S and T.
1   Initialize deformation graph DG and X^(0) with identity transformations; F_T = F(T); k = 0;
2   while True do
3       Update source vertices V^(k) by Eqn. (9);
4       if k % 100 == 0 and Flag == Stage-I then
5           F_S^(k) = F(V^(k)); Π_ST^(k) = NN(F_S^(k), F_T); Π_TS^(k) = NN(F_T, F_S^(k));
6       if k % 100 == 0 and Flag == Stage-II then
7           Π_ST^(k) = NN(V^(k), T); Π_TS^(k) = NN(T, V^(k));
8       Compute the set of filtered correspondences C_k;
9       X^(k+1) = argmin E_total by Eqn. (14);
10      if converged and Flag == Stage-I then Flag = Stage-II;
11      if converged and Flag == Stage-II then return Π*_ST, Π*_TS, V*;
12      k = k + 1;

3.2 Partial-DFR

3.2.1 Partial Theoretical Analysis

In this section, we further analyze the adaptability of DFR in partial cases. The difference between this setting and DFR is shown in Fig. 5. We let Φ_S, Φ_T ∈ R^{n×k} be the first k eigenvectors of the respective cotangent Laplacians [30], and ∆_S, ∆_T ∈ R^{k×k} be the diagonal matrices storing the corresponding eigenvalues on their diagonals. The functional map, C_ST, from S to T is then defined as:

C_ST = Φ_T† Π_TS Φ_S,    (2)

where † denotes the pseudo-inverse of a matrix. We follow the perspective in [35, 48] and recognize C_ST as a transformation aligning Φ_S and Φ_T.
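Eqn. (2) can be evaluated without ever materializing the 0/1 matrix Π_TS, since multiplying Φ_S by it merely gathers rows. A numpy sketch under our own naming (the index-array convention is an assumption for illustration):

```python
import numpy as np

def pointmap_to_fmap(pi_ts, phi_s, phi_t):
    """Eqn. (2): C_ST = pinv(Phi_T) @ Pi_TS @ Phi_S.

    pi_ts encodes the point-wise map T -> S as an index array:
    pi_ts[i] = index in S of the match of vertex i of T."""
    pulled_back = phi_s[pi_ts, :]                  # rows of Pi_TS @ Phi_S
    return np.linalg.pinv(phi_t) @ pulled_back     # (k, k) functional map
```

The row-gather is equivalent to the explicit matrix product with the one-hot Π_TS, but avoids an n_T × n_S allocation.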
It is pointed out in [35, 48] that

C_ST = argmin_C ‖Φ_T C − Π_TS Φ_S‖²_F.    (3)

In other words, C_ST defined in Eqn. (2) best aligns the spectral embeddings with respect to a given point-wise map Π_TS. In particular, if Π_ST is an isometry between S and T, then the induced functional map C_ST enjoys nice structural properties [31], including 1) orthogonality: C_ST^T C_ST = I_k, where I_k is the k×k identity matrix; 2) commutativity with the Laplacian operators: C_ST ∆_T = ∆_S C_ST. Now, we consider Tp, a set of n_p vertices sampled from T. We denote the natural inclusion map as Π_TpT ∈ R^{n_p×n}: Tp → T, which leads to a composed map Π_TpS = Π_TpT Π_TS: Tp → S. In the following, we focus on the functional map C_STp, which is associated with Π_TpS, namely, the partial-to-full map of interest. If we follow the framework of partial functional maps [39], then we need to first compute the eigenbasis of Tp, which further requires meshing, or at least graph construction, on top of the discrete sampling points. However, in practice, it is generally difficult to guarantee the connectedness of Tp. When Tp contains multiple distant disconnected components, we have to deal with them one by one, resulting in multiple functional maps, which is not only costly but also makes global consistency hard to ensure. In
contrast, we propose to consider the functional maps encoded in the spatially truncated spectral embedding for Tp. Specifically, we set

Φ_Tp = Π_TpT Φ_T.    (4)

Proposition 1. Let S, T be a pair of shapes, each having non-repeating Laplacian eigenvalues which are the same (i.e., ∆_S = ∆_T), and let Π_TS be an isometry between T and S. Φ_S, Φ_T are the corresponding eigenvectors. On the other hand, we let Tp be a sub-sampled set of vertices of T, and Φ_Tp be the spatially truncated embedding defined in Eqn. (4). Then C_ST defined in Eqn. (2) satisfies:

C_ST = argmin_C ‖Φ_Tp C − Π_TpS Φ_S‖²_F.    (5)

Proposition 1 essentially argues that, under certain conditions, the full-to-full functional map C_ST exactly aligns Φ_Tp to Φ_S.

3.2.2 Self-Supervised Partial Training Scheme

Fig. 6 Overview of our self-supervised partial shape scheme for learning an intrinsic-geometry-aware point feature extractor.

In the forthcoming section, we delve into the registration of partial cases via a deep functional maps prior, namely Partial-DFR. We start by giving a theoretical analysis of transformations between the spectral embeddings of full shapes and those of partial shapes sampled from the former (Sec. 3.2.1). Then, based on our theoretical result, we propose a self-supervised scheme for learning an intrinsic-geometry-aware point-based feature extractor on a set of meshes (Sec. 3.2.2). Finally, we show how our point-based feature extractor can be integrated into the recent neural-enhanced shape registration framework [38] (Sec. 3.2.3). In order to efficiently and accurately estimate correspondences between a source shape and a target partial point cloud, we propose a novel self-supervised scheme to train an intrinsic-geometry-aware point feature extractor, which is illustrated in Fig. 6. In particular, we assume to be given a set of human meshes as training data, and emphasize that no mesh input is used during inference. During training, we take a pair of meshes S, T, and generate a partial point cloud Tp from T.
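The spatially truncated embedding of Eqn. (4) is just a row slice of the full eigenbasis: assuming Φ_T is precomputed, no meshing and no eigendecomposition of Tp is needed, and disconnected samples are handled for free. A sketch with random stand-in data (for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 30
# stand-in for the first k Laplacian eigenvectors of the full shape T
Phi_T = rng.standard_normal((n, k))

# Sample Tp as a vertex subset of T. The natural inclusion map Pi_TpT is a
# row-selection matrix, so Eqn. (4) reduces to fancy indexing:
idx = rng.choice(n, size=200, replace=False)
Phi_Tp = Phi_T[idx, :]   # spatially truncated spectral embedding of Tp
```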
We use ray-casting for partial shape generation, which is detailed in Sec. 4.

Pre-computing full-to-full maps:
As mentioned in Sec. 1, our pipeline works both with and without correspondence annotations among the input training shapes. For the more non-trivial latter case, we simply train a Deep Functional Maps network [11] on the input meshes, and leverage its output (functional) maps in the following procedure.

Backbone:
For extracting features from (potentially partial) point clouds, we utilize the modified DGCNN proposed in [7], which is lightweight and robust with respect to point sampling density and distribution. In the following, we denote it by F.

Estimating Correspondence:
By passing S, Tp through F, we obtain the features F(S), F(Tp); the point-wise correspondence can then be computed by searching for the nearest point in F(S) for each query point in F(Tp). However, for the sake of differentiability, we adopt the following soft-map computation:

Π̃_TpS(i, j) = exp(−α δ_ij) / Σ_j' exp(−α δ_ij'),  ∀i ∈ [|S|], j ∈ [|Tp|],    (6)

where δ_ij = ‖F(S)(i,:) − F(Tp)(j,:)‖ and α is the temperature parameter, which is increased during training [11].

Training Loss:
We follow Eqn. (4) to compute the spatially truncated spectral embedding, Φ_Tp, for Tp. As outlined above, we have access to the functional map C_ST, either given or estimated. According to Prop. 1, C_ST is supposed to align Φ_Tp and
Φ_S, regardless of how Tp is generated. While it seems natural to formulate the loss

L(F) = ‖Φ_Tp C_ST − Π̃_TpS Φ_S‖²_F,

we notice that optimizing it requires computing the Frobenius norm of a matrix of varying dimension, which depends on the number of points in Tp. To alleviate this implementation difficulty, we propose the following two-step loss formulation. First, we optimize C_opt according to the following loss:

C_opt = argmin_C ‖Φ_Tp C − Π̃_TpS Φ_S‖²_F + λ‖∆_T C − C ∆_S‖²_F.    (7)

In particular, since we estimate Π̃_TpS from learned features, in order to suppress noise during training, we adopt the regularization of commutativity with the Laplacians (the second term on the right-hand side of Eqn. (7)). The above optimization is linear in C, and therefore admits a closed-form solution. Finally, the supervision from C_ST is imposed by

L(F) = ‖C_ST − C_opt‖²_F.    (8)

3.2.3 Partial Shape Registration

In this section, for the sake of self-containedness, we detail the registration pipeline we adopt from [38]. During registration, the source template shape at iteration k is denoted by S_k = {V_k, E}, V_k = {v_i^k | i = 1, 2, ..., N}, where v_i^k is the position of the i-th vertex. The target partial point cloud is denoted by Tp = {u_j | j = 1, ..., M}. Following [38], we first construct the deformation graph on the template: DG = {Θ ∈ R^{H×3}, ∆ ∈ R^{H×3}}, H = ⌊N/2⌋. For a given X_k, we can compute the displaced vertices via:

V_{k+1} = DG(X_k, V_k).    (9)

We then iteratively optimize DG through a cost function. The energy terms in our registration pipeline are as follows:

Correspondence Term measures the distance of filtered correspondences between S_k and Tp, given as:

E_corr = (1/|C_k|) Σ_{(v_i^k, u_j) ∈ C_k} ‖v_i^k − u_j‖²₂,    (10)

where C_k is the set of correspondences filtered by the correspondence filtering module proposed in [38].

Chamfer Distance Term measures the extrinsic distance between S_k and Tp.
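Since ∆_S and ∆_T are diagonal, the commutativity penalty in Eqn. (7) acts entrywise, so the problem decouples into one small linear system per column of C. A minimal numpy sketch of this closed-form solve (function name, shapes, and the per-column derivation are our own, under the stated diagonal assumption):

```python
import numpy as np

def solve_c_opt(phi_tp, soft_pi, phi_s, evals_t, evals_s, lam=1e-3):
    """Per-column closed-form solution of Eqn. (7):
    argmin_C ||Phi_Tp C - Pi~ Phi_S||_F^2 + lam * ||Delta_T C - C Delta_S||_F^2.

    With diagonal Delta's, the penalty on entry (i, j) is
    lam * (evals_t[i] - evals_s[j])^2 * C[i, j]^2, i.e. a per-column
    ridge-regularized least-squares problem."""
    A = phi_tp                    # (n_p, k) truncated spectral embedding
    B = soft_pi @ phi_s           # (n_p, k) pulled-back source embedding
    AtA, AtB = A.T @ A, A.T @ B
    k = A.shape[1]
    C = np.empty((k, k))
    for j in range(k):
        D2 = np.diag((evals_t - evals_s[j]) ** 2)
        C[:, j] = np.linalg.solve(AtA + lam * D2, AtB[:, j])
    return C
```

With lam = 0 this reduces to the plain least-squares alignment of Eqn. (5).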
However, in our case, the target point cloud in general corresponds to a true subset of the source template, which naturally leads to the following modified version of the Chamfer distance:

E_cd = (1/M) Σ_{j ∈ [M]} min_{i ∈ [N]} ‖v_i^k − u_j‖²₂.    (11)

As-Rigid-As-Possible Term reflects the deviation of the estimated local surface deformations from rigid transformations. We follow [38, 49, 50] and define it as:

E_arap = Σ_{h ∈ [H]} Σ_{l ∈ ψ(h)} (‖d_{h,l}(X)‖²₂ + β ‖R(Θ_h) − R(Θ_l)‖²₂),    (12)

d_{h,l}(X) = d_{h,l}(Θ, ∆) = R(Θ_h)(g_l − g_h) + ∆_h + g_h − (g_l + ∆_l).    (13)

Here, g ∈ R^{H×3} are the original positions of the nodes in the deformation graph DG, ψ(h) is the 1-ring neighborhood of the h-th deformation node, R(·) is Rodrigues' rotation formula, which computes a rotation matrix from an axis-angle representation, and β is the weight of the smooth rotation regularization term.

Total cost function: The total cost function E_total combines the above terms with the weighting factors λ_cd, λ_corr, and λ_arap to balance them:

E_total = λ_cd E_cd + λ_corr E_corr + λ_arap E_arap.    (14)

We refer readers to Alg. 1 of [38] for more details of the algorithm for minimizing E_total.

4 Experimental Results

We undertake a series of experiments employing DFR and its variant, Partial-DFR, to investigate the registration of full shapes and partial shapes, respectively.

4.1 Experimental Results of DFR

Datasets: We evaluate our method and several
state-of-the-art techniques for estimating corre- spondences between deformable shapes on an array of benchmarks as follows: FAUST r:The remeshed version of FAUST dataset [51], which consists of 100 human shapes (10 individuals performing the same 10 actions). We split the first 80 as training shapes and the rest as test- ing shapes; SCAPE r:The remeshed version of 9 SCAPE dataset [19], which consists 71 human shapes (same individual in varying poses). We split the first 51 as training shapes and the rest as testing shapes; SHREC19 r:The remehsed ver- sion of SHREC19 dataset [52], which consists of 44 shapes of different identities and poses. We use it solely in test, and follow the test pair list provided by [52]; SHREC07-H: A subset of SHREC07 dataset [53], which consists of 20 heterogeneous human shapes of the varying number of vertices. We use it solely in test, and use the accompa- nied sparse landmark annotations to quantify all the pairwise maps among them; DT4D-H: A dataset proposed in [54], which consists of 10 cate- gories of heterogeneous humanoid shapes. We use it solely in testing, and evaluating the inter-class maps split in [9]; TOPKIDS: This is a chal- lenging dataset [55] consisting of 26 shapes of a kid in different poses, which manifest significant topological perturbations in meshes. Baselines: We compare our method with an array of competitive baselines, including axiomatic shape registration methods: Smooth Shells [56], NDP [14], AMM [15]; learning- based registration methods: 3D-CODED [44], Deep Shells [57], TransMatch [27], SyNoRiM [26]; deep functional maps frameworks: SURFM- Net [58], WSupFMNet [16], GeomFMaps [59], DiffFMaps [6], NIE [7], ConsistFMaps [10], SSMSM [8]. According to input requirements, we put those relying on pure mesh input on the top, and the rest, suitable for point clouds, in the bottom of both tables. 
Train/Test Cases: Throughout this section, we consider a highly challenging scenario: for each learning-based method, we train two models, respectively, on the FAUST r and SCAPE r datasets, and then run tests on a range of test cases including FAUST r, SCAPE r, SHREC19 r, SHREC07-H, DT4D-H and TOPKIDS. The test pairs of each case have been described above. In the following, A/B means we train on dataset A and test on dataset B.

Datasets Alignment: There exist extrinsically aligned versions of all the involved datasets except SHREC07-H. We equally feed the aligned datasets to all the baselines when available. On the other hand, the original version of SHREC07-H manifests significant variability of orientations. For the sake of fairness and simplicity, we apply our orientation regressor to it and provide the baselines with our automatically aligned data. Note that all the aligned datasets, as well as the synthetic dataset on which we train our orientation regressor, roughly share the same canonical orientation, which is defined by the SMPL [60] model. Finally, we always apply automatic alignment before inputting shapes into our pipeline, whatever their original orientation.

Choice of Source Shape: As mentioned at the beginning of Sec. 3, we compute correspondences between two raw input point clouds by
associating them via a common source mesh S. In Tab. 1 and Tab. 2, we indicate our choice of source mesh by Ours-name, where name indicates the origin of the source mesh. For simplicity, we fix the source mesh from each dataset and visualize them in the appendix. On the other hand, when implementing axiomatic shape registration methods, we only consider deforming the same template we pick from FAUST r (resp. SCAPE r) to every test point cloud in the test set of FAUST r (resp. SCAPE r), and establish correspondences by map composition, in the same manner as ours.

Metric: Though we primarily focus on matching point clouds, we adopt the commonly used geodesic error, normalized by the square root of the total area of the mesh, to evaluate all methods, whether for meshes or point clouds.

Hyper-Parameters: We remark that all the hyper-parameters are fixed for all experiments in this section. In particular, we settle them by performing a grid search over the weights used in the final optimization to find the combination that leads to the best registration results (quantitatively in terms of Chamfer distance and qualitatively by visual inspection) on a few training shapes. We provide a more detailed discussion and an ablation of the choice of hyper-parameters in the appendix.

4.1.1 Experimental Results

Subsequently, we conduct the experiments of full shape registration on near-isometric benchmarks, non-isometric benchmarks and a topologically perturbed benchmark.

Table 1 Mean geodesic errors (×100) on FAUST r, SCAPE r and SHREC19 r. The best is highlighted.
Train:                          FAUST r                        SCAPE r
Method                 Test:    FAUST r  SCAPE r  SHREC19 r    SCAPE r  FAUST r  SHREC19 r
-- mesh input --
Smooth Shells [56]             2.5      \        \            4.7      \        \
SURFMNet(U) [58]               15.0     32.0     \            12.0     32.0     \
NeuroMorph(U) [61]             8.5      29.0     \            30.0     18.0     \
WSupFMNet(W) [16]              3.3      12.0     \            7.3      6.2      \
GeomFMaps(S) [59]              3.1      11.0     9.6          4.4      6.0      11.4
Deep Shells(W) [57]            1.7      5.4      26.6         2.5      2.7      21.4
ConsistFMaps(U) [10]           1.5      7.3      20.9         2.0      8.6      28.7
AttentiveFMaps(U) [9]          1.9      2.6      6.2          2.2      2.2      9.3
-- point cloud input --
NDP [14]                       20.4     \        \            16.2     \        \
AMM [15]                       14.2     \        \            13.1     \        \
CorrNet3D(U) [4]               63.0     58.0     \            58.0     63.0     \
3D-CODED(S) [44]               2.5      31.0     \            31.0     33.0     \
SyNoRiM(S) [26]                7.9      21.9     \            9.5      24.6     \
TransMatch(S) [27]             2.7      33.6     21.0         18.6     18.3     38.8
DPC(S) [5]                     11.1     17.5     31.0         17.3     11.2     28.7
DiffFMaps(S) [6]               3.6      19.0     16.4         12.0     12.0     17.6
NIE(W) [7]                     5.5      15.0     15.1         11.0     8.7      15.6
SSMSM(W) [8]                   2.4      11.0     9.0          4.1      8.5      7.3
Ours-SCAPE                     3.4      5.1      5.4          2.6      4.0      5.1
Ours-FAUST                     3.0      6.3      5.9          2.9      4.0      4.8

Near-isometric Benchmarks
As shown in Fig. 2, in the presence of large deformation between the template and the target, NDP [14] and AMM [15] fail completely, while our method delivers high-quality deformation. Moreover, as illustrated in Table 1, our method achieves the best performance in 5 out of 6 test cases. Remarkably, in the two most challenging tasks, FAUST r/SHREC19 r and SCAPE r/SHREC19 r, our method indeed outperforms all of the baselines, including the state-of-the-art methods that take meshes as input. Regarding point-based methods, SSMSM [8] performs well in the standard case and outperforms ours in FAUST r/FAUST r, but generalizes poorly to unseen shapes. Another important observation is that our method manifests robustness with respect to the choice of template shapes. In fact, the above observations remain true no matter which template we select.

Non-isometric Benchmarks
We stress-test our method on challenging non-isometric datasets, including SHREC07-H and DT4D-H. We emphasize that these test shapes are unseen during training. SHREC07-H contains 20 heterogeneous human shapes, whose numbers of vertices range from 3000 to 15000. Moreover, there exists some topological noise (e.g., the hands of the rightmost shape in Fig. 2 are attached to the thigh in the mesh representation). As shown in Fig. 2, SSMSM [8] barely delivers reasonable results, which might be due to the sensitivity of graph Laplacian construction on point clouds. Topological noise, on the other hand, degrades mesh-based methods like Deep Shells [57]. Meanwhile, as shown in Tab. 2, our method achieves a performance improvement of over 40% compared to the previous SOTA approaches (8.5 vs. 15.3; 5.9 vs. 13.4), which again confirms the robustness of our approach. Regarding DT4D-H, we follow the test setting of AttentiveFMaps [9], and only consider the more challenging inter-class mapping. Interestingly, our method outperforms the state-of-the-art point cloud-based approach by approximately 30% and exceeds the performance of mesh-based state-of-the-art methods by over 70%. Furthermore, even in the worst case, training on FAUST r and using the SCAPE template, our error is still the lowest compared to external baselines.

Topologically Perturbed Benchmark
The shapes in TOPKIDS present various adhesions, e.g., a hand/arm adhered to the abdomen/thigh.
Table 2  Mean geodesic errors (×100) on SHREC'07-H and DT4D-H. The best is highlighted.

Method              Input   FAUST r: SHREC07  DT4D | SCAPE r: SHREC07  DT4D
GeomFMaps [59]      mesh        30.5          38.5 |     28.9          28.6
Deep Shells [57]    mesh        30.6          35.9 |     31.3          25.8
ConsistFMaps [10]   mesh        36.2          33.5 |     37.3          38.6
AttentiveFMaps [9]  mesh        16.4          11.0 |     21.1          21.4
TransMatch [27]     pcd         25.3          26.7 |     31.2          25.3
DiffFMaps [6]       pcd         16.8          18.5 |     15.4          15.9
NIE [7]             pcd         15.3          13.3 |     13.4          12.1
SSMSM [8]           pcd         42.2          11.8 |     37.7           8.0
Ours-SCAPE          pcd          9.3           9.8 |      5.9           5.7
Ours-FAUST          pcd          8.5           \   |      6.1           \
Ours-CRYPTO         pcd          \             6.9 |      \             5.7

Given the ground-truth maps from each topologically perturbed shape to the reference shape, we evaluate all 25 maps (excluding the trivial one from the reference to itself) with models trained on FAUST r and SCAPE r, respectively. Since the orientation of the TOPKIDS dataset does not originally agree with the aligned version of the two training sets, we align the input with our orientation regressor before feeding it into the baselines that depend on extrinsic information [6–8, 57]. Regarding the template shape, we adopt the reference shape as our source mesh in registration. We report the quantitative comparisons in Tab. 3. It is obvious that topological noise poses great challenges for methods based on purely intrinsic information [9, 10, 59], while the counterparts perform relatively well.
In particular, among the point-based methods, ours outperforms the latest SOTA method by a large margin, namely over 40% relative error reduction (7.1 vs. 12.3). We refer readers to the appendix for a qualitative comparison, which agrees with the quantitative results above.

4.1.2 Ablation Study
We report ablation studies in Tab. 4, in which we verify the effectiveness of each core design formulated in Sec. 3. Throughout this part, we train on SCAPE r and test on SHREC07-H. In the registration stage, we use the SCAPE r template as the source mesh and deform it to each point cloud from SHREC07-H. It is evident that each ablated module contributes to the final performance. In particular, both stages of registration play an equally important role in our framework. Finally, to validate the superiority of our registration scheme, we report in Tab. 5 the average errors of the initial maps computed by our point-based DFM, and those of the output maps of Ours, NDP [14], and AMM [15], all of which build on the former. It is evident that, across three different test sets, our method consistently improves the initial maps, while NDP and AMM can even produce maps worse than the initial input.

4.2 Experimental Results of Partial-DFR
4.2.1 Model Training
In this section, we train two versions of the point feature extractor using different datasets and correspondence annotations. To generate partial shapes for training and testing, we render 12 partial-view point clouds for each full shape from the training set, oriented in the directions from the 12 vertices of a regular icosahedron to the origin, and obtain the point clouds via ray-casting. Unlike previous partial-view datasets such as those in [8, 20], which only render shapes around the z-axis, our dataset features a diverse range of connectedness, partiality patterns, and sizes. Ours-S&F is trained on S&F, an assembled dataset consisting of the remeshed versions of the FAUST dataset [51] and the SCAPE dataset [19].
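The view sampling described above — one camera direction per vertex of a regular icosahedron, looking toward the origin — can be sketched as follows. This is a minimal NumPy illustration, not the paper's code; the actual rendering/ray-casting step (e.g., with a mesh ray-casting library) is omitted:

```python
import numpy as np

def icosahedron_view_directions():
    """Unit view directions from the 12 vertices of a regular
    icosahedron toward the origin (one direction per partial view)."""
    # The 12 vertices are the cyclic permutations of (0, ±1, ±phi),
    # where phi is the golden ratio.
    phi = (1 + 5 ** 0.5) / 2
    verts = []
    for a, b in [(1, phi), (1, -phi), (-1, phi), (-1, -phi)]:
        verts += [(0, a, b), (a, b, 0), (b, 0, a)]
    verts = np.array(verts, dtype=float)
    # Negate so each direction points from the vertex toward the origin.
    return -verts / np.linalg.norm(verts, axis=1, keepdims=True)

dirs = icosahedron_view_directions()
```

Each of the 12 directions then defines one virtual camera from which a partial-view point cloud is obtained.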
We merge the training and testing shapes of the two datasets, which in total consist of 171 human shapes. We split the first 80 shapes of FAUST r and 51 of SCAPE r as training shapes and use the rest as test shapes. Regarding full-shape correspondence annotation, we utilize full-to-full maps pre-computed by an unsupervised DFM [11] for supervision, eliminating the need for any ground-truth correspondence. Ours-M&P is trained on 340 shapes sampled from the MoSh and PosePrior sequences, which are included in the AMASS [62] dataset. As the shapes from the AMASS dataset are all in identity correspondence with each other, we directly leverage them as the supervision signal.

Table 3  Mean geodesic errors (×100) on TOPKIDS with models trained on FAUST r and SCAPE r. The best is highlighted.

                    GeomFMaps [59]  Deep Shells [57]  ConsistFMaps [10]  AttentiveFMaps [9]  DiffFMaps [6]  NIE [7]  SSMSM [8]  Ours
FAUST r \ TOPKIDS        26.2            14.7              35.9               31.7              20.5        18.9      14.2      8.9
SCAPE r \ TOPKIDS        21.7            15.3              33.1               39.4              18.0        16.2      12.3      7.1

Table 4  Mean geodesic errors (×100) on different ablated settings; the models are all trained on SCAPE r and tested on SHREC'07.

w/o Registration  w/o Stage I  w/o Stage II  w/o updating corre.  w/o cons. filter  Full
      11.5            10.6          10.1              8.1                7.2         5.9

Table 5  Mean geodesic errors (×100) of Ours, NDP, AMM based on the same initial maps.

       SCAPE r  SHREC19 r  SHREC07-H
Ini.     5.5       8.1        11.5
NDP      5.4      11.4         8.9
AMM     11.4      10.7         8.8
Ours     2.6       5.1         5.9

Table 6  Mean geodesic errors (×100) on the SCAPE-FAUST partial dataset from regular icosahedron views. The best is highlighted.

Method             Geo. error (×100)
DPFM [17] unsup         13.15
DPFM [17] sup            5.18
NIE [7]                 11.17
DiffFmaps [6]           12.04
SSMSM [8]                8.12
HCLV2S [20]              6.98
Ours                     2.33

Integration with Neural Enhanced Shape Registration: Throughout the experimental results, unless otherwise specified (Sec. 4.2.5), we always report the matching score of the full pipeline combining our learned point-feature extractor with the registration component proposed in [38].

Baselines: We compare our method with an array of competitive baselines, including axiomatic shape registration methods: NDP [14], AMM [63]; learning-based registration methods: DFR [38], HCLV2S [20]; and deep functional maps frameworks: DPFM [17], GeomFMaps [59], DiffFMaps [6], NIE [7], SSMSM [8].

4.2.2 Partial Shape Matching without Correspondence Labels
In this section, we train and test all the baselines in the same way as we train Ours-S&F, with the exception of HCLV2S [20], as the latter requires one-to-one correspondences between a template and all training partial shapes, which the S&F dataset does not provide. Instead, we evaluate with the official checkpoint from HCLV2S [20], which is trained on 190K shapes from the SURREAL dataset. Table 6 reports the quantitative results of our method and several recent baselines, where our method outperforms the others by a large margin (55.0% error reduction compared to the second best, supervised DPFM [17]). It is noteworthy that DPFM is designed for connected meshes and inherently underperforms when handling meshes with disconnected regions.
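Our protocol retains only the largest connected region of each partial shape before feeding it to DPFM. A minimal union-find sketch of that preprocessing, assuming NumPy vertex/face arrays (this is an illustration, not the paper's actual code):

```python
import numpy as np

def largest_connected_component(verts, faces):
    """Keep only the largest vertex-connected component of a triangle
    mesh. verts: (V, 3) float array; faces: (F, 3) int array."""
    parent = np.arange(len(verts))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Union vertices along face edges; a-b and b-c already connect a,b,c.
    for a, b, c in faces:
        for u, v in ((a, b), (b, c)):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv

    roots = np.array([find(i) for i in range(len(verts))])
    keep = roots == np.bincount(roots).argmax()  # largest component
    # Re-index the surviving vertices and drop faces that leave the set.
    remap = -np.ones(len(verts), dtype=int)
    remap[keep] = np.arange(keep.sum())
    kept_faces = faces[keep[faces].all(axis=1)]
    return verts[keep], remap[kept_faces]
```

On a mesh with several disconnected pieces, the function returns the vertices and re-indexed faces of the largest piece only.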
During both training and testing, we consistently retain only the largest connected region of each partial shape. Despite this extra care, the supervised version of DPFM still performs significantly worse than our method, let alone the unsupervised version. The results reveal that, despite a significantly larger training set and correspondence supervision, DPFM still struggles to generalize effectively to unseen, diverse partial views.

4.2.3 Stress Tests on External Tasks
In this part, we stress-test our model on three external challenging tasks from [8, 17, 20], respectively. We emphasize that our models are trained without any knowledge of the task or the test data.

PFARM
In the first task, we consider the PFARM benchmark proposed by DPFM [17], which is used for testing the generalizability of methods trained on CUTS [17]. Tab. 7 reports the test errors of our two models. We highlight that our training shapes are all obtained from partial views evenly distributed on the sphere around the training shapes; cut partiality is therefore alien to our models. Yet, each of our models achieves at least 30% improvement over the baselines listed in [17]. Moreover, all the baselines in Tab. 7 utilize ZoomOut [35] for post-processing; we refer readers to the Supplementary Material for the raw output performance.

Table 7  Mean geodesic errors (×100) comparison with other methods on the PFARM dataset. The best is highlighted.

Method                 Geo. error (×100)
PFM(+ZO) [39]               42.34
FSP(+ZO) [64]               53.15
DOC(+ZO) [65]               51.78
GeomFMaps(+ZO) [59]         22.22
DPFM(+ZO) [17]              10.53
SSMSM [8]                   12.32
Ours-S&F                     7.35
Ours-M&P                     8.30

Table 8  Mean geodesic errors (×100) on the SURREAL partial-view dataset from SSMSM. The best is highlighted.

Method             Geo. error (×100)
DPFM [17] unsup         12.0
DPFM [17] sup            7.8
SSMSM [8]                6.3
Ours-S&F                 6.3
Ours-M&P                 4.9

Surreal-PV
In the second task, we test with the benchmark proposed in SSMSM [8], which consists of 5000 partial-view point clouds cropped from the SURREAL dataset. In particular, the baselines in Tab. 8 are all trained on 4000 shapes from it and tested on the rest. In contrast, we directly test our model on the remaining 1000 point clouds. Remarkably, the model trained on only 171 shapes, Ours-S&F, already achieves on-par performance with [8]. Furthermore, our model trained on the larger dataset surpasses the second best method by 22%.

FAUST & SHREC19
In the last task, we follow the official repository of HCLV2S [20] to sample 100 partial-view point clouds from each shape in the FAUST and SHREC19 datasets, resulting in 10000 and 4400 test cases, respectively. It is worth noting that while shapes in FAUST share the same triangulation as the template (as used in [20]), SHREC19 contains shapes of much more significant variability in pose, style, and vertex count. As illustrated in Table 9, our method significantly surpasses HCLV2S, achieving over 20% and 40% relative error reduction on the FAUST and SHREC19 datasets, respectively. This further confirms the robustness and efficiency of our method when applied to partial-view datasets.

Table 9  Euclidean distance evaluation for the partial-to-full correspondence task. We report average correspondence error, 5cm-recall, and 10cm-recall on the FAUST and SHREC'19 datasets from the HCLV2S [20] baseline. The best is highlighted.

Method        FAUST: AE(cm)  5cm-recall  10cm-recall | SHREC19: AE(cm)  5cm-recall  10cm-recall
DHBC [66]         10.91        0.503        0.772    |     17.24          0.401        0.646
SMPL [60]          1.98        0.932        0.973    |      5.48          0.751        0.897
HCLV2S [20]        1.90        0.953        0.998    |      4.81          0.810        0.970
Ours-M&P           1.46        0.968        0.996    |      2.68          0.886        0.951

4.2.4 Matching Alien Point Clouds
In this section, we use Ours-M&P throughout. We demonstrate the robustness of our model to heterogeneous partial point clouds and noisy real scans with outlier points, as well as its capacity to match source shapes unseen during training.

Fig. 7  Qualitative comparison with HCLV2S [20] on matching heterogeneous partial point clouds.

To start with, as shown in Fig. 7, we compare with HCLV2S [20] on matching heterogeneous partial point clouds with the template from HCLV2S. It is evident from the top row that the competing method cannot handle an alien body type (the male shape is dressed, and differs from most of the training shapes from SURREAL used in HCLV2S).

Fig. 8  Qualitative results on noisy real scans. Our method transfers the facial details, stripes on the T-shirt, and arms precisely and smoothly.

In the second set of experiments, illustrated in Fig. 8, apart from HCLV2S, we also compare with non-rigid shape registration methods. Similar to the last example, we keep using the template in Fig. 7 as the source shape. Visual inspection clearly shows that our texture transfer outperforms all the baselines by a significant margin. Note, especially, how the facial details, stripes on the T-shirt, and arms are transferred by the different methods. We remark that our texture transfer is purely geometric, without using any texture information in the input.

Fig. 9  Qualitative showcase of different source shape selections. Though tested on the largest connected component of the input mesh, DPFM [17] falls short in performance.

In the last example, we showcase the flexibility of our pipeline in terms of source shape selection. In contrast, HCLV2S [20] is trained to match any input with a fixed template shape. In the case of Fig. 9, the input point clouds are sampled from a sequence of a girl in a robe dancing samba, which significantly deviates from the fixed template from SURREAL, thus leading to degraded output, as shown in the second column from the right. Shape registration methods [14] fail due to the relatively large deformation between the source and the target point clouds. Finally, we train DPFM [17] on the training set of the S&F dataset, and test it on the largest connected component of the input mesh (which none of the other methods uses), yet it still fails to deliver reasonable results.

4.2.5 Ablation Study
We report ablation studies in Table 11, in which we verify the effectiveness of each core design formulated in Sec. 3. Throughout this part, we train and test on the S&F dataset. As demonstrated in Table 11, utilizing the dense full-to-full map directly as supervision, in place of the functional map, results in significant geodesic errors. In particular, our methods outperform DFR [38] by over 30% in both stages (2.33 vs. 3.49, 4.41 vs. 6.75).
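All matching scores above are mean geodesic errors (×100). As a reference point, a common approximation computes shortest paths along mesh edges and normalizes by the square root of the total surface area. This is a sketch under that assumption, not the papers' evaluation code (exact geodesics would require, e.g., the heat method):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def mean_geodesic_error(verts, faces, pred, gt):
    """Mean geodesic error (x100): distance on the target mesh between
    predicted and ground-truth vertices, normalized by sqrt(total area).
    Geodesics are approximated by shortest paths along mesh edges."""
    # Collect unique undirected edges with their Euclidean lengths.
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    e = np.unique(np.sort(e, axis=1), axis=0)
    w = np.linalg.norm(verts[e[:, 0]] - verts[e[:, 1]], axis=1)
    n = len(verts)
    g = coo_matrix((np.r_[w, w], (np.r_[e[:, 0], e[:, 1]],
                                  np.r_[e[:, 1], e[:, 0]])), shape=(n, n)).tocsr()
    # Shortest-path distance from each ground-truth vertex to its prediction.
    d = dijkstra(g, directed=False, indices=np.asarray(gt))
    err = d[np.arange(len(gt)), np.asarray(pred)]
    # Normalize by the square root of the total surface area.
    cross = np.cross(verts[faces[:, 1]] - verts[faces[:, 0]],
                     verts[faces[:, 2]] - verts[faces[:, 0]])
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    return 100.0 * err.mean() / np.sqrt(area)
```

A perfect map scores 0; mispredicting a vertex one edge away on a unit-area mesh contributes its edge length ×100.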
4.3 Medical Application
To further validate the effectiveness of our approach in medical applications, we normalized the stomach subset from 60 patients in MedShapeNet [71] (comprising 3,540 pairs), and obtained 12 partial shapes for each shape through the perspective processing of a regular dodecahedron (consistent with the partial-shape processing in Table 6). Based on these pairs, we conducted training on both full and partial shapes using SSMSM [8], Point2SSM [70], and our method. Our method demonstrated the best performance in stomach medical dataset registration, in both the full and partial cases. Besides, following Point2SSM [70], we test our method on the spleen and pancreas subsets for quantitative results. As shown in Tab. 10, our method outperforms the second best by a 50% relative error reduction.

Table 10  Statistical shape analysis on the spleen and pancreas medical datasets in terms of chamfer distance. The best is highlighted.

CD (mm)    PN-AE [67]  DG-AE [13]  CPAE [68]  ISR [69]  DPC [5]  Point2SSM [70]  Ours
Spleen        43.7        43.5        61.3      17.6      10.6        3.4         1.7
Pancreas      22.0        21.0        18.8       7.4       6.1        2.7         1.3

Table 11  Mean geodesic errors (×100) on the SCAPE-FAUST dataset. The best is highlighted.

Method                      Geo. error (×100)
Ours full                         2.33
Ours feature                      4.41
Ours w/o Φ in training           36.55
DFR [38]                          3.49
DFR [38] feature                  6.75

5 Conclusion, Limitations and Future Work
In this paper, we propose a novel neural feature-guided shape registration (NFR) framework without correspondence supervision, which is also suitable for partial shapes. Our framework can perform registration between shapes undergoing significant intrinsic deformations, and exhibits superior generalizability over the learning-based competitors. Apart from several designs tailored for our intuitive pipeline, we also introduce a data-driven solution to ease the burden of extrinsically aligning non-rigid point clouds. To better handle partial-to-full cases, NFR introduces a novel scheme that enhances the robustness of partial functional maps by computing spectral embeddings of partial shapes, conditioned on their complete versions. Further, we develop a point feature extractor trained on a dataset of complete meshes in a self-supervised manner. We verify our framework through a series of challenging tests, in which it shows not only superior performance but also remarkable robustness.

Limitation and Future Work
The primary limitation of our method lies in its optimization process, which involves iterative updates of correspondences between intermediate and target point clouds. This procedure requires several seconds to converge, indicating room for improved efficiency.

References
[1] Yu, T., Zheng, Z., Guo, K., Zhao, J., Dai, Q., Li, H., Pons-Moll, G., Liu, Y.: Doublefusion: Real-time capture of human performances with inner body shapes from a single depth sensor. In: CVPR (2018)
[2] Wang, J., Tan, S., Zhen, X., Xu, S., Zheng, F., He, Z., Shao, L.: Deep 3d human pose estimation: A review.
Computer Vision and Image Understanding (2021)
[3] Paravati, G., Lamberti, F., Gatteschi, V., Demartini, C., Montuschi, P.: Point cloud-based automatic assessment of 3d computer animation courseworks. IEEE Transactions on Learning Technologies 10(4), 532–543 (2016)
[4] Zeng, Y., Qian, Y., Zhu, Z., Hou, J., Yuan, H., He, Y.: Corrnet3d: Unsupervised end-to-end learning of dense correspondence for 3d point clouds. In: CVPR, pp. 6052–6061 (2021)
[5] Lang, I., Ginzburg, D., Avidan, S., Raviv, D.: Dpc: Unsupervised deep point correspondence via cross and self construction. In: 2021 International Conference on 3D Vision (3DV), pp. 1442–1451 (2021). IEEE
[6] Marin, R., Rakotosaona, M.-J., Melzi, S., Ovsjanikov, M.: Correspondence learning via linearly-invariant embedding. Advances in Neural Information Processing Systems 33, 1608–1620 (2020)
[7] Jiang, P., Sun, M., Huang, R.: Neural intrinsic embedding for non-rigid point matching. In: CVPR (2023)
[8] Cao, D., Bernard, F.: Self-supervised learning for multimodal non-rigid shape matching. In: CVPR (2023)
[9] Li, L., Donati, N., Ovsjanikov, M.: Learning multi-resolution functional maps with spectral attention for robust shape matching. In: NeurIPS (2022)
[10] Cao, D., Bernard, F.: Unsupervised deep multi-shape matching. In: ECCV (2022)
[11] Sun, M., Mao, S., Jiang, P., Ovsjanikov, M., Huang, R.: Spatially and spectrally consistent deep functional maps. In: ICCV (2023)
[12] Sharp, N., Attaiki, S., Crane, K., Ovsjanikov, M.: Diffusionnet: Discretization agnostic learning on surfaces. ACM Transactions on Graphics (2022)
[13] Wang, Y., Sun, Y., Liu,
Z., Sarma, S.E., Bronstein, M.M., Solomon, J.M.: Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG) (2019)
[14] Li, Y., Harada, T.: Non-rigid point cloud registration with neural deformation pyramid. In: NeurIPS (2022)
[15] Yao, Y., Deng, B., Xu, W., Zhang, J.: Fast and robust non-rigid registration using accelerated majorization-minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023)
[16] Sharma, A., Ovsjanikov, M.: Weakly supervised deep functional maps for shape matching. In: NeurIPS (2020)
[17] Attaiki, S., Pai, G., Ovsjanikov, M.: Dpfm: Deep partial functional maps. In: 2021 International Conference on 3D Vision (3DV), pp. 175–185 (2021). IEEE
[18] Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., Black, M.J.: Smpl: A skinned multi-person linear model. ACM Trans. Graph. 34(6), 248:1–248:16 (2015) https://doi.org/10.1145/2816795.2818013
[19] Anguelov, D., Srinivasan, P., Koller, D., Thrun, S., Rodgers, J., Davis, J.: SCAPE: Shape Completion and Animation of People (2005)
[20] Huang, X., Yang, H., Vouga, E., Huang, Q.: Dense correspondences between human bodies via learning transformation synchronization on graphs. In: NeurIPS (2020)
[21] Amberg, B., Romdhani, S., Vetter, T.: Optimal step nonrigid icp algorithms for surface registration (2007)
[22] Zhou, Q.-Y., Park, J., Koltun, V.: Fast global registration. ECCV (2016)
[23] Li, K., Yang, J., Lai, Y.-K., Guo, D.: Robust non-rigid registration with reweighted position and transformation sparsity. IEEE Transactions on Visualization and Computer Graphics 25(6), 2255–2269 (2018)
[24] Bozic, A., Zollhofer, M., Theobalt, C., Nießner, M.: Deepdeform: Learning non-rigid rgb-d reconstruction with semi-supervised data. In: CVPR, pp. 7002–7012 (2020)
[25] Bozic, A., Palafox, P., Zollhöfer, M., Dai, A., Thies, J., Nießner, M.: Neural non-rigid tracking. In: NeurIPS, vol. 33, pp. 18727–18737 (2020)
[26] Huang, J., Birdal, T., Gojcic, Z., Guibas, L.J., Hu, S.-M.: Multiway Non-rigid Point Cloud Registration via Learned Functional Map Synchronization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1 (2022) https://doi.org/10.1109/TPAMI.2022.3164653
[27] Trappolini, G., Cosmo, L., Moschella, L., Marin, R., Melzi, S., Rodolà, E.: Shape registration in the time of transformers. Advances in Neural Information Processing Systems 34, 5731–5744 (2021)
[28] Boscaini, D., Masci, J., Melzi, S., Bronstein, M.M., Castellani, U., Vandergheynst, P.: Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. Computer Graphics Forum 34(5), 13–23 (2015) https://doi.org/10.1111/cgf.12693
[29] Li, Y., Harada, T.: Lepard: Learning partial point cloud matching in rigid and deformable scenes. In: CVPR (2022)
[30] Pinkall, U., Polthier, K.: Computing Discrete Minimal Surfaces and their Conjugates. Experimental Mathematics 2(1), 15–36 (1993)
[31] Ovsjanikov, M., Ben-Chen, M., Solomon, J., Butscher, A., Guibas, L.: Functional Maps: A Flexible Representation of Maps Between Shapes. ACM Transactions on Graphics (TOG) 31(4), 30 (2012)
[32] Nogneng, D., Ovsjanikov, M.: Informative descriptor preservation via commutativity for shape matching. Computer Graphics Forum 36(2), 259–267 (2017)
[33] Huang, R., Ovsjanikov, M.: Adjoint map representation for shape analysis and matching. In: Computer Graphics Forum, vol. 36, pp. 151–163 (2017). Wiley Online Library
[34] Ren, J., Poulenard, A., Wonka, P., Ovsjanikov, M.: Continuous and orientation-preserving correspondences via
functional maps. ACM Trans. Graph. 37(6), 248:1–248:16 (2018)
[35] Melzi, S., Ren, J., Rodolà, E., Wonka, P., Ovsjanikov, M.: Zoomout: Spectral upsampling for efficient shape correspondence. Proc. SIGGRAPH Asia (2019)
[36] Huang, R., Ren, J., Wonka, P., Ovsjanikov, M.: Consistent zoomout: Efficient spectral map synchronization. In: Computer Graphics Forum, vol. 39, pp. 265–278 (2020). Wiley Online Library
[37] Litany, O., Remez, T., Rodolà, E., Bronstein, A.M., Bronstein, M.M.: Deep functional maps: Structured prediction for dense shape correspondence. In: ICCV (2017)
[38] Jiang, P., Sun, M., Huang, R.: Non-rigid shape registration via deep functional maps prior. In: NeurIPS (2023)
[39] Rodolà, E., Cosmo, L., Bronstein, M.M., Torsello, A., Cremers, D.: Partial Functional Correspondence. In: Computer Graphics Forum (2016)
[40] Rodolà, E., Cosmo, L., Bronstein, M.M., Torsello, A., Cremers, D.: Partial functional correspondence. In: Computer Graphics Forum, vol. 36, pp. 222–236 (2017). Wiley Online Library
[41] Sharp, N., Crane, K.: A laplacian for non-manifold triangle meshes. Computer Graphics Forum (2020)
[42] Yu, T., Guo, K., Xu, F., Dong, Y., Su, Z., Zhao, J., Li, J., Dai, Q., Liu, Y.: Bodyfusion: Real-time capture of human motion and surface geometry using a single depth camera. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 910–919 (2017)
[43] Lin, W., Zheng, C., Yong, J.-H., Xu, F.: Occlusionfusion: Occlusion-aware motion estimation for real-time dynamic 3d reconstruction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1736–1745 (2022)
[44] Groueix, T., Fisher, M., Kim, V.G., Russell, B.C., Aubry, M.: 3d-coded: 3d correspondences by deep deformation. In: ECCV (2018)
[45] Hanocka, R., Fish, N., Wang, Z., Giryes, R., Fleishman, S., Cohen-Or, D.: Alignet: Partial-shape agnostic alignment via unsupervised learning. ACM Transactions on Graphics (TOG) 38(1), 1–14 (2018)
[46] Chen, J., Yin, Y., Birdal, T., Chen, B., Guibas, L.J., Wang, H.: Projective manifold gradient layer for deep rotation regression. In: CVPR (2022)
[47] Varol, G., Romero, J., Martin, X., Mahmood, N., Black, M.J., Laptev, I., Schmid, C.: Learning from synthetic humans. In: CVPR (2017)
[48] Huang, R., Achlioptas, P., Guibas, L., Ovsjanikov, M.: Limit shapes – a tool for understanding shape differences and variability in 3d model collections. Computer Graphics Forum 38(5), 187–202 (2019)
[49] Guo, C., Chen, X., Song, J., Hilliges, O.: Human performance capture from monocular video in the wild. In: 3DV (2021)
[50] Levi, Z., Gotsman, C.: Smooth rotation enhanced as-rigid-as-possible mesh animation. IEEE Transactions on Visualization and Computer Graphics 21(2), 264–277 (2014)
[51] Bogo, F., Romero, J., Loper, M., Black, M.J.: FAUST: Dataset and evaluation for 3D mesh registration. In: CVPR (2014)
[52] Melzi, S., Marin, R., Rodolà, E., Castellani, U., Ren, J., Poulenard, A., Wonka, P., Ovsjanikov, M.: Matching Humans with Different Connectivity. In: Eurographics Workshop on 3D Object Retrieval (2019). https://doi.org/10.2312/3dor.20191070
[53] Giorgi, D., Biasotti, S., Paraboschi, L.: Shape retrieval contest 2007: Watertight models track. SHREC competition (2007)
[54] Magnet, R., Ren, J., Sorkine-Hornung, O., Ovsjanikov, M.: Smooth non-rigid shape matching via effective
Dirichlet energy optimization. In: 3DV (2022)
[55] Lähner, Z., Rodolà, E., Bronstein, M.M., Cremers, D., Burghard, O., Cosmo, L., Dieckmann, A., Klein, R., Sahillioglu, Y.: Shrec'16: Matching of deformable shapes with topological noise. Proc. 3DOR 2(10.2312) (2016)
[56] Eisenberger, M., Lahner, Z., Cremers, D.: Smooth shells: Multi-scale shape registration with functional maps. In: CVPR (2020)
[57] Eisenberger, M., Toker, A., Leal-Taixé, L., Cremers, D.: Deep shells: Unsupervised shape correspondence with optimal transport. In: NeurIPS (2020)
[58] Roufosse, J., Sharma, A., Ovsjanikov, M.: Unsupervised deep learning for structured shape matching. In: ICCV (2019)
[59] Donati, N., Sharma, A., Ovsjanikov, M.: Deep geometric functional maps: Robust feature learning for shape correspondence. In: CVPR (2020)
[60] Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., Black, M.J.: SMPL: A skinned multi-person linear model. TOG 34(6), 248:1–248:16 (2015)
[61] Eisenberger, M., Novotny, D., Kerchenbaum, G., Labatut, P., Neverova, N., Cremers, D., Vedaldi, A.: Neuromorph: Unsupervised shape interpolation and correspondence in one go. In: CVPR (2021)
[62] Mahmood, N., Ghorbani, N., Troje, N.F., Pons-Moll, G., Black, M.J.: AMASS: Archive of motion capture as surface shapes. In: ICCV (2019)
[63] Yao, Y., Deng, B., Xu, W., Zhang, J.: Fast and robust non-rigid registration using accelerated majorization-minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023)
[64] Litany, O., Rodolà, E., Bronstein, A.M., Bronstein, M.M.: Fully spectral partial shape matching. In: Computer Graphics Forum, vol. 36, pp. 247–258 (2017). Wiley Online Library
[65] Cosmo, L., Rodola, E., Masci, J., Torsello, A., Bronstein, M.M.: Matching deformable objects in clutter. In: 3D Vision (3DV), 2016 Fourth International Conference On, pp. 1–10 (2016). IEEE
[66] Wei, L., Huang, Q., Ceylan, D., Vouga, E., Li, H.: Dense human body correspondences using convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1544–1553 (2016)
[67] Achlioptas, P., Diamanti, O., Mitliagkas, I., Guibas, L.J.: Learning representations and generative models for 3d point clouds. Proceedings of the 35th International Conference on Machine Learning (2018)
[68] Cheng, A.-C., Li, X., Sun, M., Yang, M.-H., Liu, S.: Learning 3d dense correspondence via canonical point autoencoder. Advances in Neural Information Processing Systems 34, 6608–6620 (2021)
[69] Chen, N., Liu, L., Cui, Z., Chen, R., Ceylan, D., Tu, C., Wang, W.: Unsupervised learning of intrinsic structural representation points. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9121–9130 (2020)
[70] Adams, J., Elhabian, S.: Point2ssm: Learning morphological variations of anatomies from point cloud. arXiv preprint arXiv:2305.14486 (2023)
[71] Li, J., Pepe, A., Gsaxner, C., Luijten, G., Jin, Y., Ambigapathy, N., Nasca, E., Solak, N., Melito, G.M., Memon, A.R., et al.: Medshapenet – a large-scale dataset of 3d medical shapes for computer vision. arXiv preprint arXiv:2308.16139 (2023)
arXiv:2505.22451v1 [cs.AI] 28 May 2025

AI Mathematician: Towards Fully Automated Frontier Mathematical Research

Yuanhang Liu4∗, Yanxing Huang3∗, Yanqiao Wang4∗, Peng Li2†, Yang Liu1,2†
1Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China
2Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China
3Department of Mathematical Sciences, Tsinghua University
4Qiuzhen College, Tsinghua University

Abstract
Large Reasoning Models (LRMs) have made significant progress in mathematical capabilities in recent times. However, these successes have been primarily confined to competition-level problems. In this work, we propose the AI Mathematician (AIM) framework, which harnesses the reasoning strength of LRMs to support frontier mathematical research. We have identified two critical challenges of mathematical research compared to competitions: the intrinsic complexity of research problems and the requirement of procedural rigor. To address these challenges, AIM incorporates two core strategies: an exploration mechanism to foster longer solution paths, and the pessimistic reasonable verification method to ensure reliability. This early version of AIM already exhibits strong capability in tackling research-level tasks. We conducted extensive experiments across several real-world mathematical topics and obtained promising results. AIM is able to autonomously construct substantial portions of proofs and uncover non-trivial insights within each research area. These findings highlight the potential of LRMs in mathematical discovery and suggest that LRM-based agent systems could significantly accelerate mathematical research in the future.
Table 1: Summary of mathematical problems addressed by our proposed AIM.

Mathematical Research Problem   Outcome
Quantum Algorithm Problem       AIM effectively completes the problem with a detailed solution process.
Absorbing Boundary Condition    AIM provides a substantially complete proof for this problem.
High Contrast Limit             AIM completes the main proof of the conclusion and produces other correct results.
Homogenization Problem          AIM provides partially correct conclusions and reasoning, offering instructive guidance for this problem.

∗Indicates equal contribution. †Correspondence to Peng Li <lipeng@air.tsinghua.edu.cn>, Yang Liu <liuyang2011@tsinghua.edu.cn>.

1 Introduction
Mathematics has long been considered the pinnacle of human intellect. After the emergence of large language models (LLMs), the capability border of AI systems has been expanded to an unprecedented level. Among these advancements, the progress in mathematical reasoning stands out as one of the most surprising and closely watched developments. It also brings to mind the earliest insights we had about artificial intelligence [Newell et al., 1957]. Only four years ago, in 2021, the most advanced LLM, GPT-3 175B, could not solve more than 35% of grade-school math problems even after thousands of training steps [Cobbe et al., 2021]. Then, in early 2024, state-of-the-art models could already solve more than 50% of high-school-level math problems, while their accuracy on the former dataset had already reached around 90% [Shao et al., 2024]. And within the last year, 2024, we witnessed the breakthrough of Large Reasoning Models (LRMs), which brought a significant boost in math reasoning [OpenAI et al., 2024, DeepSeek-AI et al., 2025, Team et al., 2025]. Recently, the cutting-edge models OpenAI o3 and o4-mini even achieved nearly 100% accuracy on the most challenging competition-level AIME 2024 and AIME 2025 datasets, surpassing almost all that humans can do [OpenAI, 2025].
The rapidly advancing mathematical capabilities of LRMs seem to make it possible for
them to undertake frontier mathematical research. In this work, we take initial steps in this promising and underexplored direction, reporting preliminary yet encouraging results to inspire future exploration. We propose an LRM-based mathematical research agent framework named AIM (AI Mathematician), with a special focus on frontier mathematical research. We have identified two prime challenges in transitioning from mathematical competitions to mathematical research:

• Complexity. Frontier math papers typically require tens of pages of proofs of lemmas or intermediate steps towards a theorem, while competitions require participants to accomplish technically sophisticated short proofs within hours. The complexity of mathematical research is also reflected in the breadth of domain knowledge it involves.
• Procedural Rigor. Frontier mathematical research mainly works with proof problems without an explicit answer. Their correctness relies on procedural rigor and extensive manual review. Currently, we still lack widely accepted and effective methods to assess a proof written in natural language.

We propose two approaches to bridge the gap between competition-level problem solving and frontier mathematical research. Although they appear to be simple techniques, they already enable LRMs to carry out, to some extent, frontier mathematical research. To conclude, our contributions can be divided into three parts:

• Exploration and Memory Mechanism that breaks a research topic into multiple steps. The AIM agent can explore around the statements of the given problem and obtain several lemmas. It will finally try to complete the solution when the time is right.
• Pessimistic Reasonable Verification (PRV) that requires an LRM to generate multiple different reviews of the same proof, and selects the worst review as the final assessment. In our experiments this method can notably improve the output quality.
• Preliminary Experiments on some frontier math problems.
We have conducted comprehensive experiments and manual reviews on several open problems in mathematics. AIM can already find proper approaches to solving these problems, and has made some impressive findings. Solutions generated by AIM still contain some flaws, but most of them could be resolved with the help of a professor and result in concrete mathematical discoveries.

2 Method

2.1 Overview

The overview of our proposed AIM framework is shown in Figure 1. In this early version, AIM consists of three LRM-based agents: an explorer, a verifier, and a refiner. These agents interact with each other and form two action loops: an exploration loop that enables longer reasoning paths, and a refine loop that guarantees the output quality. These action loops will be further discussed in the following subsections.

[Figure 1: The illustration of the AIM workflow. AIM adopts a direct workflow to explore the given research topic: an exploration and memory mechanism decomposes a research problem into multi-step exploration, while a verification and refinement mechanism guarantees the correctness and quality of proofs.]

To leverage AIM for mathematical research, a human expert should first provide some background and contextual
information of this research topic. This could include the definitions of terminologies and some preliminary conclusions required for the problem. These contents are then formatted and treated as a system prompt, which is visible to all three agents. After this, we can directly pass the research problem to the agent; it could be a certain intermediate problem, whether minor or significant, or the research topic as a whole. AIM will then try to solve this problem step by step and report all its findings to the user, which could be especially useful for human mathematicians.

2.2 Exploration and Memory Mechanism

When faced with complex research problems, even state-of-the-art LRMs are unable to produce correct and rigorous solutions in a single attempt. In our experiments, providing an LRM with the necessary background and directly prompting it to solve a research-level problem often leads to incorrect conclusions and unsound proofs lacking the required rigor. This is where our exploration mechanism comes into play. Rather than attempting to solve the problem outright, the explorer first engages in open-ended reasoning and exploration around the topic. It formulates a set of intermediate observations and hypotheses, presenting them as conjectures, each accompanied by a detailed proof. Importantly, the explorer refrains from asserting final conclusions unless it has sufficient confidence in the result. This entire mechanism is implemented purely through prompt engineering. In practice, the explorer demonstrates the ability to recognize uncertain situations and typically presents final proofs only after a comprehensive exploration process. The conjectures proposed during exploration are extracted and stored in memory. Following a verification step, valid conjectures are promoted to lemmas. The exploration process is then invoked iteratively, either until the problem is solved or a predefined exploration limit is reached.
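The exploration and memory mechanism can be summarized in pseudocode-like form. The following minimal Python sketch is ours, not the actual AIM implementation: `llm` stands in for the prompted explorer, `verify` for a single independent review in the verification step, and all names (`Memory`, `aim_loop`, the prompt wording) are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    lemmas: list[str] = field(default_factory=list)  # conjectures promoted after verification

def explore(problem: str, memory: Memory, llm) -> list[str]:
    """One exploration round: propose conjectures, guided by previously validated lemmas."""
    context = "\n".join(memory.lemmas)
    return llm(f"Known lemmas:\n{context}\n\nExplore around: {problem}")

def verified(conjecture: str, verify, n_reviews: int = 3) -> bool:
    """Pessimistic check: accept only if every independent review passes."""
    return all(verify(conjecture) for _ in range(n_reviews))

def aim_loop(problem: str, llm, verify, max_rounds: int = 10) -> Memory:
    """Iterate exploration up to a predefined limit (real AIM also stops once solved)."""
    memory = Memory()
    for _ in range(max_rounds):
        for conjecture in explore(problem, memory, llm):
            if conjecture not in memory.lemmas and verified(conjecture, verify):
                memory.lemmas.append(conjecture)  # promote to lemma
    return memory
```

With stubbed `llm` and `verify` callables, only conjectures that survive every review enter the memory, which is then fed back into the next round.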
In each iteration, previously validated lemmas are provided to guide further reasoning and facilitate deeper exploration.

2.3 Verification and Refine Mechanism

LRMs are prone to errors when proving lemmas and often produce proofs that lack the rigor required in mathematical reasoning. While we currently lack a formal method for verifying natural language proofs, we implement the Pessimistic Reasonable Verification (PRV) process to address this challenge. For each generated proof, we employ a separate, high-capacity LRM as a verifier to critically evaluate the reasoning and identify potential flaws. The verifier performs multiple independent reviews in parallel, and the proof is rejected if any one of these reviews deems it incorrect. This aligns with the standard in mathematical practice, where a valid proof must convincingly satisfy all reviewers. PRV thus serves as a practical simulation of such rigorous peer review in an automated setting. Moreover, each LRM-based review is required to include detailed, constructive feedback explaining the reasoning behind any objections. These rationales are then used to improve the proof. A dedicated refiner agent receives the original conjecture, its proof, and the comments from the verifier, and attempts to revise the proof to address the identified issues. If necessary, the refiner is also allowed to make minor adjustments to the conjecture itself. Additionally, correction suggestions can be provided to ensure that the refiner
module consistently incorporates certain fixed essential requirements when refining proofs. The refined proof is then returned to the verifier for re-evaluation. This creates an iterative refinement-verification loop that continuously enhances proof quality. Once a proof passes verification, it is accepted as a lemma and reintroduced into the exploration loop. If it fails verification after several iterations, it is eventually discarded.

3 Background and Related Work

3.1 Large Reasoning Model

Large Reasoning Models (LRMs) represent a new class of large language models characterized by their strong reasoning capabilities and the ability to generate exceptionally long chains of thought [OpenAI et al., 2024]. The concept was first introduced by OpenAI [OpenAI et al., 2024], while DeepSeek-R1 independently developed and disclosed an effective training methodology for such models [DeepSeek-AI et al., 2025]. LRMs are commonly trained using reinforcement learning with verifiable reward signals, particularly in domains such as mathematics and programming. More recent research has also explored alternative training strategies that significantly reduce the reliance on annotated data [Zuo et al., 2025, Sun et al., 2025, Wang et al., 2025]. Most existing LRMs have been trained primarily on high-school-level or competition-level mathematical problems, and we do not yet have a quantitative metric of their performance on theorem proving. Nevertheless, LRMs have already shown promising capabilities in tackling advanced mathematical proofs. With a carefully designed reasoning workflow, they hold the potential to address significantly more complex, research-level problems while mitigating certain limitations.

3.2 Agents

Current agentic AI systems are driven by LLMs, which often incorporate prompt engineering, dynamic task decomposition, persistent memory, and predefined workflows [Sapkota et al., 2025].
Agent systems can endow large language models with greater autonomy, enabling them to perform multi-step tasks, observe their environment, and invoke external tools, among other capabilities. LLM-based agent systems have already demonstrated strong capabilities in various domains [Wang et al., 2024]. With the advancement of LLMs and LRMs, some researchers have recently attempted to leverage agent systems for frontier research. AI Scientist [Lu et al., 2024] and AIGS [Liu et al., 2024] primarily attempted to automate research tasks in the field of AI. AlphaEvolve [Google DeepMind, 2025] adopted a coding agent for mathematical research and made remarkable progress on several long-standing math problems. These works are primarily centered around coding tasks or tasks that can be transformed into coding problems. To the best of our knowledge, we are the first to explore automated general mathematical research with an LLM agent, and we have already obtained certain achievements.

4 Experiments

We employed AIM to tackle four mathematically challenging research problems: three proved theorems and one open problem. Directly applying a single LRM to these problems yields incorrect results and does not constitute a valid proof process. We conducted a systematic analysis and interpretation of the preliminary experimental results, categorizing the material into three distinct groups with color-coded annotations: correct proofs (“[Correct]”), plausible inferences with missing details (“[Vague]”), and erroneous conclusions (“[Error]”). Our experiments mainly used two distinct models, DeepSeek-R1 [DeepSeek-AI et al., 2025] and OpenAI o4-mini [OpenAI, 2025].
The output syntax of OpenAI o4-mini is encoded in Unicode, which we subsequently transcribed into standard LaTeX with DeepSeek-V3 [DeepSeek-AI et al., 2025] for better readability. The following provides a brief overview of the targeted mathematical research problems and the current progress of the proofs produced by AIM.
• Quantum Algorithm Problem. The Linear Combination of Hamiltonian Simulation (LCHS, [An et al., 2023]) method is an efficient method in scientific computation. The Black-Scholes-Merton model is a fundamental mathematical framework in finance for pricing European-style options. This problem is to use LCHS to simulate the BSM model and design a corresponding quantum algorithm. AIM can basically solve the problem with a relatively detailed proof.
• Absorbing Boundary Condition. This is a classic theoretical problem in mathematical analysis involving partial differential equations, where precise approximate solutions are derived through boundary construction. Given the method, AIM can basically complete the proof of this problem.
• High Contrast Limit. This is an error analysis problem concerning the parametric limits of the Lamé-Stokes system, representing a complex theoretical issue in mathematical analysis and an important special case in practical applications. AIM gives the main process and proof of the core conclusion, although there are some irregularities in the derivation. In addition, the agent draws other correct conclusions.
• Homogenization Problem. Homogenization theory is a central field of study in mathematical analysis and applied mathematics, focusing on deriving the equivalent macroscopic properties of materials or systems with multiscale structures, such as periodic or stochastic features. In the process of exploring this problem, AIM gives some correct conclusions and ideas, which is of guiding significance for mathematicians to complete this research.
4.1 Quantum Algorithm Problem

4.1.1 Problem Description

Our problem is to apply the Linear Combination of Hamiltonian Simulation lemma [An et al., 2023] to the Black-Scholes-Merton partial differential equation. Our input problem description is as follows:

Lemma (Linear combination of Hamiltonian simulation, LCHS): For $t \in [0, T]$, let $A(t) \in \mathbb{C}^{N \times N}$ be decomposed into Hermitian and anti-Hermitian parts such that $A(t) = L(t) + iH(t)$, where $L(t) = \frac{1}{2}[A(t) + A^\dagger(t)]$ and $H(t) = \frac{1}{2i}[A(t) - A^\dagger(t)]$. Assume that $L(t)$ is positive semi-definite for all $t \in [0, T]$. Denoting the time-ordering operator by $\mathcal{T}$, we have
\[
\mathcal{T}\exp\left\{-\int_0^t A(s)\,ds\right\} = \int_{\mathbb{R}} \eta(k)\, u(t,k)\,dk,
\]
where $u(t,k)$ is the propagator for a time-dependent Hamiltonian simulation problem such that $u(t,k) = \mathcal{T}\exp\left\{-i\int_0^t [H(s) + kL(s)]\,ds\right\}$, and $\eta(k) = \frac{1}{\pi(1+k^2)}$ is the kernel function with respect to $k$.

BSM model: The PDE in the BSM model is given by
\[
\frac{\partial V(S,t)}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0.
\]
Boundary conditions are characterized by: $V(0,t) = 0$ for all $0 < t \le T$; $V(S,t) \to S - Ke^{-r(T-t)}$ as $S \to \infty$; $V(S,T) = \max\{S - K, 0\}$.

Your tasks are to: use the idea of LCHS to deal with the BSM model; design a quantum algorithm to solve the equation; and compute the complexity (query complexity to the state preparation oracle and operator oracles, gate complexity, ancilla qubits; parameters: norm of the boundary and initial data, $T$, approximation error $\epsilon$, etc.).

4.1.2 Summary of Proof Results

We use AIM based on DeepSeek-R1 to explore this problem. The solution of our framework AIM addresses the Black-Scholes-Merton (BSM)
model by transforming the PDE into a heat equation and leveraging quantum algorithms. The core steps are:
• PDE Transformation and Spatial Discretization: Use variable substitution to convert the BSM PDE into a standard heat equation, then discretize the spatial variable (Lemma 1).
• Operator Decomposition and Integral Discretization: Apply the Linear Combination of Hamiltonian Simulation (LCHS) lemma to decompose the heat equation's solution operator into a continuous linear combination of unitary propagators, then approximate the continuous integral via discretization (Lemmas 2-3).
• Quantum Implementation and Complexity Analysis: Use the Linear Combination of Unitaries (LCU) framework and Hamiltonian simulation techniques for quantum implementation, and analyze the complexity (Lemmas 3-7).
The overall approach is correct. Transforming the BSM partial differential equation into the heat equation and discretizing the spatial variable first, then using the LCHS lemma to transform the equations and discretizing the integrals, are all correct and detailed. However, the complexity calculation is vague and insufficiently detailed, with some errors.

4.1.3 Detailed Analysis of Selected Proofs

1. The Black-Scholes-Merton (BSM) partial differential equation (PDE) can be transformed into the standard heat equation via a logarithmic substitution. Specifically, under the transformation $x = \ln(S/K) + (r - \sigma^2/2)(T-t)$, $\tau = \sigma^2(T-t)/2$, and $V(S,t) = e^{-r(T-t)}U(x,\tau)$, the BSM PDE reduces to
\[
\frac{\partial U}{\partial \tau} = \frac{\partial^2 U}{\partial x^2}.
\]
Discretizing this PDE spatially yields a matrix equation $\partial_\tau |U\rangle = -B|U\rangle$, where $B = -\partial_x^2$. The operator $B$ is positive semi-definite and can be directly interpreted as $B = L$, where $L = -\partial_x^2$. While the original Hermitian decomposition $B = L + iH$ (with $H = 0$) trivializes the LCHS lemma, the dissipative nature of $B$ permits alternative quantum simulation methods tailored to parabolic equations.

Proof. **Step 1: Transform BSM to the Heat Equation** Start with the BSM PDE:
\[
\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0.
\]
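Before the detailed derivation, the claimed reduction can be spot-checked symbolically. The following sketch is ours (using sympy); the particular heat-equation solution $U = e^{-\tau}\sin x$ is an arbitrary illustrative choice. It pushes a known solution of $\partial_\tau U = \partial_x^2 U$ through the stated substitution and confirms that the BSM operator annihilates the resulting $V$.

```python
import sympy as sp

S, t = sp.symbols('S t', positive=True)
K, r, sigma, T = sp.symbols('K r sigma T', positive=True)

# The substitution from the quoted proof.
x = sp.log(S / K) + (r - sigma**2 / 2) * (T - t)
tau = sigma**2 * (T - t) / 2

# Any solution of the heat equation U_tau = U_xx will do; exp(-tau)*sin(x) is one.
U = sp.exp(-tau) * sp.sin(x)
V = sp.exp(-r * (T - t)) * U

# Apply the BSM operator to V; it should vanish identically.
bsm = (sp.diff(V, t)
       + sp.Rational(1, 2) * sigma**2 * S**2 * sp.diff(V, S, 2)
       + r * S * sp.diff(V, S)
       - r * V)
print(sp.simplify(bsm))  # -> 0
```

The same check with any other heat-equation solution gives zero as well, since the BSM operator applied to $V$ equals $e^{-r(T-t)}\frac{\sigma^2}{2}(\partial_x^2 U - \partial_\tau U)$.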
Apply the substitution $V(S,t) = e^{-r(T-t)}U(x,\tau)$, $x = \ln(S/K) + (r - \sigma^2/2)(T-t)$, and $\tau = \sigma^2(T-t)/2$. Compute the derivatives rigorously:
\[
\partial_t V = e^{-r(T-t)}\left[ rU - \frac{\sigma^2}{2}\partial_\tau U - (r - \sigma^2/2)\partial_x U \right], \quad
\partial_S V = e^{-r(T-t)}\,\partial_x U \cdot \frac{1}{S}, \quad
\partial_S^2 V = e^{-r(T-t)}\left( \partial_x^2 U - \partial_x U \right)\cdot \frac{1}{S^2}.
\]
Substitute into the BSM PDE:
\[
e^{-r(T-t)}\left[ rU - \frac{\sigma^2}{2}\partial_\tau U - (r - \sigma^2/2)\partial_x U + \frac{1}{2}\sigma^2(\partial_x^2 U - \partial_x U) + r\partial_x U - rU \right] = 0.
\]
Multiply through by $e^{r(T-t)}$, cancel terms, and simplify:
\[
-\frac{\sigma^2}{2}\partial_\tau U + \frac{\sigma^2}{2}\partial_x^2 U = 0 \;\Longrightarrow\; \partial_\tau U = \partial_x^2 U.
\]
**Step 2: Spatial Discretization** Discretize $x$ on a grid with spacing $\Delta x$. Approximate $\partial_x^2 U$ via finite differences:
\[
\partial_x^2 U \approx \frac{U_{j+1} - 2U_j + U_{j-1}}{\Delta x^2}.
\]
Let $D_2$ be the discretized second-derivative matrix. The operator becomes $B = -D_2$.
**Step 3: Positive Semi-Definite Operator** The matrix $B = -D_2$ is positive semi-definite because $D_2$ (a centered second-derivative discretization) is negative semi-definite. This aligns with the dissipative nature of the heat equation.
**Step 4: Implications for Quantum Simulation** While the original decomposition $B = L + iH$ (with $H = -ic\,\partial_x$) is invalid due to the absence of a first-order term, the operator $B$ itself is suitable for quantum simulation methods designed for parabolic equations. Specifically, the time evolution $\partial_\tau |U\rangle = -B|U\rangle$ can be simulated using Trotterization or other dissipative quantum algorithms, bypassing the need for the LCHS lemma.

[Correct] The variable substitution and equation transformation are correct, and the discretization of the spatial variable is correct. Moreover, AIM finds that the reduced PDE trivializes the LCHS lemma, which is indeed right!

First, AIM transforms the BSM equation into a heat equation, which simplifies the problem. Then AIM discretizes the spatial variable, which makes the problem satisfy the conditions of the theorem.

2. The integral
representation in the LCHS lemma for the BSM model can be approximated with precision $\epsilon$ using $O(1/\epsilon^2)$ terms through an adaptive discretization of the $k$-integral, leveraging the rapid decay of the Cauchy kernel $\eta(k) = \frac{1}{\pi(1+k^2)}$.

Proof. **Truncation Error Analysis**: The integral $I = \int_{-\infty}^{\infty} \eta(k)u(t,k)\,dk$ is truncated to $[-K, K]$. The tail error is bounded by
\[
\int_{|k|>K} \eta(k)\,dk = \frac{2}{\pi}\int_K^{\infty} \frac{1}{1+k^2}\,dk = \frac{2}{\pi}\left(\frac{\pi}{2} - \arctan(K)\right) \approx \frac{1}{\pi K} \quad \text{for } K \gg 1.
\]
Setting $\frac{1}{\pi K} \le \epsilon/2$ gives $K \ge \frac{2}{\pi\epsilon}$. Thus, $K = O(1/\epsilon)$.
**Adaptive Discretization**: 1. **Central Interval $[-K,K]$**: The entire truncated domain $[-K,K]$ must be discretized. To bound the discretization error, use a uniform grid with spacing $\Delta k = \Theta(\epsilon)$. The number of points is $N = O(K/\Delta k) = O(1/\epsilon^2)$. The discretization error using the trapezoidal rule scales as $O(K\Delta k^2 \cdot \max|\partial_k^2(\eta(k)u(t,k))|)$. Since $\eta(k)$ and $u(t,k)$ are smooth, $\max|\partial_k^2(\eta(k)u(t,k))|$ is bounded over $[-K,K]$. The total error becomes $O(K\Delta k^2) = O\left(\frac{1}{\epsilon}\cdot\epsilon^2\right) = O(\epsilon)$.
**Outer Interval Contribution**: The truncated integral already includes all significant contributions, as the tail beyond $K = O(1/\epsilon)$ contributes at most $\epsilon/2$. The central interval $[-K,K]$ is approximated with $O(1/\epsilon^2)$ points, ensuring the total discretization error is $O(\epsilon)$. Additional adaptive methods in outer intervals (e.g., geometrically spaced grids) would only reduce logarithmic factors, which are asymptotically dominated by $O(1/\epsilon^2)$.
**Total Terms**: The uniform grid over $[-K,K]$ requires $O(1/\epsilon^2)$ terms. While adaptive grids in outer regions could marginally reduce constants, the dominant term remains $O(1/\epsilon^2)$.
**Final Error**: Truncation error $O(\epsilon/2)$ + discretization error $O(\epsilon/2)$ = $O(\epsilon)$. Thus, $O(1/\epsilon^2)$ terms suffice for $\epsilon$-precision approximation.

[Correct] Truncate and discretize the integral correctly.

AIM analyzes the truncation error and determines the discretization spacing. This lays the groundwork for the construction of quantum algorithms.
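In the scalar case the lemma is easy to sanity-check numerically: for an eigenvalue $\lambda > 0$ of $L$, the LCHS identity with $H = 0$ reduces to $\int_{\mathbb{R}} \eta(k)e^{-ik\lambda\tau}\,dk = e^{-\lambda\tau}$, the Fourier transform of the Cauchy kernel. The sketch below is ours (parameter values are illustrative): it truncates at $K = O(1/\epsilon)$ and uses a uniform grid of spacing about $\epsilon$, as in the quoted proof.

```python
import numpy as np

lam, tau = 0.8, 1.0        # a positive eigenvalue of L and the evolution time
eps = 1e-3                 # target precision
Ktr = 2.0 / (np.pi * eps)  # truncation radius K = O(1/eps)
n = int(2 * Ktr / eps)     # uniform grid of spacing ~ eps -> O(1/eps^2) nodes
k = np.linspace(-Ktr, Ktr, n)
dk = k[1] - k[0]
eta = 1.0 / (np.pi * (1.0 + k**2))           # Cauchy kernel
approx = np.sum(eta * np.exp(-1j * k * lam * tau)) * dk
exact = np.exp(-lam * tau)                   # e^{tau B} in the scalar case, B = -lam
err = abs(approx - exact)
print(err)                                   # small, within the O(eps) budget
```

The observed error is dominated by the truncation tail; the oscillatory discretization error is far below it, consistent with the error budget above.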
3. The solution operator for the heat equation derived from the BSM model, $U(\tau) = e^{\tau B}$, can be approximated via the LCHS lemma as a linear combination of Hamiltonian simulations. Specifically, the integral representation $\int_{\mathbb{R}} \eta(k) e^{-ikL\tau}\,dk$ (where $L = -B$) can be discretized into $M = O(1/\epsilon^2)$ terms with quadrature weights $\eta(k_j)\Delta k_j$. Each term $e^{-ik_j L\tau}$ is a Hamiltonian simulation of $L$ scaled by $k_j\tau$. The total query complexity to the $L$-oracle is $\tilde{O}\left(\|L\|T/\epsilon^3\right)$, where $T = \frac{2\tau}{\sigma^2}$, and the gate complexity scales polynomially with $\|L\|$, $T$, $1/\epsilon$, and logarithmic factors.

Proof: **Step 1: Integral Representation via LCHS** From Lemma-0, the solution operator for the heat equation is $U(\tau) = e^{\tau B}$. Let $L = -B$, a positive semi-definite operator. By the LCHS lemma with $H = 0$, we express
\[
e^{\tau B} = \int_{\mathbb{R}} \eta(k)\, e^{-ikL\tau}\,dk, \qquad \eta(k) = \frac{1}{\pi(1+k^2)}.
\]
**Step 2: Quadrature Discretization** Lemma-1 guarantees the integral can be discretized into $M = O(1/\epsilon^2)$ terms with weights $\eta(k_j)\Delta k$, yielding
\[
e^{\tau B} \approx \sum_{j=1}^{M} \eta(k_j)\Delta k \cdot e^{-ik_j L\tau}.
\]
The truncation and discretization error is $O(\epsilon)$ due to the $O(1/k^2)$ decay of $\eta(k)$.

[Correct] Properly utilizing the LCHS lemma and memory lemmas for the problem transformation.

**Step 3: Hamiltonian Simulation Costs** Each term $e^{-ik_j L\tau}$ requires simulating $L$ for time $t_j = k_j\tau$. Using optimal Hamiltonian simulation algorithms (e.g., quantum signal processing or Taylor series) with query complexity $O(\|L\|t_j + \log(1/\epsilon_{\mathrm{sim}}))$, set $\epsilon_{\mathrm{sim}} = \epsilon/M$ to bound the total simulation error by $\epsilon$. Since $k_j = O(1/\epsilon)$ dominates the quadrature, $t_j = O(\tau/\epsilon)$. The per-term complexity becomes
\[
O(\|L\|\tau/\epsilon + \log(M/\epsilon)) = \tilde{O}(\|L\|\tau/\epsilon).
\]
**Step 4: Total Query Complexity** Summing over $M = O(1/\epsilon^2)$ terms, the total queries to the $L$-oracle are
\[
\tilde{O}\left(M \cdot \frac{\|L\|\tau}{\epsilon}\right) = \tilde{O}\left(\frac{\|L\|\tau}{\epsilon^3}\right).
\]
Expressing in terms of $T = \frac{2\tau}{\sigma^2}$ and absorbing constants:
\[
\tilde{O}\left(\frac{\|L\|T}{\epsilon^3}\right).
\]
**Step 5: Ancilla and Gate Complexity** The LCU framework requires $\log M = O(\log(1/\epsilon))$ ancilla qubits. The gate complexity inherits the $\tilde{O}(M)$-scaling from the superposition state, augmented by $O(\mathrm{poly}(\log(\|L\|T/\epsilon)))$ factors from the simulation subroutines. The dominant terms remain polynomial in $\|L\|$, $T$, and $1/\epsilon$, with polylogarithmic corrections.

[Error] There are some mistakes in the complexity computation, and the calculation process lacks detail.

Based on the previous memory lemmas, AIM constructs a quantum algorithm with Hamiltonian simulation and the Linear Combination of Unitaries (LCU), and then computes the complexity. Although there are some mistakes and missing details, the core methods are correct.

4.2 Absorbing Boundary Condition

4.2.1 Problem Description

Absorbing boundary conditions, formulated through modifications of conventional boundary formulations, significantly enhance solution clarity and computability at domain boundaries while preserving mathematical rigor. This specialized boundary treatment achieves mathematically precise approximations by establishing quantitative error control mechanisms between approximate and exact solutions, thereby maintaining theoretical soundness while improving computational tractability. Our study demonstrates the systematic verification of solution well-posedness and uniqueness for the heat conduction equation under specifically designed absorbing boundary constraints [J., 2003].

Let $u$ be a function defined on $(t,x) \in [0,\infty) \times \mathbb{R}^n$ satisfying the following equations:
\[
Lu = \partial_t u - \Delta u + \rho(x)u = 0, \quad t > 0,\; x \in \mathbb{R}^n; \qquad u = u_0, \quad t = 0,
\]
where the operator $L = \partial_t - \Delta + \rho(x)$, $\Delta$ is the Laplace operator, and $\rho(x)$ is a non-negative function of $x$. We focus on a bounded open set $\Omega$ in $\mathbb{R}^n$, and denote $\partial\Omega = S$. By restricting the function $u$ to the bounded open set $\Omega$, we investigate the behavior of the solution within this bounded open set.
The absorbing boundary conditions are as follows:
\[
\begin{cases}
Lu = 0, & [0,\infty) \times \Omega,\\
\dfrac{\partial u}{\partial \nu} = -\beta u - \sum_{k=1}^{m} \alpha_k (\partial_t - \Delta_S)\varphi_k = F_1 u, & [0,\infty) \times S,\\
u = u_0, & t = 0,
\end{cases}
\tag{1}
\]
\[
\begin{cases}
(\partial_t - \Delta_S + d_k)\varphi_k = u, & t \ge 0,\\
\varphi_k = 0, & t = 0.
\end{cases}
\]
The problem to be proved is that, given any $u_0 \in L^2(\Omega)$, there exist unique
\[
u \in C([0,T]; L^2(\Omega)) \cap H^1([0,T]; H^{-1}(\Omega)) \cap L^2([0,T]; H^1(\Omega)),
\]
\[
\varphi_k \in C([0,T]; L^2(S)) \cap H^1([0,T]; H^{-1}(S)) \cap L^2([0,T]; H^1(S))
\]
satisfying the equations above.

4.2.2 Summary of Proof Results

In our experimental framework, we established a systematic approach for constructing and validating approximate solutions to the governing equations. The methodology comprises three key phases: 1) formulation of approximate solutions through the Galerkin method; 2) convergence analysis via energy estimation techniques; 3) regularity verification and the final convergence proof. The experiment demonstrates that AIM first proves a key energy estimate through rigorous mathematical derivation and inequality techniques. The obtained energy estimate is then used to prove the uniqueness of the solution. After this, AIM constructs the space of approximate solutions, proving the existence and convergence of the approximate solutions. Finally, the above lemma conclusions are integrated to prove the core theorem of our problem.

The analysis of AIM's theorem demonstration reveals both methodological strengths and areas requiring refinement. As evidenced by the proof trajectory, the system establishes a well-structured logical framework and appropriately applies mathematical induction with rigorous bounding techniques. Crucially, AIM consistently identifies valid mathematical propositions and preserves the integrity of critical intermediate steps throughout the derivation process. At the same time, there are some shortcomings and problems in AIM. On the one hand, the details of the proof are not clear enough. On the other hand, there are also errors in the intermediate proof
process.

4.2.3 Detailed Analysis of Selected Proofs

We explain some specific results of AIM.

**A priori energy estimate for the coupled system** Let $u$ and $\varphi_k$ ($k = 1,\dots,m$) satisfy the system (2.10) with $u_0 \in L^2(\Omega)$ and $\varphi_k(0) = 0$. Then there exists a constant $C > 0$ depending on $T, \beta, \alpha_k, d_k$, but independent of $u_0$, such that
\[
\sup_{t\in[0,T]} \|u(t)\|_{L^2(\Omega)}^2 + \int_0^T \|\nabla u(t)\|_{L^2(\Omega)}^2\,dt + \int_0^T \|u(t)\|_{L^2(S)}^2\,dt
+ \sum_{k=1}^m \left( \sup_{t\in[0,T]} \|\varphi_k(t)\|_{L^2(S)}^2 + \int_0^T \|\nabla_S \varphi_k(t)\|_{L^2(S)}^2\,dt \right) \le C\|u_0\|_{L^2(\Omega)}^2.
\]
At the beginning of the proof, AIM gives a critical uniform energy estimate. This is a very important intermediate conclusion and a key step in solving this problem.

**Revised Proof of the a priori estimate:** Define the total energy functional:
\[
E(t) = \frac{1}{2}\|u(t)\|_{L^2(\Omega)}^2 + \sum_{k=1}^m \frac{\alpha_k^2 d_k}{2\beta'}\|\varphi_k(t)\|_{L^2(S)}^2, \qquad \beta' = \beta + \sum_{k=1}^m \alpha_k.
\]
[Correct] The agent considers the energy method and shows the correct form of the energy functional.

**Energy estimate for $u$:** Multiply the $u$-equation by $u$ and integrate over $\Omega$:
\[
\frac{1}{2}\frac{d}{dt}\|u\|_{L^2(\Omega)}^2 + \|\nabla u\|_{L^2(\Omega)}^2 + \beta'\|u\|_{L^2(S)}^2 = \sum_{k=1}^m \alpha_k d_k \int_S u\varphi_k\,dS.
\]
Apply Young's inequality to the right-hand side:
\[
\sum_{k=1}^m \alpha_k d_k \int_S u\varphi_k\,dS \le \frac{\beta'}{2}\|u\|_{L^2(S)}^2 + \sum_{k=1}^m \frac{\alpha_k^2 d_k^2}{2\beta'}\|\varphi_k\|_{L^2(S)}^2.
\]
Subtract the Young bound and rearrange:
\[
\frac{1}{2}\frac{d}{dt}\|u\|_{L^2(\Omega)}^2 + \|\nabla u\|_{L^2(\Omega)}^2 + \frac{\beta'}{2}\|u\|_{L^2(S)}^2 \le \sum_{k=1}^m \frac{\alpha_k^2 d_k^2}{2\beta'}\|\varphi_k\|_{L^2(S)}^2.
\]
[Correct] Here, the agent obtains the accurate result with the energy method and the application of inequality techniques.

**Energy estimate for $\varphi_k$:** Multiply the $\varphi_k$-equation by $\frac{\alpha_k^2 d_k}{\beta'}\varphi_k$ and integrate:
\[
\frac{\alpha_k^2 d_k}{2\beta'}\frac{d}{dt}\|\varphi_k\|_{L^2(S)}^2 + \frac{\alpha_k^2 d_k}{\beta'}\|\nabla_S \varphi_k\|_{L^2(S)}^2 + \frac{\alpha_k^2 d_k^2}{2\beta'}\|\varphi_k\|_{L^2(S)}^2 \le \frac{\alpha_k^2}{2\beta'}\|u\|_{L^2(S)}^2.
\]
Summing over $k$, the right-hand side becomes
\[
\sum_{k=1}^m \frac{\alpha_k^2}{2\beta'}\|u\|_{L^2(S)}^2 \le \sum_{k=1}^m \frac{\alpha_k}{2}\|u\|_{L^2(S)}^2 \le \frac{\beta'-\beta}{2}\|u\|_{L^2(S)}^2.
\]
[Correct] Similarly, the same approach is applied for the auxiliary functions.

Here is a partial proof process for the uniform energy estimate. AIM uses the energy estimation method and combines inequality techniques to gradually deduce the estimates of the solution and the auxiliary functions.
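For completeness, here is one way the two energy estimates combine into the claimed bound (our sketch; the quoted proof leaves this step implicit). Summing the $u$-estimate and the $\varphi_k$-estimates over $k$, the $\|\varphi_k\|_{L^2(S)}^2$ terms on the two sides cancel, and since $(\beta' - \beta)/2 \le \beta'/2$ the boundary term is absorbed, leaving
\[
\frac{d}{dt}E(t) + \|\nabla u\|_{L^2(\Omega)}^2 + \frac{\beta}{2}\|u\|_{L^2(S)}^2 + \sum_{k=1}^m \frac{\alpha_k^2 d_k}{\beta'}\|\nabla_S\varphi_k\|_{L^2(S)}^2 \le 0.
\]
Hence $E(t) \le E(0) = \frac{1}{2}\|u_0\|_{L^2(\Omega)}^2$ for all $t \in [0,T]$, and integrating in time also bounds the dissipation terms, which yields the a priori estimate with a suitable constant $C$.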
The entire derivation process is correct and rigorous, and the steps are sufficient. This is a very clear proof process, which meets our requirements and expectations for a mathematical theoretical proof.

**Step 1: Construct $H^1$-Conforming Subspaces** Let $\{w_j\} \subset H^1(\Omega)$ and $\{\theta_{k,l}\} \subset H^1(S)$ be eigenfunctions of the Laplace and Laplace-Beltrami operators $-\Delta_\Omega$ and $-\Delta_S$ with Neumann/Dirichlet boundary conditions (as appropriate). These form orthonormal bases in $L^2(\Omega)$ and $L^2(S)$, respectively, and are dense in $H^1(\Omega)$ and $H^1(S)$. Define $V_N = \mathrm{span}\{w_1,\dots,w_N\}$ and $W_{k,N} = \mathrm{span}\{\theta_{k,1},\dots,\theta_{k,N}\}$.

[Correct] The agent applies the Galerkin method and constructs the function spaces.

**Step 2: Formulate the Projected Equations** For test functions $v \in V_N$ and $\psi \in W_{k,N}$, the Galerkin approximations satisfy the projected weak form:
\[
\int_\Omega \partial_t u^N v\,dx + \int_\Omega \nabla u^N \cdot \nabla v\,dx + \int_\Omega \rho u^N v\,dx + \beta\int_S u^N v\,ds + \sum_{k=1}^m \alpha_k \int_S (\partial_t \varphi_k^N - \Delta_S \varphi_k^N) v\,ds = 0,
\]
\[
\int_S \partial_t \varphi_k^N \psi\,ds + \int_S \nabla_S \varphi_k^N \cdot \nabla_S \psi\,ds + d_k \int_S \varphi_k^N \psi\,ds = \int_S u^N \psi\,ds.
\]
**Step 3: Solve the ODE System** Substituting $u^N = \sum_{j=1}^N a_j(t) w_j$ and $\varphi_k^N = \sum_{l=1}^N b_{k,l}(t)\theta_{k,l}$ into the projected equations yields a linear ODE system for $a_j(t)$ and $b_{k,l}(t)$. The coefficients are smooth (as the eigenfunctions are $C^\infty$), ensuring continuity. By the Cauchy-Lipschitz theorem, unique solutions exist on $[0,T]$.

[Correct] The agent obtains the ODE system satisfied by the approximate solutions. [Vague] It would be better to have a more detailed process.

Here is the proof step of the existence and uniqueness
of the solution to the equation. The agent considers basis functions in the function spaces of the bulk domain and the boundary respectively, and constructs the space in which the approximate solutions live. After that, the agent constructs the form of the approximate solutions and the weak form of the equations they satisfy. It also writes out the ODE system of the approximate solutions, and uses the Cauchy-Lipschitz theorem to complete the proof of the existence and uniqueness of the approximate solutions. This is a clear and accurate proof idea and mathematical process.

**Step 6: Passage to the Limit** For fixed $v \in V_{N_0}$ and $\psi \in W_{k,N_0}$, the projected equations hold for all $N \ge N_0$. As $N \to \infty$:
- Weak convergence handles the volume integrals (e.g., $\int \nabla u^N \cdot \nabla v\,dx \to \int \nabla u \cdot \nabla v\,dx$).
- Strong trace convergence ensures $\int_S u^N v\,ds \to \int_S uv\,ds$ and similar boundary terms.
- The density of $\bigcup_N V_N$ in $H^1(\Omega)$ and of $\bigcup_N W_{k,N}$ in $H^1(S)$ extends the equations to all $v \in H^1(\Omega)$, $\psi \in H^1(S)$.
Thus, $(u, \{\varphi_k\})$ satisfies (2.10) in the weak sense. Uniqueness (by lemma) confirms the full sequence converges.

[Vague] Here the regularity argument should include a more detailed derivation to make the proof sufficiently accurate. [Error] The uniqueness should be proved via a difference function rather than by sequence convergence.

The proof of convergence is not completely correct. The weak convergence properties proposed by the agent are correct, but it remains necessary to prove that the limit function is a solution of the original equation, and further analysis and verification of the regularity are also required. Although there are still some proof errors, the proof results given by the agent are already relatively complete and rigorous. The agent gave correct proof steps and an accurate derivation process, and used the correct mathematical methods to obtain this conclusion.
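The Galerkin construction followed in Steps 1-3 can be illustrated on a toy problem. The sketch below is ours: it applies the same recipe (eigenfunction basis, projected ODE system, explicit solution) to the plain heat equation with Neumann conditions on $[0,1]$; the coupled system in the text additionally requires the boundary bases $W_{k,N}$, which are omitted here.

```python
import numpy as np

# Step 1: eigenfunctions of -d^2/dx^2 on [0,1] with Neumann conditions
# (orthonormal in L^2: w_0 = 1, w_j = sqrt(2) cos(j pi x), eigenvalue (j pi)^2).
N = 16
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
modes = [np.ones_like(x)] + [np.sqrt(2.0) * np.cos(j * np.pi * x) for j in range(1, N)]
eigs = np.array([(j * np.pi) ** 2 for j in range(N)])

def l2ip(f, g):
    """Trapezoidal approximation of the L^2(0,1) inner product."""
    h = f * g
    return dx * (h.sum() - 0.5 * (h[0] + h[-1]))

# Steps 2-3: in this basis the projected ODE system a_j' = -lambda_j a_j
# decouples, so the Galerkin solution is a_j(t) = a_j(0) exp(-lambda_j t).
u0 = np.exp(-50.0 * (x - 0.3) ** 2)            # initial datum
a0 = np.array([l2ip(u0, w) for w in modes])    # L^2 projections of u0

def uN(t):
    return sum(a0[j] * np.exp(-eigs[j] * t) * modes[j] for j in range(N))

# The L^2 energy of the Galerkin solution is non-increasing, mirroring
# the role of the a priori energy estimate in the convergence proof.
energy = [l2ip(uN(t), uN(t)) for t in (0.0, 0.05, 0.2)]
print(energy[0] >= energy[1] >= energy[2])     # True
```

In this decoupled toy case the projected ODE system is diagonal; in the coupled problem of the text the coefficients $a_j$ and $b_{k,l}$ are linked through the boundary terms, but the existence argument via the Cauchy-Lipschitz theorem is the same.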
4.3 High Contrast Limit

4.3.1 Problem Description

Given a specific setting of the function space with boundary conditions, we consider the following transmission problem:
\[
\begin{cases}
\mathcal{L}_{\lambda,\mu} u_\epsilon = 0 & \text{in } \Omega\setminus D_\epsilon,\\
\mathcal{L}_{\tilde\lambda,\tilde\mu} u_\epsilon = 0 & \text{in } D_\epsilon,\\
u_\epsilon|_- = u_\epsilon|_+ \ \text{and}\ \frac{\partial u_\epsilon}{\partial \nu(\tilde\lambda,\tilde\mu)}\big|_- = \frac{\partial u_\epsilon}{\partial \nu(\lambda,\mu)}\big|_+ & \text{on } \partial D_\epsilon,\\
\frac{\partial u_\epsilon}{\partial \nu(\lambda,\mu)}\big|_{\partial\Omega} = g \in H^{-\frac{1}{2}}_R(\partial\Omega) \ \text{and}\ u_\epsilon|_{\partial\Omega} \in H^{\frac{1}{2}}_R(\partial\Omega).
\end{cases}
\tag{2}
\]
This problem is equivalent to
\[
\begin{cases}
\mathcal{L}_{\lambda(x),\mu(x)} u_\epsilon = \nabla\cdot[\lambda(x)(\nabla\cdot u_\epsilon)\,\mathrm{Id} + 2\mu(x) D(u_\epsilon)] = 0 & \text{in } \Omega,\\
\frac{\partial u_\epsilon}{\partial \nu_{\lambda,\mu}}\big|_{\partial\Omega} = g \in H^{-\frac{1}{2}}_R(\partial\Omega), \quad u_\epsilon|_{\partial\Omega} \in H^{\frac{1}{2}}_R(\partial\Omega),
\end{cases}
\tag{3}
\]
where $\lambda(x) = \lambda I_{\Omega\setminus D_\epsilon} + \tilde\lambda I_{D_\epsilon}$ and $\mu(x) = \mu I_{\Omega\setminus D_\epsilon} + \tilde\mu I_{D_\epsilon}$. The existence and uniqueness of this problem follow from the ellipticity condition with the help of the Lax-Milgram theorem. Let $\tilde\lambda \to \infty$ with $\tilde\mu$ fixed; then we arrive at the equations for the incompressible inclusion limit. In this case, the transmission problem is a coupled Lamé-Stokes system:
\[
\begin{cases}
\mathcal{L}_{\lambda,\mu} u_\epsilon = 0 & \text{in } \Omega\setminus D_\epsilon,\\
\mathcal{L}_{\tilde\mu}(u_\epsilon, p_\epsilon) = 0 \ \text{and}\ \operatorname{div} u_\epsilon = 0 & \text{in } D_\epsilon,\\
u_\epsilon|_- = u_\epsilon|_+ \ \text{and}\ \frac{\partial (u_\epsilon,p_\epsilon)}{\partial \nu(\infty,\tilde\mu)}\big|_- = \frac{\partial u_\epsilon}{\partial \nu(\lambda,\mu)}\big|_+ & \text{on } \partial D_\epsilon,\\
\frac{\partial u_\epsilon}{\partial \nu(\lambda,\mu)}\big|_{\partial\Omega} = g \in H^{-\frac{1}{2}}_R(\partial\Omega) \ \text{and}\ u_\epsilon|_{\partial\Omega} \in H^{\frac{1}{2}}_R(\partial\Omega),
\end{cases}
\tag{4}
\]
where $\mathcal{L}_{\tilde\mu}(u_\epsilon, p_\epsilon) = \tilde\mu \Delta u_\epsilon + \nabla p_\epsilon$ denotes the Stokes operator with viscosity constant $\tilde\mu$, and $p_\epsilon$ is the pressure field. Its exterior derivative is defined as $\frac{\partial (u,p)}{\partial \nu(\infty,\mu)}\big|_- := pN + 2\mu D(u)N$. We are interested in this Lamé-Stokes system in the high contrast limit and the corresponding error estimate, which was proved by Fu X. and Jing W. [W., 2024]:
\[
\|u_{\mathrm{lim}} - u_\epsilon\| \le \frac{C}{\tilde\lambda}\|g\|_{H^{-\frac{1}{2}}(\partial\Omega)}.
\]

4.3.2 Summary of Proof Results

We use our agent based on two LRMs, DeepSeek-R1 and OpenAI o4-mini, and we obtain different experimental outputs. AIM explores the energy control of multiple functions and derives several key
conclusions by applying the variational method. The agent first establishes energy control mechanisms, through rigorous mathematical derivation and analysis, for both the solution $u_\epsilon$ and the pressure term $p_\epsilon$ of the governing equations. Building on this theoretical foundation, the analysis further develops an error control framework by investigating perturbation propagation mechanisms. Through careful quantification of error accumulation patterns, AIM ultimately derives a global error estimate. Overall, this proof process has uncovered novel and correct conclusions, providing consistent global control over the solutions of the equation, a pleasantly surprising outcome. However, some notable issues can be observed. During intermediate steps, the model evidently could have derived stronger conclusions but prematurely halted its exploration; instead, it re-engaged in intricate reasoning to re-attain the desired conclusions through redundant complexity.

When we use OpenAI o4-mini, the output is different. The proof process of this model is more open and comprehensive. The agent takes into account specific spatial characteristics and homogenization properties. It is evident that the agent engaged in more extensive thinking and exploration, yielding additional intermediate conclusions. However, there are still instances of imprecision and errors that emerged during the process.

4.3.3 Detailed Analysis of Selected Proofs Generated by DeepSeek-R1

We first explain some results of AIM based on DeepSeek-R1.

**Step 3: Corrected Test with $v = \nabla\phi$** Substituting $v$ into the variational equation:
\[
-\tilde\lambda \int_{D_\epsilon} (\operatorname{div} u_\epsilon)^2\,dx + 2\int_\Omega \mu(x)\, Du_\epsilon : D(\nabla\phi)\,dx = \int_{\partial\Omega} g\cdot\nabla\phi\,ds.
\]
Rearranging and taking absolute values:
\[
\tilde\lambda \|\operatorname{div} u_\epsilon\|_{L^2(D_\epsilon)}^2 \le \left| 2\int_\Omega \mu\, Du_\epsilon : D(\nabla\phi)\,dx \right| + \left| \int_{\partial\Omega} g\cdot\nabla\phi\,ds \right|.
\]
Using Hölder inequalities and trace duality:
$$\left|\,2\int_\Omega \mu\,Du_\epsilon : D(\nabla\phi)\,dx\,\right| \le C\|g\|_{H^{-1/2}}\|\operatorname{div}u_\epsilon\|_{L^2(D_\epsilon)},\qquad
\left|\,\int_{\partial\Omega} g\cdot\nabla\phi\,ds\,\right| \le C\|g\|_{H^{-1/2}}\|\operatorname{div}u_\epsilon\|_{L^2(D_\epsilon)}.$$

**Step 4: Final Estimate** Combining these bounds:
$$\tilde\lambda\,\|\operatorname{div}u_\epsilon\|^2_{L^2(D_\epsilon)} \le 2C\|g\|_{H^{-1/2}}\|\operatorname{div}u_\epsilon\|_{L^2(D_\epsilon)}.$$
Dividing by $\|\operatorname{div}u_\epsilon\|_{L^2(D_\epsilon)}$ yields:
$$\|\operatorname{div}u_\epsilon\|_{L^2(D_\epsilon)} \le \frac{2C}{\tilde\lambda}\,\|g\|_{H^{-1/2}},$$
completing the proof with the modified constant $C$.

[Correct] The agent uses inequality techniques to derive the correct key conclusions from the variational equations. It investigated the properties of the divergence, leveraging the framework of variational equations and rigorous mathematical derivation to establish a uniform upper bound on $\operatorname{div}u_\epsilon$. This derivation process is correct and rigorous.

**Step 1: Variational Formulations** The original problem (9) has the weak form: find $u_\epsilon \in H^1(\Omega)$ such that for all $v \in H^1(\Omega)$,
$$\int_\Omega \big[\lambda(x)\operatorname{div}u_\epsilon \operatorname{div}v + 2\mu(x)D(u_\epsilon):D(v)\big]\,dx = \int_{\partial\Omega} g\cdot v\,ds.$$

[Correct] Consider the weak form of this equation.

The limit problem (10) is a saddle-point problem: find $(u_{\mathrm{lim}}, p_{\mathrm{lim}}) \in H^1(\Omega)\times L^2(D_\epsilon)$ such that for all $v \in H^1(\Omega)$ and $q \in L^2(D_\epsilon)$,
$$\begin{cases}
\int_{\Omega\setminus D_\epsilon}\big[\lambda\operatorname{div}u_{\mathrm{lim}}\operatorname{div}v + 2\mu D(u_{\mathrm{lim}}):D(v)\big]\,dx + \int_{D_\epsilon}\big[2\tilde\mu D(u_{\mathrm{lim}}):D(v) - p_{\mathrm{lim}}\operatorname{div}v\big]\,dx = \int_{\partial\Omega} g\cdot v\,ds,\\
\int_{D_\epsilon} q\operatorname{div}u_{\mathrm{lim}}\,dx = 0.
\end{cases}$$

[Vague] The limit problem is considered here, but the detailed derivation of the existence and properties of the limit function is not shown.

**Step 2: Subtract the Weak Forms** Let $w_\epsilon = u_\epsilon - u_{\mathrm{lim}}$. Subtract the weak form of the limit problem from the original problem, tested with $v = w_\epsilon$:
$$\int_{\Omega\setminus D_\epsilon}\big(\lambda|\operatorname{div}w_\epsilon|^2 + 2\mu|D(w_\epsilon)|^2\big)\,dx + \int_{D_\epsilon}\big[\tilde\lambda|\operatorname{div}u_\epsilon|^2 + 2\tilde\mu|D(w_\epsilon)|^2\big]\,dx = \int_{D_\epsilon} p_{\mathrm{lim}}\operatorname{div}w_\epsilon\,dx.$$
Since $\operatorname{div}u_{\mathrm{lim}} = 0$ in $D_\epsilon$, substitute $\operatorname{div}w_\epsilon = \operatorname{div}u_\epsilon$ in $D_\epsilon$:
$$\int_{D_\epsilon} p_{\mathrm{lim}}\operatorname{div}w_\epsilon\,dx = \int_{D_\epsilon} p_{\mathrm{lim}}\operatorname{div}u_\epsilon\,dx.$$

[Correct] From the above weak form and the setting of the problem, the model correctly obtains this key conclusion.

This proof process demonstrates the agent's rigorous and precise mathematical reasoning capabilities. By analyzing the properties of limit functions, the agent exhibits a holistic understanding
of the problem. However, the critical steps lack detailed derivations to substantiate the reasoning, which diminishes the transparency of the logical progression.

Under the given assumptions, the difference $w_\epsilon = u_{\mathrm{lim}} - u_\epsilon$ between the solutions of the limit problem (10) and the original problem (9) satisfies the homogenized error estimate in the energy norm weighted by the Lamé parameters:
$$\left(\int_\Omega \big(\lambda(x)|\operatorname{div}w_\epsilon|^2 + 2\mu(x)|D(w_\epsilon)|^2\big)\,dx\right)^{1/2} \le \frac{C}{\sqrt{\tilde\lambda}}\,\|g\|_{H^{-\frac12}(\partial\Omega)},$$
where $C$ is independent of $\tilde\lambda$. This implies that the error in the energy norm decays proportionally to $\tilde\lambda^{-\frac12}$.

correctness: True

The agent here incorporates a global control of errors, representing a novel conclusion in error control. Though the derived bound lacks optimal strength, it reflects the agent's valid mathematical exploration in establishing this conclusion.

4.3.4 Detailed Analysis of Selected Proofs Generated by OpenAI o4-mini

The following results are some outputs based on OpenAI o4-mini.

Step 1. Weak formulation on $W\times Q$. Since $u_{\mathrm{ext}}$ and $u_{\mathrm{int}}$ must agree on $\partial D_\epsilon$, we assemble them into a single global field $u \in H^1(\Omega)$ which on $\Omega_{\mathrm{ext}}$ carries the elasticity operator and on $D_\epsilon$ carries the incompressible Stokes operator. The space $W$ enforces both the $H^1$-regularity across $\partial D_\epsilon$ (by matching traces) and the rigid-motion orthogonality on $\partial\Omega$. The pressure $p$ lives only in $D_\epsilon$ and has zero mean. The variational equations, obtained by integration by parts in $\Omega_{\mathrm{ext}}$ and in each inclusion, read:
$$(a_{\mathrm{ext}} + a_{\mathrm{int}})(u,v) + b(v,p) = L(v) \ \text{for all } v\in W,\qquad b(u,q) = 0 \ \text{for all } q\in Q,$$
and incorporate the continuity of tractions and of displacements automatically.

[Correct] Here, the agent actively constructs the specific proof design.

Step 2. Continuity of the bilinear forms. By Cauchy–Schwarz and standard trace estimates, there is $M > 0$ independent of $\epsilon$ such that
$$|a_{\mathrm{ext}}(u,v)| + |a_{\mathrm{int}}(u,v)| + |b(v,p)| \le M\|u\|_{H^1(\Omega)}\|v\|_{H^1(\Omega)} + M\|p\|_{L^2(D_\epsilon)}\|v\|_{H^1(\Omega)}.$$

Step 3. Coercivity of $A := a_{\mathrm{ext}} + a_{\mathrm{int}}$ on $\operatorname{Ker}B$. The kernel of $B$ is $K = \{v\in W : \operatorname{div}v = 0 \text{ in } D_\epsilon\}$.
For any $v\in K$ we have
$$A(v,v) = \int_{\Omega_{\mathrm{ext}}}\big[2\mu|D(v)|^2 + \lambda|\operatorname{div}v|^2\big]\,dx + \int_{D_\epsilon}2\tilde\mu|D(v)|^2\,dx.$$
Define $\mu_{\min} = \min\{\mu,\tilde\mu\} > 0$. Since $v\in H^1(\Omega)$ and $\Omega$ is Lipschitz, the global Korn inequality on $\Omega$ gives
$$\|v\|^2_{H^1(\Omega)} \le C_K\|D(v)\|^2_{L^2(\Omega)}$$
with $C_K$ independent of $\epsilon$. Hence
$$A(v,v) \ge 2\mu_{\min}\|D(v)\|^2_{L^2(\Omega)} \ge \frac{2\mu_{\min}}{C_K}\,\|v\|^2_{H^1(\Omega)}.$$
Thus $A$ is coercive on $K$ with constant $\alpha = 2\mu_{\min}/C_K$, independent of $\epsilon$.

[Correct] The derivation process presented herein is correct.

Step 4. Inf–sup stability of $B$ on $W\times Q$. We must show
$$\inf_{q\in Q}\sup_{v\in W}\frac{b(v,q)}{\|v\|_{H^1(\Omega)}\|q\|_{L^2(D_\epsilon)}} \ge \gamma > 0$$
with $\gamma$ independent of $\epsilon$. But $b(v,q) = -\int_{D_\epsilon} q\operatorname{div}v$, and $\operatorname{div}$ acts only in $D_\epsilon$. Restricting to test functions $v$ supported in $D_\epsilon$, we recover the classical Stokes inf–sup condition on the perforated region $D_\epsilon$. Since $D_\epsilon$ is a union of $\epsilon$-scaled, well-separated Lipschitz inclusions, the local inf–sup constant on each inclusion is scale-invariant and the global constant $\gamma$ (the minimum over finitely many identical blocks) is independent of $\epsilon$.

Step 5. Conclusion by Babuška–Brezzi. By Steps 2–4 the pair of forms $(A,B)$ on $W\times Q$ satisfies the continuity, coercivity on $\operatorname{Ker}B$, and inf–sup conditions with constants $M,\alpha,\gamma$ independent of $\epsilon$. Hence there is a unique solution $(u,p)\in W\times Q$, and the a priori bound
$$\|u\|_{H^1(\Omega)} + \|p\|_{L^2(D_\epsilon)} \le C\sup_{(v,q)\ne 0}\frac{L(v) + 0}{\|v\|_{H^1(\Omega)} + \|q\|_{L^2(D_\epsilon)}} = C\sup_{v\in W}\frac{\int_{\partial\Omega} g\cdot v\,dS}{\|v\|_{H^1(\Omega)}} = C\|g\|_{H^{-1/2}(\partial\Omega)}$$
holds with $C$ depending only on $d$, $\lambda$, $\mu$, $\tilde\mu$, the Lipschitz character of $\Omega$, and the reference inclusion $\omega$, but not on $\epsilon$. This completes the proof.

[Vague] More detailed explanation is required here for the derivation and the application of the theorems.

The agent here provides clear
proof steps and precise key conclusions. Although certain derivational details are omitted, the underlying reasoning framework and final conclusions remain rigorously accurate and align with the problem's requirements. We can also see that the agent verifies the hypotheses of the theorems it applies. This demonstrates that the agent has achieved a commendable level of understanding and exploratory capability in mathematical theory.

Next consider the difference $w := u_\epsilon - u_{\mathrm{lim}}$ on $D$. Then $w \in H^1(D)$ solves the compressible-elastic Neumann problem
$$\tilde\mu\Delta w + (\tilde\lambda + \tilde\mu)\nabla(\operatorname{div}w) = 0 \ \text{in } D,\qquad \sigma_{(\tilde\lambda,\tilde\mu)}(w)N = J_\epsilon \ \text{on } \partial D,$$
with $\int_{\partial D} J_\epsilon\cdot r = 0$ for every rigid motion $r$. By Lemma 29 (Boundary-Trace Rate-Improvement via First-Order Asymptotics) on the $C^{1,1}$ domain $D$, there is $C_4$ depending only on $D$, $d$, $\tilde\mu$ and the $C^{1,1}$-character of $\partial D$, such that
$$\|w\|_{H^{1/2}(\partial D)} \le C_4\,\tilde\lambda^{-1}\|J_\epsilon\|_{H^{-1/2}(\partial D)}.$$
Combining with (6) yields
$$\|u_\epsilon - u_{\mathrm{lim}}\|_{H^{1/2}(\partial D)} = \|w\|_{H^{1/2}(\partial D)} \le C\tilde\lambda^{-1}\|g\|_{H^{-1/2}(\partial\Omega)},$$
with $C = C_3C_4$ independent of $\epsilon$, $\tilde\lambda$. Since there are finitely many inclusions, the same bound holds uniformly over each $\partial D^i_\epsilon$. This completes the proof.

[Correct] The conclusion presented herein is methodologically derived through an extension of previously established procedural frameworks.

Based on previously established conclusions, the AI agent conducts an extensive analysis of error control mechanisms. The derivation process and the modifications to the equations demonstrate methodological correctness. However, the absence of explicit derivational specifics warrants further elaboration.

4.4 Homogenization Problem

4.4.1 Problem Description

The setting of this problem is the same as for the high contrast limit problem, but we concentrate on another open problem. We still consider the transmission problem
$$\begin{cases}
\mathcal{L}_{\lambda,\mu}u_\epsilon = 0 & \text{in } \Omega\setminus D_\epsilon,\\
\mathcal{L}_{\tilde\mu}(u_\epsilon,p_\epsilon) = 0 \ \text{and}\ \operatorname{div}u_\epsilon = 0 & \text{in } D_\epsilon,\\
u_\epsilon|_- = u_\epsilon|_+ \ \text{and}\ \frac{\partial(u_\epsilon,p_\epsilon)}{\partial\nu}(\infty,\tilde\mu)\big|_- = \frac{\partial u_\epsilon}{\partial\nu}(\lambda,\mu)\big|_+ & \text{on } \partial D_\epsilon,\\
\frac{\partial u_\epsilon}{\partial\nu}(\lambda,\mu)\big|_{\partial\Omega} = g \in H^{-\frac12}_R(\partial\Omega) \ \text{and}\ u_\epsilon|_{\partial\Omega} \in H^{\frac12}_R(\partial\Omega),
\end{cases} \tag{5}$$
where $\mathcal{L}_{\tilde\mu}(u_\epsilon,p_\epsilon) = \tilde\mu\Delta u_\epsilon + \nabla p_\epsilon$ denotes the Stokes operator with viscosity constant $\tilde\mu$ and $p_\epsilon$ is the pressure field.
Its exterior derivative is defined as $\frac{\partial(u,p)}{\partial\nu}(\infty,\mu)\big|_- := pN + 2\mu D(u)N$. Now we let $\epsilon\to 0$, so that the scale of the cells tends to zero. We ask which homogenized equation the limit solution $u_{\mathrm{lim}}$ satisfies, and whether the estimate between the original solution and the limit solution has the form $C\epsilon^\alpha\|g\|_{H^{-\frac12}(\partial\Omega)}$ for some $\alpha\in(0,1)$. We use AIM based on OpenAI o4-mini to explore this open problem.

4.4.2 Summary of Proof Results

The exploration process outlined by the agent comprises two components: the homogenized equations and error control. Guided by the proposed two-scale expansion, AIM conducts an analysis of the homogenized equations. Specifically, it rigorously establishes the convergence of the expanded solutions and explicitly characterizes the mathematical form of the homogenized equations. Building upon the homogenized equations, the agent subsequently analyzes and derives error estimates through systematic reasoning. Throughout the experimental results, we observe that the agent has yielded valid conclusions while simultaneously revealing certain issues. In addressing this open-ended problem, the agent's methodological approach overlooks crucial specific properties of the physical setting, leading to improper steps in the proofs of some lemmas. Moreover, likely due to constraints inherent to the invoked model, the reasoning process remains deficient in derivational details and overrelies on conclusory statements, undermining the reproducibility and robustness of the arguments.

4.4.3 Detailed Analysis of Selected Proofs

1. (Cellwise Korn with uniform constant.) Since $Y_f = Y\setminus\omega$ is a bounded Lipschitz set, the classical Korn inequality on $Y_f$ gives, for all $v\in H^1(Y_f;\mathbb{R}^d)$,
$$\|v\|_{H^1(Y_f)} \le C_0\big(\|v\|_{L^2(Y_f)} + \|\operatorname{sym}\nabla v\|_{L^2(Y_f)}\big),$$
with $C_0$ depending only on
$Y_f$. By the scaling $x = \epsilon y$ one deduces that on each $\epsilon$-cell $\epsilon(Y_f + k)$, for all $v\in H^1(\epsilon(Y_f+k))$,
$$\|v\|_{H^1(\epsilon(Y_f+k))} \le C_0\big(\|v\|_{L^2(\epsilon(Y_f+k))} + \|\operatorname{sym}\nabla v\|_{L^2(\epsilon(Y_f+k))}\big).$$

[Vague] Here the agent considers the Korn inequality, but the derivation process is not detailed enough.

2. (Partition of unity subordinate to the periodic tiling.) Let $\{\psi_k\}_{k\in K_\epsilon}$ be a smooth partition of unity on $\Omega$:

• each $\psi_k \in C_c^\infty(\epsilon(k+Y))$,
• $0 \le \psi_k \le 1$, $\sum_k \psi_k \equiv 1$ on $\Omega$ except in a boundary strip of width $\le 2\epsilon$,
• and $|\nabla\psi_k| \le C/\epsilon$.

We also choose a finite set of boundary functions $\{\psi_b\}$ supported in the $\epsilon$-neighborhood of $\partial\Omega$ so that $\sum_k\psi_k + \sum_b\psi_b \equiv 1$ on $\Omega$. By standard Korn–Poincaré estimates on the fixed boundary patches (of size $O(\epsilon)$), one shows likewise, for each boundary patch with cut-off $\psi_b$,
$$\|\psi_b u\|_{H^1} \le C\big(\|u\|_{L^2(\operatorname{supp}\psi_b)} + \|\operatorname{sym}\nabla u\|_{L^2(\operatorname{supp}\psi_b)}\big).$$

Here, the agent has constructed a uniform estimate of the solution in the boundary region. It is evident that the mathematical theories of Korn's inequality and partition of unity were leveraged. It can also be observed that the agent's handling of spatial configurations implicitly employs established mathematical frameworks. However, the detailed derivations underpinning these steps remain unwritten.

2. Extraction of limits. By the reflexivity of the three spaces, there exist, along a subsequence,
$$U^*(x,y) \in L^2(\Omega; H^1(Y_f)),\qquad W(x,y) \in L^2(\Omega\times Y_f;\mathbb{R}^{d\times d})$$
such that as $\epsilon\to 0$:
$$T_\epsilon u_\epsilon \rightharpoonup U^* \ \text{weakly in } L^2(\Omega;H^1(Y_f)),\qquad \nabla_x T_\epsilon u_\epsilon \rightharpoonup \nabla_x U^*,\qquad \nabla_y T_\epsilon u_\epsilon \rightharpoonup W \ \text{weakly in } L^2(\Omega\times Y_f).$$

[Correct] The convergence in the function spaces is accurate.

3. Independence of $U^*$ on $y$ and strong $L^2$-convergence. Because $\|\nabla_y T_\epsilon u_\epsilon\|_{L^2(\Omega\times Y_f)}$ remains bounded but, multiplied by $\epsilon$, vanishes in the original variables, one shows by the unfolding-to-two-scale characterization that $\partial_{y_i}U^* = 0$ in the distributional sense. Hence $U^*(x,y) =: u_0(x)$ is independent of $y$, and $u_0 \in H^1(\Omega)$. Moreover, the standard compact embedding $H^1 \hookrightarrow L^2$ in $x$ implies $T_\epsilon u_\epsilon \to u_0$ strongly in $L^2(\Omega\times Y_f)$, which by the unfolding-to-two-scale equivalence yields $u_\epsilon \to u_0$ strongly in $L^2(\Omega)$.

[Vague] The process of the two-scale expansion here is not detailed enough.
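For the reader's reference, and as an assumption about the agent's notation (which it never restates), $T_\epsilon$ in the excerpts above is the standard periodic unfolding operator:

```latex
% Periodic unfolding operator (standard definition, stated here as
% background; [x/\epsilon]_Y denotes the lattice integer part of
% x/\epsilon with respect to the reference cell Y):
\[
  (T_\epsilon u)(x, y) \;=\; u\!\left(\epsilon\Big[\tfrac{x}{\epsilon}\Big]_Y + \epsilon y\right),
  \qquad x \in \Omega,\ y \in Y.
\]
```

Its key property, $\nabla_y(T_\epsilon u) = \epsilon\,T_\epsilon(\nabla u)$, is what makes the "bounded but multiplied by $\epsilon$" argument in item 3 force $\partial_{y_i}U^* = 0$.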
The proof of the convergence of the function lacks details. This line of mathematical reasoning is fundamentally valid. The analysis here focuses on the convergence properties of the expanded solutions, but the agent fails to provide critical details regarding the structure and control of the expansion functions themselves. Specifically, there is a lack of norm-controlled estimates.

Then as $\epsilon\to 0$ one has, up to a subsequence,
$$u_\epsilon \rightharpoonup u_0 \ \text{in } H^1(\Omega),\qquad u_\epsilon \to u_0 \ \text{in } L^2(\Omega),\qquad p_\epsilon \xrightarrow{\text{two-scale}} p_1(x,y) \ \text{in } \Omega\times Y_i,$$
where $u_0 \in H^1_R(\Omega;\mathbb{R}^d)$ is the unique solution of the homogenized Lamé system
$$-\operatorname{div}_x\big[C^{\mathrm{hom}}D_x(u_0)\big] = 0 \ \text{in } \Omega,\qquad C^{\mathrm{hom}}D_x(u_0)\cdot n = g \ \text{on } \partial\Omega,$$
and the effective fourth-order tensor $C^{\mathrm{hom}}$ is given by the following periodic cell transmission problem: for each fixed symmetric $E\in\mathbb{R}^{d\times d}_{\mathrm{sym}}$ find $(\chi^E_e, \chi^E_i, \pi^E) \in H^1_{\mathrm{per}}(Y_e;\mathbb{R}^d)\times H^1_{\mathrm{per}}(Y_i;\mathbb{R}^d)\times L^2_{\mathrm{per}}(Y_i)/\mathbb{R}$ solving

(1) in the elastic cell $Y_e$: $-\operatorname{div}_y[C(E + D_y\chi^E_e)] = 0$, $\operatorname{div}_y(Ey + \chi^E_e) = 0$;
(2) in the fluid cell $Y_i$: $-\mu\Delta_y\chi^E_i + \nabla_y\pi^E = 0$, $\operatorname{div}_y(Ey + \chi^E_i) = 0$;
(3) on the material interface $\Gamma = \partial Y_i$: $\chi^E_e = \chi^E_i$, $[C(E + D_y\chi^E_e)\cdot n] = [2\mu\,D_y\chi^E_i - \pi^E I]\cdot n$;
(4) $(Ey + \chi^E)$ is $Y$-periodic, with zero-mean normalization on $\chi^E_e$ and $\chi^E_i$.

Then
$$C^{\mathrm{hom}}E : E = \int_{Y_e}(E + D_y\chi^E_e) : C(E + D_y\chi^E_e)\,dy + \int_{Y_i}2\mu\,D_y\chi^E_i : D_y\chi^E_i\,dy.$$

This represents the agent's conclusions on the cell problem. We observe that the agent correctly adopts the homogenization framework to derive the key homogenized equations. While the conclusions here are broadly reasonable, localized issues can be identified, likely due to the agent's insufficient clarity regarding the physical configuration of individual unit cells (e.g., boundary interactions, material periodicity assumptions). These
ambiguities in microscale definitions may propagate into approximations at the macroscale.

4. Homogenized tensor and macroscopic equation. By choosing the test function $\theta = \chi^E$ in the cell system, one derives the energy representation
$$C^{\mathrm{hom}}E : E = \int_{Y_e}(E + D_y\chi^E_e) : C(E + D_y\chi^E_e)\,dy + \int_{Y_i}2\mu\,D_y\chi^E_i : D_y\chi^E_i\,dy.$$
On the other hand, testing the original problem against $\varphi(x)$ alone and passing to the limit shows
$$\int_\Omega C^{\mathrm{hom}}D_x u_0 : D_x\varphi = \langle g,\varphi\rangle \qquad \forall\varphi\in H^1_R(\Omega;\mathbb{R}^d),$$
hence $u_0$ solves
$$-\operatorname{div}_x[C^{\mathrm{hom}}D_x u_0] = 0 \ \text{in } \Omega,\qquad C^{\mathrm{hom}}D_x u_0\cdot n = g \ \text{on } \partial\Omega.$$

[Vague] Here the agent obtains the homogenized equation, but whether this conclusion is correct requires further verification.

5. Boundary-layer estimates. By constructing $\epsilon$-tubular cut-off functions near $\partial D_\epsilon$ and $\partial\Omega$ (supported in $O(\epsilon)$-neighborhoods) and using the uniform $H^1$-bound together with trace and Poincaré inequalities, one shows that all interface and boundary-layer remainders vanish as $o(1)$ when $\epsilon\to 0$ (see Lemma A.5 for details).

[Vague] The conditions for using the theorem need to be verified.

6. Error estimate and strong convergence. Define the two-scale corrector
$$u^{\mathrm{app}}_\epsilon(x) = u_0(x) + \epsilon\chi^{D_xu_0(x)}(x/\epsilon) \ \text{in } \Omega_\epsilon,$$
extended in each $D_\epsilon$ by $Ey + \chi^E_i(y)$. A Céa-lemma-type argument in the perforated domain (again relying on uniform Korn and Stokes inf–sup bounds, Lemmas A.1–A.2) yields, for some $\alpha\in(0,1)$,
$$\|u_\epsilon - u^{\mathrm{app}}_\epsilon\|_{H^1(\Omega_\epsilon)} \le C\epsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}.$$
From this one deduces $u_\epsilon \to u_0$ strongly in $L^2(\Omega)$ (and even in $H^1(\Omega_\epsilon)$ up to the corrector). This completes a fully rigorous derivation of the homogenized elastic limit and the explicit formula for $C^{\mathrm{hom}}$.

Based on the homogenized equations, the agent conducts an analysis of the solutions. Although critical derivation steps are omitted, the following can be inferred. The agent employs a two-scale expansion structure consistent with homogenization theory, and techniques like energy estimates, Korn's inequality, and weak convergence arguments are implicitly utilized to justify convergence.
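For reference, the "Céa-lemma-type argument" invoked in Step 6 refers, in its classical form (stated here as background, not as part of the agent's output), to the quasi-optimality estimate:

```latex
% Cea's lemma: a(.,.) continuous with constant M and coercive with
% constant alpha on V; u solves a(u,v) = L(v) for all v in V; u_h is
% the Galerkin solution in a subspace V_h of V. Then
\[
  \|u - u_h\|_{V} \;\le\; \frac{M}{\alpha}\, \inf_{v_h \in V_h} \|u - v_h\|_{V}.
\]
```

In the corrector argument the role of the approximating family is played by the two-scale ansatz, so the lemma reduces the error estimate to the approximation quality of the corrector; the $\epsilon$-independence of $M/\alpha$ is exactly what the uniform Korn and inf–sup bounds are meant to supply.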
The final results align with theoretical expectations, suggesting the agent captures the essential mathematical machinery. However, the absence of explicit error bounds for truncation terms and insufficient discussion of the regularity of the expansion functions weaken the analytical completeness. The conclusions remain plausible but require rigorous verification under specific functional space assumptions.

Step 2: Construction of the two-scale approximation. Let $\chi^E(y) = (\chi^E_e, \chi^E_i)(y)$ be the periodic cell correctors of Lemma 3 for each macroscopic strain $E$. Define the two-scale ansatz
$$U_\epsilon(x) = u_0(x) + \epsilon\chi^{D_xu_0(x)}(x/\epsilon) \ \text{in } \Omega_\epsilon,$$
extended by zero on $D_\epsilon$. A standard calculation (see Lemma 4) shows that $U_\epsilon$ nearly solves the $\epsilon$-problem up to an $O(\epsilon^\alpha)$ remainder in the $H^{-1}$-norm.

Step 3: Energy-error estimate. Set $w_\epsilon = u_\epsilon - U_\epsilon$. Subtracting the weak forms of the $\epsilon$-problem for $u_\epsilon$ and the one nearly satisfied by $U_\epsilon$, and testing with $w_\epsilon$ itself, yields, via coercivity and the uniform Korn bound,
$$\|w_\epsilon\|^2_{H^1(\Omega_\epsilon)} \le C\epsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}\|w_\epsilon\|_{H^1(\Omega_\epsilon)}.$$
Dividing through by $\|w_\epsilon\|_{H^1(\Omega_\epsilon)}$, we conclude
$$\|u_\epsilon - U_\epsilon\|_{H^1(\Omega_\epsilon)} \le C\epsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}.$$

Step 4: Passage to the limit. Since $U_\epsilon \to u_0$ strongly in $H^1(\Omega_\epsilon)$ (by periodicity and $u_0\in H^2$), the triangle inequality immediately gives $u_\epsilon \to u_0$ with the same rate. This completes the proof of both the homogenized limit and the $O(\epsilon^\alpha)$ rate.

Finally, the agent consolidates the aforementioned lemmas to formulate complete proof steps and presents the key conclusions. However, it does not obtain the actual value of $\alpha$. It can reasonably be asserted that this reasoning framework adheres to normative mathematical logic and delineates potential avenues for addressing the problem.

5 Discussion

AIM is a simple agent framework that aims at autonomous mathematical research and discovery. With
proper exploration and verification mechanisms, AIM has already achieved impressive results on some open problems, and the same framework can be easily adapted to various research topics. However, our system still has many shortcomings in terms of reliability: the proofs generated by AIM cannot be accepted directly without manual review by an expert mathematician. We also observed several drawbacks in our AIM system during the experiments. These drawbacks may be alleviated through better prompt engineering or other techniques, but they might ultimately stem from the inherent limitations of current LLMs and will require substantial further research to be fully resolved.

•Repetitive Exploration. Our agent often explores the same direction repeatedly and proposes a series of similar conjectures. Although in our experiments it can ultimately find the correct proof or obtain some useful findings, repetitive exploration still increases the cost while reducing efficiency and the potential upper bound of performance. It is crucial to encourage the model to engage in more diverse exploration.

•Deficiency in Comprehending Certain Mathematical Configurations. Despite significant advancements in certain modalities, large models still demonstrate suboptimal comprehension of spatial configurations. Mathematical theories often involve intricate physical setups and precisely defined boundary conditions. Weak interpretative capability regarding these configurations results in analyses that lack clarity and mathematical precision.

•Lack of Intermediate Steps. Current LRMs do not tend to produce complete and rigorous proofs. This might not be a critical problem, but it still impacts proof quality. This may stem from their training methodology, as reinforcement learning primarily emphasizes the correctness of the final answer rather than the intermediate steps of the proof.
This phenomenon can be mitigated through multiple PRV and refine processes, but this may not be an elegant solution and consumes a large amount of resources.

AIM has achieved some basic but important mathematical abilities for mathematical research, and it can serve multiple functions in the study of mathematical theory. Our agent invokes foundational theorems within appropriate contexts and applies them precisely. It also demonstrates the ability to recognize and interpret human-provided guidance and prompts, enabling it to execute multiple proof procedures as instructed. In the experiments, AIM executes logically coherent proof steps (e.g., energy estimates, asymptotic expansions) with formal consistency. It then generates and proposes hypotheses or conclusions aligned with empirical patterns and theoretical constraints. Our agent currently demonstrates the following functionalities in mathematical research:

•Proof Assistant. AIM can successfully complete proofs for various mathematical theories. It provides critical intermediate conclusions and core procedures, though certain derivation details may require supplementation. Mathematical researchers can utilize the agent's outputs to refine specific proof components, thereby accomplishing comprehensive theoretical work.

•Verify and Prove Ideas. In mathematical theory research, researchers often generate multiple proof concepts and conjectures. These potential approaches can be systematically input into AIM for automated proof-attempt generation. Researchers can then evaluate the agent's output trajectories to determine the feasibility of the validation methodologies, thereby refining their subsequent attempts to advance the theoretical construction.

•Exploration for Open Problems. AIM can provide assistance in solving open mathematical problems.
Our agent can directly attempt to solve these open problems and provide core conclusions, key processes, and reasonable inferences. Mathematicians can analyze the outputs obtained, try more methods and ideas, and continue to use our agent to further explore the conclusions of the open problem.

6 Future Works

Our AIM is still at a very early stage. In addition to addressing the aforementioned limitations, we have also identified the following potential directions for future work.

•Memory Reflection Mechanism. A proper memory reflection or summarization mechanism might help stabilize the inference process and enable longer exploration trajectories. It can also help retain more mathematical techniques for our agent, enhancing its mathematical capacity.

•Retrieval Augmentation. For some frontier open problems, our AIM agent needs to retrieve relevant papers to provide background context for them.

•Model Capability Optimization. Faced with the limitations of the model itself, we can consider reducing or even eliminating their negative impact on mathematical theory problems. In particular, addressing the issues of insufficient detail in the model's output process and unclear understanding of problem settings would let our agent work out mathematical theory more rigorously and accurately.

•Multi-Agent Collaboration. We propose enhancing mathematical problem-solving capabilities by establishing a multi-agent collaborative framework, for instance, constructing dedicated agents specializing in evaluating the validity, rigor, and logical coherence of mathematical proofs. Such a structure would enable specialized agents to perform their designated roles during mathematical theorem exploration, achieving superior collaborative outcomes through division of labor.

•Reinforcement Learning Methods. Integrating reinforcement learning methodologies into the proof derivation process could significantly enhance system capabilities.
We propose employing reinforcement learning techniques to fine-tune the models, thereby improving their deductive reasoning capacity in specific mathematical domains. This optimization would ultimately improve the overall performance of our intelligent agent system.

Acknowledgments

We would like to thank Zihan Tang, Zejin Lin, and Zhongwei Sun for their valuable assistance during the early exploration phase. Special thanks go to Associate Professor Wenjia Jing from Qiuzhen College, Tsinghua University, whose guidance in mathematics greatly supported this work.

Author Contributions

Among the first three authors, Yanxing Huang was responsible for the system design and implementation. Yuanhang Liu reinvented some ideas after the implementation of the agent. Yuanhang Liu and Yanqiao Wang contributed to the early-stage experimental exploration. In the later stages, Yanqiao Wang reviewed the quantum algorithm problem, and Yuanhang Liu reviewed three major experimental results: the absorbing boundary condition problem, the high contrast limit problem, and the homogenization problem. Peng Li and Yang Liu jointly supervised the project, and Yang Liu coined the term "AI Mathematician".

References

D. An, J.-P. Liu, and L. Lin. Linear combination of Hamiltonian simulation for nonunitary dynamics with optimal state preparation cost. Phys. Rev. Lett., 131:150603, Oct 2023. doi: 10.1103/PhysRevLett.131.150603. URL https://link.aps.org/doi/10.1103/PhysRevLett.131.150603.

K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, C. Hesse, and J. Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168.

DeepSeek-AI, D. Guo,
D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi, X. Zhang, X. Yu, Y . Wu, Z. F. Wu, Z. Gou, Z. Shao, Z. Li, Z. Gao, A. Liu, B. Xue, B. Wang, B. Wu, B. Feng, C. Lu, C. Zhao, C. Deng, C. Zhang, C. Ruan, D. Dai, D. Chen, D. Ji, E. Li, F. Lin, F. Dai, F. Luo, G. Hao, G. Chen, G. Li, H. Zhang, H. Bao, H. Xu, H. Wang, H. Ding, H. Xin, H. Gao, H. Qu, H. Li, J. Guo, J. Li, J. Wang, J. Chen, J. Yuan, J. Qiu, J. Li, J. L. Cai, J. Ni, J. Liang, J. Chen, K. Dong, K. Hu, K. Gao, K. Guan, K. Huang, K. Yu, L. Wang, L. Zhang, L. Zhao, L. Wang, L. Zhang, L. Xu, L. Xia, M. Zhang, M. Zhang, M. Tang, M. Li, M. Wang, M. Li, N. Tian, P. Huang, P. Zhang, Q. Wang, Q. Chen, Q. Du, R. Ge, R. Zhang, R. Pan, R. Wang, R. J. Chen, R. L. Jin, R. Chen, S. Lu, S. Zhou, S. Chen, S. Ye, S. Wang, S. Yu, S. Zhou, S. Pan, S. S. Li, S. Zhou, S. Wu, S. Ye, T. Yun, T. Pei, T. Sun, T. Wang, W. Zeng, W. Zhao, W. Liu, W. Liang, W. Gao, W. Yu, W. Zhang, W. L. Xiao, W. An, X. Liu, X. Wang, X. Chen, X. Nie, X. Cheng, X. Liu, X. Xie, X. Liu, X. Yang, X. Li, X. Su, X. Lin, X. Q. Li, X. Jin, X. Shen, X. Chen, X. Sun, X. Wang, X. Song, X. Zhou, X. Wang, X. Shan, Y . K. Li, Y . Q. Wang, Y . X. Wei, Y . Zhang, Y . Xu, Y . Li, Y . Zhao, Y . Sun, Y . Wang, Y . Yu, Y . Zhang, Y . Shi, Y . Xiong, Y . He, Y . Piao, Y . Wang, Y . Tan, Y . Ma, Y . Liu, Y . Guo, Y . Ou, Y . Wang, Y . Gong, Y . Zou, Y . He, Y . Xiong, Y . Luo, Y . You, Y . Liu, Y . Zhou, Y . X. Zhu, Y . Xu, Y . Huang, Y . Li, Y . Zheng, Y . Zhu, Y . Ma, Y . Tang, Y . Zha, Y . Yan, Z. Z. Ren, Z. Ren, Z. Sha, Z. Fu, Z. Xu, Z. Xie, Z. Zhang, Z. Hao, Z. Ma, Z. Yan, Z. Wu, Z. Gu, Z. Zhu, Z. Liu, Z. Li, Z. Xie, Z. Song, Z. Pan, Z. Huang, Z. Xu, Z. Zhang, and Z. Zhang. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948 . DeepSeek-AI, A. Liu, B. Feng, B. Xue, B. Wang, B. Wu, C. Lu, C. Zhao, C. Deng, C. Zhang, C. Ruan, D. Dai, D. Guo, D. Yang, D. Chen, D. Ji, E. 
Li, F. Lin, F. Dai, F. Luo, G. Hao, G. Chen, G. Li, H. Zhang, H. Bao,
H. Xu, H. Wang, H. Zhang, H. Ding, H. Xin, H. Gao, H. Li, H. Qu, J. L. Cai, J. Liang, J. Guo, J. Ni, J. Li, J. Wang, J. Chen, J. Chen, J. Yuan, J. Qiu, J. Li, J. Song, K. Dong, K. Hu, K. Gao, K. Guan, K. Huang, K. Yu, L. Wang, L. Zhang, L. Xu, L. Xia, L. Zhao, L. Wang, L. Zhang, M. Li, M. Wang, M. Zhang, M. Zhang, M. Tang, M. Li, N. Tian, P. Huang, P. Wang, P. Zhang, Q. Wang, Q. Zhu, Q. Chen, Q. Du, R. J. Chen, R. L. Jin, R. Ge, R. Zhang, R. Pan, R. Wang, R. Xu, R. Zhang, R. Chen, S. S. Li, S. Lu, S. Zhou, S. Chen, S. Wu, S. Ye, S. Ye, S. Ma, S. Wang, S. Zhou, S. Yu, S. Zhou, S. Pan, T. Wang, T. Yun, T. Pei, T. Sun, W. L. Xiao, W. Zeng, W. Zhao, W. An, W. Liu, W. Liang, W. Gao, W. Yu, W. Zhang, X. Q. Li, X. Jin, X. Wang, X. Bi, X. Liu, X. Wang, X. Shen, X. Chen, X. Zhang, X. Chen, X. Nie, X. Sun, X. Wang, X. Cheng, X. Liu, X. Xie, X. Liu, X. Yu, X. Song, X. Shan, X. Zhou, X. Yang, X. Li, X. Su, X. Lin, Y . K. Li, Y . Q. Wang, Y . X. Wei, Y . X. Zhu, Y . Zhang, Y . Xu, Y . Xu, Y . Huang, Y . Li, Y . Zhao, Y . Sun, Y . Li, Y . Wang, Y . Yu, Y . Zheng, Y . Zhang, Y . Shi, Y . Xiong, Y . He, Y . Tang, Y . Piao, Y . Wang, Y . Tan, Y . Ma, Y . Liu, Y . Guo, Y . Wu, Y . Ou, Y . Zhu, Y . Wang, Y . Gong, Y . Zou, Y . He, Y . Zha, Y . Xiong, Y . Ma, Y . Yan, Y . Luo, Y . You, Y . Liu, Y . Zhou, Z. F. Wu, Z. Z. Ren, Z. Ren, Z. Sha, Z. Fu, Z. Xu, Z. Huang, Z. Zhang, Z. Xie, Z. Zhang, Z. Hao, Z. Gou, Z. Ma, Z. Yan, Z. Shao, Z. Xu, Z. Wu, 21 Z. Zhang, Z. Li, Z. Gu, Z. Zhu, Z. Liu, Z. Li, Z. Xie, Z. Song, Z. Gao, and Z. Pan. DeepSeek-V3 technical report, 2025. URL http://arxiv.org/abs/2412.19437 . Google DeepMind. AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms, 2025. URL https://deepmind.google/discover/blog/ alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/ . Accessed: 2025-05-20. S. J. Absorbing boundary conditions for reaction-diffusion equations. IMA J. Appl.Math , 68:167–184, 2003. 
doi: https://doi.org/10.1093/imamat/68.2.167. Z. Liu, K. Liu, Y . Zhu, X. Lei, Z. Yang, Z. Zhang, P. Li, and Y . Liu. AIGS: Generating science from AI-powered automated falsification, 2024. URL https://arxiv.org/abs/2411.11910 . C. Lu, C. Lu, R. T. Lange, J. Foerster, J. Clune, and D. Ha. The AI Scientist: Towards fully automated open-ended scientific discovery, 2024. URL https://arxiv.org/abs/2408.06292 . A. Newell, J. C. Shaw, and H. A. Simon.
Empirical explorations with the logic theory machine: A case study in heuristics. In Proceedings of the Western Joint Computer Conference , volume 15, pages 218–239. ACM, 1957. doi: 10.1145/1455567.1455605. OpenAI. OpenAI o3 and o4-mini system card. https://cdn.openai.com/pdf/ 2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf , 2025. Accessed: 2025-05-20. OpenAI, A. Jaech, A. Kalai, A. Lerer, A. Richardson, A. El-Kishky, A. Low, A. Helyar, A. Madry, A. Beutel, A. Carney, A. Iftimie, A. Karpenko, A. T. Passos, A. Neitz, A. Prokofiev, A. Wei, A. Tam, A. Bennett, A. Kumar, A. Saraiva, A. Vallone, A. Duberstein, A. Kondrich, A. Mishchenko, A. Applebaum, A. Jiang, A. Nair, B. Zoph, B. Ghorbani, B. Rossen, B. Sokolowsky, B. Barak, B. McGrew, B. Minaiev, B. Hao, B. Baker, B. Houghton, B. McKinzie, B. Eastman, C. Lugaresi, C. Bassin, C. Hudson, C. M. Li, C. de Bourcy, C. V oss, C. Shen, C. Zhang, C. Koch, C. Orsinger, C. Hesse, C. Fischer, C. Chan, D. Roberts, D. Kappler, D. Levy, D. Selsam, D. Dohan, D. Farhi, D. Mely, D. Robinson, D. Tsipras, D. Li, D. Oprica, E. Freeman, E. Zhang, E. Wong, E. Proehl, E. Cheung, E. Mitchell, E. Wallace, E. Ritter, E. Mays, F. Wang, F. P. Such, F. Raso, F. Leoni, F. Tsimpourlas, F. Song, F. von Lohmann, F. Sulit, G. Salmon, G. Parascandolo, G. Chabot, G. Zhao, G. Brockman, G. Leclerc, H. Salman, H. Bao, H. Sheng, H. Andrin, H. Bagherinezhad, H. Ren, H. Lightman, H. W. Chung, I. Kivlichan, I. O’Connell, I. Osband, I. C. Gilaberte, I. Akkaya, I. Kostrikov, I. Sutskever, I. Kofman, J. Pachocki, J. Lennon, J. Wei, J. Harb, J. Twore, J. Feng, J. Yu, J. Weng, J. Tang, J. Yu, J. Q. Candela, J. Palermo, J. Parish, J. Heidecke, J. Hallman, J. Rizzo, J. Gordon, J. Uesato, J. Ward, J. Huizinga, J. Wang, K. Chen, K. Xiao, K. Singhal, K. Nguyen, K. Cobbe, K. Shi, K. Wood, K. Rimbach, K. Gu-Lemberg, K. Liu, K. Lu, K. Stone, K. Yu, L. Ahmad, L. Yang, L. Liu, L. Maksin, L. Ho, L. Fedus, L. Weng, L. Li, L. McCallum, L. Held, L. Kuhn, L. 
Kondraciuk, L. Kaiser, L. Metz, M. Boyd, M. Trebacz, M. Joglekar, M. Chen, M. Tintor, M. Meyer, M. Jones, M. Kaufer, M. Schwarzer, M. Shah, M. Yatbaz, M. Y . Guan, M. Xu, M. Yan, M. Glaese, M. Chen, M. Lampe, M. Malek, M. Wang, M. Fradin, M. McClay, M. Pavlov, M. Wang, M. Wang, M. Murati, M. Bavarian, M. Rohaninejad, N. McAleese, N. Chowdhury, N. Chowdhury, N. Ryder, N. Tezak, N. Brown, O. Nachum, O. Boiko, O. Murk, O. Watkins, P. Chao, P. Ashbourne, P. Izmailov, P. Zhokhov, R. Dias, R. Arora, R. Lin, R. G. Lopes, R. Gaon, R. Miyara, R. Leike, R. Hwang, R. Garg, R. Brown, R. James, R. Shu, R. Cheu, R. Greene, S. Jain, S. Altman, S. Toizer, S. Toyer, S. Miserendino, S. Agarwal, S. Hernandez, S. Baker, S. McKinney, S. Yan, S. Zhao, S. Hu, S. Santurkar, S. R. Chaudhuri, S. Zhang, S. Fu, S. Papay, S. Lin, S. Balaji, S. Sanjeev, S. Sidor, T. Broda, A. Clark,
T. Wang, T. Gordon, T. Sanders, T. Patwardhan, T. Sottiaux, T. Degry, T. Dimson, T. Zheng, T. Garipov, T. Stasi, T. Bansal, T. Creech, T. Peterson, T. Eloundou, V . Qi, V . Kosaraju, V . Monaco, V . Pong, V . Fomenko, W. Zheng, W. Zhou, W. McCabe, W. Zaremba, Y . Dubois, Y . Lu, Y . Chen, Y . Cha, Y . Bai, Y . He, Y . Zhang, Y . Wang, Z. Shao, and Z. Li. OpenAI o1 system card, 2024. URL https://arxiv.org/abs/2412.16720 . R. Sapkota, K. I. Roumeliotis, and M. Karkee. AI agents vs. agentic AI: A conceptual taxonomy, applications and challenges, 2025. URL https://arxiv.org/abs/2505.10468 . Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, X. Bi, H. Zhang, M. Zhang, Y . K. Li, Y . Wu, and D. Guo. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models, 2024. URL https://arxiv.org/abs/2402.03300 . 22 H. Sun, Z. Qiao, J. Guo, X. Fan, Y . Hou, Y . Jiang, P. Xie, Y . Zhang, F. Huang, and J. Zhou. ZeroSearch: Incentivize the search capability of LLMs without searching, 2025. URL https: //arxiv.org/abs/2505.04588 . K. Team, A. Du, B. Gao, B. Xing, C. Jiang, C. Chen, C. Li, C. Xiao, C. Du, C. Liao, C. Tang, C. Wang, D. Zhang, E. Yuan, E. Lu, F. Tang, F. Sung, G. Wei, G. Lai, H. Guo, H. Zhu, H. Ding, H. Hu, H. Yang, H. Zhang, H. Yao, H. Zhao, H. Lu, H. Li, H. Yu, H. Gao, H. Zheng, H. Yuan, J. Chen, J. Guo, J. Su, J. Wang, J. Zhao, J. Zhang, J. Liu, J. Yan, J. Wu, L. Shi, L. Ye, L. Yu, M. Dong, N. Zhang, N. Ma, Q. Pan, Q. Gong, S. Liu, S. Ma, S. Wei, S. Cao, S. Huang, T. Jiang, W. Gao, W. Xiong, W. He, W. Huang, W. Wu, W. He, X. Wei, X. Jia, X. Wu, X. Xu, X. Zu, X. Zhou, X. Pan, Y . Charles, Y . Li, Y . Hu, Y . Liu, Y . Chen, Y . Wang, Y . Liu, Y . Qin, Y . Liu, Y . Yang, Y . Bao, Y . Du, Y . Wu, Y . Wang, Z. Zhou, Z. Wang, Z. Li, Z. Zhu, Z. Zhang, Z. Wang, Z. Yang, Z. Huang, Z. Huang, Z. Xu, and Z. Yang. Kimi k1.5: Scaling reinforcement learning with LLMs, 2025. URL https://arxiv.org/abs/2501.12599 . F. X. J. W. 
Uniform convergence for linear elastostatic systems with periodic high contrast inclusions. Partial Differ. Equ. Appl. , 5(2), 2024. doi: https://doi.org/10.1007/s42985-024-00271-5. L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, J. Tang, X. Chen, Y . Lin, W. X. Zhao, Z. Wei, and J. Wen. A survey on large language model based autonomous agents. Frontiers of Computer Science , 18(6), Mar. 2024. ISSN 2095-2236. doi: 10.1007/s11704-024-40231-1. URL http://dx.doi.org/10.1007/s11704-024-40231-1 . Y . Wang, Q. Yang, Z. Zeng, L. Ren, L. Liu, B. Peng, H. Cheng, X. He, K. Wang, J. Gao, W. Chen, S. Wang, S. S. Du, and Y
https://arxiv.org/abs/2505.22451v1
. Shen. Reinforcement learning for reasoning in large language models with one training example, 2025. URL https://arxiv.org/abs/2504.20571 . Y . Zuo, K. Zhang, L. Sheng, S. Qu, G. Cui, X. Zhu, H. Li, Y . Zhang, X. Long, E. Hua, B. Qi, Y . Sun, Z. Ma, L. Yuan, N. Ding, and B. Zhou. TTRL: Test-time reinforcement learning, 2025. URL https://arxiv.org/abs/2504.16084 . 23 The materials presented in the appendix are the experimental results of our four theoretical mathemat- ical problems. All content was generated by AIM, with only minor formatting adjustments made by us. "type" in the agent’s output refers to the categorization of the following content; " correctness " indicates the agent’s judgment on the validity of the aforementioned conclusions; and " comment " contains evaluative remarks regarding the proof process. A Proof of Quantum Algorithm Problem Generated by AIM type: lemma Lemma 1. The Black-Scholes-Merton (BSM) partial differential equation (PDE) can be transformed into the standard heat equation via a logarithmic substitution. Specifically, under the transformation x= ln( S/K) + (r−σ2/2)(T−t),τ=σ2(T−t)/2, and V(S, t) =e−r(T−t)U(x, τ), the BSM PDE reduces to: ∂U ∂τ=∂2U ∂x2. Discretizing this PDE spatially yields a matrix equation ∂τ|U⟩=−B|U⟩, where B=−∂2 x. The operator Bis positive semi-definite and can be directly interpreted as B=L, where L=−∂2 x. While the original Hermitian decomposition B=L+iH(with H= 0) trivializes the LCHS lemma, the dissipative nature of Bpermits alternative quantum simulation methods tailored to parabolic equations. correctness : True Proof. **Step 1: Transform BSM to the Heat Equation** Start with the BSM PDE: ∂V ∂t+1 2σ2S2∂2V ∂S2+rS∂V ∂S−rV= 0. Apply the substitution V(S, t) =e−r(T−t)U(x, τ),x= ln( S/K) + (r−σ2/2)(T−t), and τ= σ2(T−t)/2. Compute derivatives rigorously: - ∂tV=e−r(T−t) rU−σ2 2∂τU−(r−σ2/2)∂xU , -∂SV=e−r(T−t)∂xU·1 S, -∂2 SV=e−r(T−t) ∂2 xU−∂xU ·1 S2. 
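As an editorial illustration (not part of AIM's output), the chain-rule computations above can be spot-checked numerically: we pick a particular heat-equation solution $U(x,\tau) = e^{2x+4\tau}$ (so $U_\tau = U_{xx}$), map it back through the stated substitution, and verify by finite differences that the resulting $V$ satisfies the BSM PDE. All parameter values below are arbitrary.

```python
import math

def V(S, t, K=1.0, r=0.05, sigma=0.2, T=1.0):
    # Image of the heat-equation solution U(x, tau) = exp(2x + 4*tau)
    # under the substitution from Lemma 1.
    x = math.log(S / K) + (r - sigma**2 / 2) * (T - t)
    tau = sigma**2 * (T - t) / 2
    return math.exp(-r * (T - t)) * math.exp(2 * x + 4 * tau)

def bsm_residual(S, t, h=1e-4, r=0.05, sigma=0.2):
    # Central finite differences for V_t, V_S, V_SS.
    Vt = (V(S, t + h) - V(S, t - h)) / (2 * h)
    VS = (V(S + h, t) - V(S - h, t)) / (2 * h)
    VSS = (V(S + h, t) - 2 * V(S, t) + V(S - h, t)) / h**2
    # BSM operator applied to V; should vanish up to discretization noise.
    return Vt + 0.5 * sigma**2 * S**2 * VSS + r * S * VS - r * V(S, t)

print(abs(bsm_residual(1.3, 0.4)))  # small (finite-difference noise only)
```

The residual is at the level of finite-difference error, consistent with Step 1 of the proof.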
Substitute into the BSM PDE:
\[ e^{-r(T-t)}\left( rU - \frac{\sigma^2}{2}\partial_\tau U - (r - \sigma^2/2)\partial_x U + \frac{1}{2}\sigma^2(\partial_x^2 U - \partial_x U) + r\partial_x U - rU \right) = 0. \]
Multiply through by $e^{r(T-t)}$, cancel terms, and simplify:
\[ -\frac{\sigma^2}{2}\partial_\tau U + \frac{\sigma^2}{2}\partial_x^2 U = 0 \;\Longrightarrow\; \partial_\tau U = \partial_x^2 U. \]
**Step 2: Spatial Discretization** Discretize $x$ on a grid with spacing $\Delta x$. Approximate $\partial_x^2 U$ via finite differences:
\[ \partial_x^2 U \approx \frac{U_{j+1} - 2U_j + U_{j-1}}{\Delta x^2}. \]
Let $D_2$ be the discretized second-derivative matrix. The operator becomes $B = -D_2$.
**Step 3: Positive Semi-Definite Operator** The matrix $B = -D_2$ is positive semi-definite because $D_2$ (a centered second-derivative discretization) is negative semi-definite. This aligns with the dissipative nature of the heat equation.
**Step 4: Implications for Quantum Simulation** While the original decomposition $B = L + iH$ (with $H = -ic\,\partial_x$) is invalid due to the absence of a first-order term, the operator $B$ itself is suitable for quantum simulation methods designed for parabolic equations. Specifically, the time evolution $\partial_\tau |U\rangle = -B|U\rangle$ can be simulated using Trotterization or other dissipative quantum algorithms, bypassing the need for the LCHS lemma.

comment: None

type: lemma

[Correct] The variable substitution, the equation transformation, and the discretization of the spatial variable are all correct. AIM also finds that the reduced PDE trivializes the LCHS lemma, which is right.

Lemma 2. The integral representation in the LCHS lemma for the BSM model can be approximated with precision $\epsilon$ using $O(1/\epsilon^2)$ terms through an adaptive discretization of the $k$-integral, leveraging the rapid decay of the Cauchy kernel $\eta(k) = \frac{1}{\pi(1+k^2)}$.

correctness: True

Proof. **Truncation Error
Analysis**: The integral $I = \int_{-\infty}^{\infty} \eta(k)\,u(t,k)\,dk$ is truncated to $[-K, K]$. The tail error is bounded by:
\[ \int_{|k|>K} \eta(k)\,dk = \frac{2}{\pi}\int_K^{\infty} \frac{1}{1+k^2}\,dk = \frac{2}{\pi}\left( \frac{\pi}{2} - \arctan(K) \right) \approx \frac{1}{\pi K} \quad \text{for } K \gg 1. \]
Setting $\frac{1}{\pi K} \le \epsilon/2$ gives $K \ge \frac{2}{\pi\epsilon}$. Thus, $K = O(1/\epsilon)$.
**Adaptive Discretization**: 1. **Central Interval $[-K,K]$**: The entire truncated domain $[-K,K]$ must be discretized. To bound the discretization error, use a uniform grid with spacing $\Delta k = \Theta(\epsilon)$. The number of points is $N = O(K/\Delta k) = O(1/\epsilon^2)$. The discretization error using the trapezoidal rule scales as $O(K\Delta k^2 \cdot \max|\partial_k^2(\eta(k)u(t,k))|)$. Since $\eta(k)$ and $u(t,k)$ are smooth, $\max|\partial_k^2(\eta(k)u(t,k))|$ is bounded over $[-K,K]$. The total error becomes $O(K\Delta k^2) = O\left(\frac{1}{\epsilon}\cdot\epsilon^2\right) = O(\epsilon)$.
**Outer Interval Contribution**: The truncated integral already includes all significant contributions, as the tail beyond $K = O(1/\epsilon)$ contributes at most $\epsilon/2$. The central interval $[-K,K]$ is approximated with $O(1/\epsilon^2)$ points, ensuring the total discretization error is $O(\epsilon)$. Additional adaptive methods in outer intervals (e.g., geometrically spaced grids) would only reduce logarithmic factors, which are asymptotically dominated by $O(1/\epsilon^2)$.
**Total Terms**: The uniform grid over $[-K,K]$ requires $O(1/\epsilon^2)$ terms. While adaptive grids in outer regions could marginally reduce constants, the dominant term remains $O(1/\epsilon^2)$.
**Final Error**: Truncation error $O(\epsilon/2)$ plus discretization error $O(\epsilon/2)$ gives $O(\epsilon)$. Thus, $O(1/\epsilon^2)$ terms suffice for an $\epsilon$-precision approximation.

comment: None

type: lemma

[Correct] The integral is truncated and discretized correctly.

Lemma 3. The solution operator for the heat equation derived from the BSM model, $U(\tau) = e^{\tau B}$, can be approximated via the LCHS lemma as a linear combination of Hamiltonian simulations. Specifically, the integral representation $\int_{\mathbb{R}} \eta(k) e^{-ikL\tau}\,dk$ (where $L = -B$) can be discretized into $M = O(1/\epsilon^2)$ terms with quadrature weights $\eta(k_j)\Delta k_j$. Each term $e^{-ik_j L\tau}$ is a Hamiltonian simulation of $L$ scaled by $k_j\tau$.
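The truncation step in the proof above can be checked directly. A minimal sketch (ours, with illustrative values of $K$): the exact tail mass of the Cauchy kernel outside $[-K,K]$ is $1 - \frac{2}{\pi}\arctan(K)$, and $K$ times the tail settles near a constant, confirming the $O(1/K)$ decay behind the choice $K = O(1/\epsilon)$.

```python
import math

def tail_mass(K):
    # Exact mass of eta(k) = 1/(pi*(1+k^2)) outside [-K, K].
    return 1.0 - (2.0 / math.pi) * math.atan(K)

for K in (10.0, 100.0, 1000.0):
    print(K, tail_mass(K), K * tail_mass(K))
# K * tail_mass(K) approaches a constant, i.e. the tail decays like 1/K,
# so a tail budget of eps/2 is met with K = O(1/eps).
```

This is purely a decay-rate check; the lemma's $O(1/\epsilon^2)$ term count then follows from the grid spacing argument in the proof.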
The total query complexity to the $L$-oracle is $\widetilde{O}\left( \|L\| T/\epsilon^3 \right)$, where $T = \frac{2\tau}{\sigma^2}$, and the gate complexity scales polynomially with $\|L\|$, $T$, $1/\epsilon$, and logarithmic factors.

correctness: True

Proof. **Step 1: Integral Representation via LCHS** From Lemma-0, the solution operator for the heat equation is $U(\tau) = e^{\tau B}$. Let $L = -B$, a positive semi-definite operator. By the LCHS lemma with $H = 0$, we express:
\[ e^{\tau B} = \int_{\mathbb{R}} \eta(k) e^{-ikL\tau}\,dk, \qquad \eta(k) = \frac{1}{\pi(1+k^2)}. \]
**Step 2: Quadrature Discretization** Lemma-1 guarantees the integral can be discretized into $M = O(1/\epsilon^2)$ terms with weights $\eta(k_j)\Delta k$, yielding:
\[ e^{\tau B} \approx \sum_{j=1}^{M} \eta(k_j)\,\Delta k\cdot e^{-ik_j L\tau}. \]
The truncation and discretization error is $O(\epsilon)$ due to the $O(1/k^2)$ decay of $\eta(k)$.
[Correct] The LCHS lemma and the memory lemmas are properly utilized for the problem transformation.
**Step 3: Hamiltonian Simulation Costs** Each term $e^{-ik_j L\tau}$ requires simulating $L$ for time $t_j = k_j\tau$. Using optimal Hamiltonian simulation algorithms (e.g., quantum signal processing or Taylor series) with query complexity $O(\|L\| t_j + \log(1/\epsilon_{\mathrm{sim}}))$, set $\epsilon_{\mathrm{sim}} = \epsilon/M$ to bound the total simulation error by $\epsilon$. Since $k_j = O(1/\epsilon)$ dominates the quadrature, $t_j = O(\tau/\epsilon)$. The per-term complexity becomes:
\[ O(\|L\|\tau/\epsilon + \log(M/\epsilon)) = \widetilde{O}(\|L\|\tau/\epsilon). \]
**Step 4: Total Query Complexity** Summing over $M = O(1/\epsilon^2)$ terms, the total queries to the $L$-oracle are:
\[ \widetilde{O}\left( M\cdot\frac{\|L\|\tau}{\epsilon} \right) = \widetilde{O}\left( \frac{\|L\|\tau}{\epsilon^3} \right). \]
Expressing in terms of $T = \frac{2\tau}{\sigma^2}$ and absorbing constants: $\widetilde{O}\left( \frac{\|L\| T}{\epsilon^3} \right)$.
**Step 5: Ancilla and Gate Complexity** The LCU framework requires $\log M = O(\log(1/\epsilon))$ ancilla qubits. Gate complexity inherits the $\widetilde{O}(M)$ scaling from the superposition state, augmented by $O(\mathrm{poly}(\log(\|L\| T/\epsilon)))$ factors from the simulation subroutines. The dominant terms remain polynomial in $\|L\|$, $T$, and $1/\epsilon$, with polylogarithmic corrections.
[Error] There are some mistakes in the complexity computation.
And the calculation process lacks detail.

comment: None

type: lemma

Lemma 4. The solution operator $e^{-\tau B}$ for the heat equation derived from the BSM model can be approximated with error $\epsilon$ using a quantum algorithm that implements a discretized version of the LCHS lemma. This algorithm requires $M = O\left(\frac{1}{\epsilon^2}\right)$ terms in the quadrature approximation, and the total query complexity to the $B$-oracle (encoding the discretized Laplacian) is $\widetilde{O}\left( \frac{\|B\| T}{\epsilon^3} \right)$, where $T = \frac{2\tau}{\sigma^2}$. The gate complexity and ancilla qubits scale polynomially with $\log(1/\epsilon)$, $\|B\|$, and $T$.

correctness: True

Proof. **Step 1: Integral Representation via LCHS** From the LCHS lemma, for a time-independent $A = B$ (Hermitian and positive semi-definite), the operator $e^{-\tau B}$ is expressed as:
\[ e^{-\tau B} = \int_{\mathbb{R}} \eta(k) e^{-ikB\tau}\,dk, \qquad \eta(k) = \frac{1}{\pi(1+k^2)}. \]
This follows by decomposing $A = L + iH$, where $L = B$ and $H = 0$, directly applying the lemma.
**Step 2: Quadrature Discretization** The integral is approximated by a finite sum:
\[ e^{-\tau B} \approx \sum_{j=1}^{M} \eta(k_j)\,\Delta k_j\, e^{-ik_j B\tau}. \]
The error from truncating the integral to $k \in [-k_{\max}, k_{\max}]$ and discretizing is bounded by $\epsilon$. Since $\eta(k)$ decays as $O(1/k^2)$, truncating at $k_{\max} = O(1/\epsilon)$ ensures the tail contributes $O(\epsilon)$. Choosing $M = O\left(\frac{1}{\epsilon^2}\right)$ quadrature points achieves $\Delta k_j = O(\epsilon)$, yielding total discretization error $O(\epsilon)$.
**Step 3: Hamiltonian Simulation Cost** Each term $e^{-ik_j B\tau}$ is simulated using a Hamiltonian simulation algorithm (e.g., Trotter-Suzuki or QSP). The cost for each simulation scales as $\widetilde{O}(\|k_j B\tau\|) = \widetilde{O}(k_j\tau\|B\|)$. The dominant contributions come from $k_j = O(1/\epsilon)$, so each simulation costs $\widetilde{O}\left( \frac{\|B\| T}{\epsilon} \right)$ (since $\tau = \sigma^2 T/2$, and $\sigma^2$ is absorbed into constants).
**Step 4: Total Query Complexity** Summing over $M = O(1/\epsilon^2)$ terms, the total query complexity to the $B$-oracle is:
\[ \widetilde{O}\left( M\cdot\frac{\|B\| T}{\epsilon} \right) = \widetilde{O}\left( \frac{\|B\| T}{\epsilon^3} \right). \]
The logarithmic factors from the simulation algorithms are included in the $\widetilde{O}$.
**Step 5: Ancilla Qubits and Gate Complexity** The LCU requires $O(\log M) = O(\log(1/\epsilon))$ ancilla qubits to index the $M$ terms.
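The discretized LCHS representation can be exercised on a toy matrix. A hedged sketch (our own illustration; the grid sizes are arbitrary, and the sum is evaluated in the eigenbasis purely to keep the classical check cheap): $B$ is a small discretized Laplacian, and the truncated quadrature sum $\sum_j \eta(k_j)\Delta k\, e^{-ik_j B\tau}$ is compared against $e^{-\tau B}$.

```python
import numpy as np

n, tau = 8, 0.5
# Discretized -d^2/dx^2 (tridiagonal, positive semi-definite).
B = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
evals, Q = np.linalg.eigh(B)
exact = (Q * np.exp(-tau * evals)) @ Q.T        # e^{-tau B} via the eigenbasis

K, M = 200.0, 200_001                            # truncation and node count
ks = np.linspace(-K, K, M)
dk = ks[1] - ks[0]
eta = 1.0 / (np.pi * (1.0 + ks**2))

# Sum_j eta(k_j) dk e^{-i k_j B tau}, applied eigenvalue-wise.
phases = np.exp(-1j * np.outer(ks, evals) * tau)
diag = (eta[:, None] * phases).sum(axis=0) * dk  # approximates e^{-tau*evals}
lchs = (Q * diag.real) @ Q.T

print(np.abs(lchs - exact).max())  # small; shrinks as K and M grow
```

The agreement improves with larger $K$ (smaller tail) and larger $M$ (finer grid), matching the error budget in Step 2.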
The gate complexity for state preparation (encoding $\eta(k_j)\Delta k_j$) and for the controlled unitaries scales polynomially with $\log(1/\epsilon)$, $\|B\|$, and $T$, consistent with standard LCU methods.

comment: None

type: lemma

Lemma 5. The BSM model PDE can be solved via a quantum algorithm using the LCHS lemma with query complexity $\widetilde{O}\left( \frac{\|B\| T}{\epsilon^3} \right)$ to the $B$-oracle, where $B = -\partial_x^2$ is the discretized positive semi-definite Laplacian, $T$ is the time horizon, and $\epsilon$ is the approximation error. The algorithm approximates the LCHS integral with $M = O(1/\epsilon^2)$ terms, each requiring Hamiltonian simulation of $k_j B$ for time $\tau = \sigma^2 T/2$. The gate complexity and ancilla qubits scale polynomially with $\log(1/\epsilon)$, $\|B\|$, and $T$.

correctness: True

Proof. **Step 1: Transform BSM to Heat Equation** Under the substitution $x = \ln(S/K) + (r - \sigma^2/2)(T-t)$, $\tau = \sigma^2(T-t)/2$, and $V(S,t) = e^{-r(T-t)} U(x,\tau)$, the BSM PDE reduces to the heat equation (Lemma-0): $\frac{\partial U}{\partial\tau} = \frac{\partial^2 U}{\partial x^2}$. Discretizing spatially yields $\partial_\tau|U\rangle = -B|U\rangle$, where $B = -\partial_x^2$. The solution is $|U(\tau)\rangle = e^{-\tau B}|U(0)\rangle$.
**Step 2: Apply LCHS Integral Representation** Using Lemma-2, express $e^{-\tau B}$ as a discretized integral (Lemma-1):
\[ e^{-\tau B} \approx \sum_{j=1}^{M} \eta(k_j)\,\Delta k_j\, e^{-ik_j B\tau}, \]
where $\eta(k) = \frac{1}{\pi(1+k^2)}$, $M = O(1/\epsilon^2)$, and the quadrature error is $O(\epsilon)$.
**Step 3: Adaptive Quadrature and Sum Scaling** The truncation $|k_j| \le K = O(1/\epsilon)$ ensures tail error $O(\epsilon)$. Crucially, adaptive quadrature weights nodes near $k = 0$, where $\eta(k)$ dominates. This ensures $\sum_{j=1}^{M} |k_j| = O(M)$ (Lemma-1), as high-$k$ terms are sparse and contribute minimally to the sum.
**Step 4: Per-Term Simulation Complexity** Each term $e^{-ik_j B\tau}$ simulates $H_j = k_j B$ for time $\tau$. The spectral norm $\|H_j\| = |k_j|\,\|B\| \le O(\|B\|/\epsilon)$. To bound the total simulation error
by $\epsilon$, each term requires precision $\delta = O(\epsilon)$. Using high-order Trotter or QSP, the per-term query complexity is $\widetilde{O}\left( \frac{\|H_j\|\tau}{\delta} \right) = \widetilde{O}\left( \frac{|k_j|\,\|B\|\tau}{\epsilon} \right)$.
**Step 5: Total Query Complexity** Summing over $M$ terms and leveraging $\sum_j |k_j| = O(M)$:
\[ \widetilde{O}\left( \frac{\|B\|\tau}{\epsilon}\cdot\sum_j |k_j| \right) = \widetilde{O}\left( \frac{\|B\|\tau}{\epsilon}\cdot M \right). \]
Substituting $M = O(1/\epsilon^2)$ and $\tau = \sigma^2 T/2$: $\widetilde{O}\left( \frac{\|B\| T}{\epsilon^3} \right)$.
**Step 6: Ancilla Qubits and Gate Complexity** Each simulation requires $O(\log(\|B\|\tau/\epsilon))$ ancillas. The LCU framework adds $O(\log M) = O(\log(1/\epsilon))$ ancillas. Total ancillas: $O\left( \log\frac{\|B\| T}{\epsilon} \right)$. Gate complexity scales as $\mathrm{poly}(\log(1/\epsilon), \|B\|, T)$, inheriting polylog factors from the simulation and LCU steps.

comment: None

type: lemma

Lemma 6. The quantum algorithm for solving the BSM PDE using the LCHS lemma achieves an approximation error $\epsilon$ with query complexity $\widetilde{O}\left( \frac{\|B\| T}{\epsilon^3} \right)$ to the operator oracle encoding $B$, where $B = -\partial_x^2$ is the discretized Laplacian, $T$ is the time horizon, and $\epsilon$ is the error tolerance. The ancilla qubit count is $O\left( \log\frac{1}{\epsilon} \right)$, and the gate complexity scales as $\mathrm{poly}\left( \|B\|, T, \log\frac{1}{\epsilon}, \frac{1}{\epsilon} \right)$.

correctness: True

Proof. **Step 1: Transform BSM PDE to Heat Equation** Applying the transformations in Lemma-0, the BSM PDE is converted to the standard heat equation $\frac{\partial U}{\partial\tau} = \frac{\partial^2 U}{\partial x^2}$, with solution $U(\tau) = e^{-\tau B} U(0)$, where $B = -\partial_x^2$ is the positive semi-definite discretized Laplacian operator. The matrix equation derived from spatial discretization is $\partial_\tau|U\rangle = -B|U\rangle$.
**Step 2: Apply LCHS Lemma** By the LCHS lemma, the solution operator $e^{-\tau B}$ can be represented as:
\[ e^{-\tau B} = \int_{\mathbb{R}} \eta(k) e^{-ikB\tau}\,dk, \qquad \eta(k) = \frac{1}{\pi(1+k^2)}. \]
This integral is exact by the LCHS lemma, as verified via Fourier analysis.
**Step 3: Discretize the Integral** To approximate the integral, truncate $k$ to $[-K,K]$ with $K = O(1/\epsilon)$ and use a quadrature rule (e.g., trapezoidal) with $M = O(1/\epsilon^2)$ points $k_j$ and weights $w_j = \eta(k_j)\Delta k_j$. The truncation and quadrature errors contribute $O(\epsilon)$ due to the rapid decay of $\eta(k)$.
**Step 4: Implement Linear Combination of Unitaries (LCU)** The discretized operator becomes:
\[ e^{-\tau B} \approx \sum_{j=1}^{M} w_j e^{-ik_j B\tau}. \]
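The LCU bookkeeping can be made concrete with a toy resource count (our own illustration; the constants in $K$ and $M$ are arbitrary choices consistent with $K = O(1/\epsilon)$ and $M = O(1/\epsilon^2)$, not values from the paper): we form the weights $w_j = \eta(k_j)\Delta k$, report their sum $\beta$ (the LCU normalization), and count the $\lceil\log_2 M\rceil$ index ancillas.

```python
import math

def lcu_resources(eps):
    K = 2.0 / (math.pi * eps)            # truncation K = O(1/eps)
    M = int(math.ceil(1.0 / eps**2))     # M = O(1/eps^2) quadrature points
    dk = 2.0 * K / (M - 1)
    # w_j = eta(k_j) * dk on a uniform grid over [-K, K].
    beta = sum(dk / (math.pi * (1.0 + (-K + j * dk) ** 2)) for j in range(M))
    ancillas = math.ceil(math.log2(M))   # index register for the M terms
    return M, beta, ancillas

for eps in (1e-1, 1e-2):
    M, beta, anc = lcu_resources(eps)
    print(eps, M, round(beta, 4), anc)
# beta stays O(1) (near 1, since eta integrates to 1), so the amplitude-
# amplification overhead is O(1); ancillas grow only like log(1/eps).
```

This is classical bookkeeping only; it illustrates why $\beta = O(1)$ and the $O(\log(1/\epsilon))$ ancilla count are consistent with the quadrature weights above.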
The LCU method prepares this linear combination using a control register of $\log M$ qubits (ancillas) and applies each term $e^{-ik_j B\tau}$ conditioned on the register.
**Step 5: Hamiltonian Simulation Cost** Each term $e^{-ik_j B\tau}$ requires simulating the Hamiltonian $k_j B$ for time $\tau$. The maximum $|k_j|$ is $O(1/\epsilon)$, leading to simulation time $t_j = |k_j|\tau = O(\tau/\epsilon)$. Using quantum signal processing (QSP), the query complexity per simulation is:
\[ O\big( \|k_j B\|\tau\,\log(1/\epsilon) \big) = O\big( \|B\|\tau/\epsilon\cdot\log(1/\epsilon) \big). \]
Total queries over $M = O(1/\epsilon^2)$ terms yield:
\[ \widetilde{O}\left( \frac{\|B\|\tau}{\epsilon^3} \right) = \widetilde{O}\left( \frac{\|B\| T}{\epsilon^3} \right), \]
since $\tau = \sigma^2 T/2 = O(T)$.
**Step 6: Ancilla Qubits and Gate Complexity** The LCU requires $O(\log M) = O(\log(1/\epsilon))$ ancilla qubits. Each Hamiltonian simulation uses an additional $O(1)$ ancillas for QSP. The gate complexity per simulation is polynomial in $\log(1/\epsilon)$, and with $M$ terms, the total gate complexity is $\mathrm{poly}(\|B\|, T, \log(1/\epsilon), 1/\epsilon)$.
**Conclusion:** The corrected derivation aligns the initial solution operator with the LCHS lemma application, ensuring consistency. The algorithm's query complexity, ancilla requirements, and gate complexity match the conjecture's claims.

comment: None

type: lemma

Lemma 7. The quantum algorithm for solving the BSM PDE using the LCHS lemma requires simulating the discretized Laplacian operator $B = -\partial_x^2$ for a total time $\tau = \frac{\sigma^2 T}{2}$. The integral $\int_{\mathbb{R}} \eta(k) e^{-ikB\tau}\,dk$ is approximated by $M = O\left( \frac{1}{\epsilon^2} \right)$ quadrature terms. Each term involves Hamiltonian simulation of $k_j B$, where $|k_j| \le O\left( \frac{1}{\epsilon} \right)$. The total query complexity to the $B$-oracle is $\widetilde{O}\left( \frac{\|B\| T}{\epsilon^3} \right)$, and the gate
complexity scales as $\mathrm{poly}\left( \|B\|, T, \log\frac{1}{\epsilon}, \frac{1}{\epsilon} \right)$, with ancilla qubit count $O\left( \log\frac{1}{\epsilon} \right)$.

correctness: True

Proof. **Step 1: Heat Equation Reduction** From Lemma-0, the BSM PDE transforms into the heat equation $\partial_\tau U = BU$, where $B = -\partial_x^2$. The solution is $U(\tau) = e^{-B\tau} U(0)$. The LCHS lemma provides the integral representation $e^{-B\tau} = \int_{\mathbb{R}} \eta(k) e^{-ikB\tau}\,dk$, with $\eta(k) = \frac{1}{\pi(1+k^2)}$.
**Step 2: Quadrature Approximation** Using Lemma-1, we approximate the integral with $M = O\left( \frac{1}{\epsilon^2} \right)$ terms such that
\[ \left\| e^{-B\tau} - \sum_{j=1}^{M} w_j e^{-ik_j B\tau} \right\| \le \frac{\epsilon}{2}. \]
The quadrature nodes satisfy $|k_j| \le O\left( \frac{1}{\epsilon} \right)$, and the weights $w_j$ preserve $\sum_j |w_j| \le 1$.
**Step 3: Hamiltonian Simulation Error Allocation** Each term $e^{-ik_j B\tau}$ must be simulated with error $\le \frac{\epsilon}{2}$ in spectral norm. Since $\|k_j B\|\tau \le O\left( \frac{\|B\|\tau}{\epsilon} \right)$, optimal Hamiltonian simulation algorithms (qubitization or QSP) require $\widetilde{O}\left( \frac{\|B\|\tau}{\epsilon} \right)$ queries to the $B$-oracle per term. The logarithmic factors in simulation time and error are absorbed into the $\widetilde{O}$-notation.
**Step 4: Total Query Complexity** With $M = O\left( \frac{1}{\epsilon^2} \right)$ terms, the total query complexity is
\[ \widetilde{O}\left( \frac{\|B\|\tau}{\epsilon}\cdot\frac{1}{\epsilon^2} \right) = \widetilde{O}\left( \frac{\|B\| T}{\epsilon^3} \right), \qquad \text{where } \tau = \frac{\sigma^2 T}{2}. \]
This matches the conjectured scaling, as overlapping logarithmic factors from quadrature and simulation are negligible.
**Step 5: Ancilla and Gate Complexity** State preparation for the $M$-term superposition requires $O(\log M) = O\left( \log\frac{1}{\epsilon} \right)$ ancilla qubits. Gate complexity for arithmetic operations (weighted sums, Hamiltonian simulation) scales polynomially in $\|B\|$, $T$, $\log\frac{1}{\epsilon}$, and $\frac{1}{\epsilon}$, consistent with standard quantum subroutines.

comment: None

type: theorem

[Vague] Lemmas 4-7 merely reiterate the results derived from Lemmas 1-3.

Theorem 1. Lemma (Linear combination of Hamiltonian simulation, LCHS): For $t \in [0,T]$, let $A(t) \in \mathbb{C}^{N\times N}$ be decomposed into Hermitian and anti-Hermitian parts such that $A(t) = L(t) + iH(t)$, where $L(t) = \frac{1}{2}[A(t) + A^\dagger(t)]$ and $H(t) = \frac{1}{2i}[A(t) - A^\dagger(t)]$. Assume that $L(t)$ is positive semi-definite for all $t \in [0,T]$.
Denoting the time-ordering operator by $\mathcal{T}$, we have
\[ \mathcal{T}\exp\left\{ -\int_0^t A(s)\,ds \right\} = \int_{\mathbb{R}} \eta(k)\,u(t,k)\,dk, \]
where $u(t,k)$ is the propagator for a time-dependent Hamiltonian simulation problem such that $u(t,k) = \mathcal{T}\exp\left\{ -i\int_0^t [H(s) + kL(s)]\,ds \right\}$, and $\eta(k) = \frac{1}{\pi(1+k^2)}$ is the kernel function with respect to $k$.
BSM model: The PDE in the BSM model is given by
\[ \frac{\partial V(S,t)}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0. \]
Boundary conditions are characterized by: $V(0,t) = 0$ for all $0 < t \le T$; $V(S,t) \to S - Ke^{-r(T-t)}$ for $S \to \infty$; $V(S,T) = \max\{S - K, 0\}$.
Your tasks are to: use the idea of LCHS to deal with the BSM model; design a quantum algorithm to solve the equation; and compute the complexity (query complexity to the state preparation oracle and operator oracles, gate complexity, ancilla qubits; parameters: norm of boundary and initial data, $T$, approximation error $\epsilon$, etc.).

correctness: True

Proof. **Step 1: Transform BSM to Heat Equation** As per Lemma-0, apply the substitution $x = \ln(S/K) + (r - \sigma^2/2)(T-t)$, $\tau = \frac{\sigma^2}{2}(T-t)$, and $V(S,t) = e^{-r(T-t)} U(x,\tau)$. The PDE reduces to $\partial_\tau U = \partial_x^2 U$. Spatial discretization yields $\partial_\tau|U\rangle = -B|U\rangle$, where $B = -\partial_x^2$ is positive semi-definite.
**Step 2: LCHS Integral Representation** From the LCHS lemma, $e^{-B\tau} = \int_{\mathbb{R}} \eta(k) e^{-ikB\tau}\,dk$, where $\eta(k) = \frac{1}{\pi(1+k^2)}$. By Lemma-1, approximate the integral as a quadrature sum with $M = O(1/\epsilon^2)$ terms:
\[ e^{-B\tau} \approx \sum_{j=1}^{M} w_j e^{-ik_j B\tau}, \]
where $w_j = \eta(k_j)\Delta k_j$ and $|k_j| \le O(1/\epsilon)$. The quadrature error is $\le \epsilon/2$.
**Step 3: Quantum Algorithm via LCU** Implement the linear combination $\sum_{j=1}^{M} w_j e^{-ik_j B\tau}$ using the Linear Combination of
Unitaries (LCU) method:
1. **Ancilla Preparation**: Prepare an $m$-qubit ancilla register in the state $\frac{1}{\sqrt{\beta}}\sum_{j=1}^{M}\sqrt{w_j}\,|j\rangle$, where $\beta = \sum_j w_j = O(1)$. This requires $O(\log M) = O(\log\frac{1}{\epsilon})$ ancillas and $\mathrm{poly}(M)$ gates.
2. **Controlled Hamiltonian Simulation**: For each ancilla state $|j\rangle$, apply the controlled unitary $|j\rangle\langle j| \otimes e^{-ik_j B\tau}$. Each $e^{-ik_j B\tau}$ is simulated using a QSVT-based method with query complexity $\widetilde{O}(|k_j|\tau\|B\|)$ to the $B$-oracle.
3. **Amplitude Amplification**: Due to the LCU normalization factor $\beta$, repeat amplitude amplification $O(\sqrt{\beta}) = O(1)$ times to project onto the success subspace.
**Step 4: Error and Complexity Analysis**
- **Quadrature Error**: $\le \epsilon/2$ by Lemma-1.
- **Simulation Error**: Each $e^{-ik_j B\tau}$ is approximated with error $\le \epsilon/(2M)$, contributing total error $\le \epsilon/2$.
- **Total Query Complexity**: Each $|k_j| \le O(1/\epsilon)$, and $\tau = O(T)$. QSVT requires $\widetilde{O}(|k_j|\tau\|B\|)$ queries per term. Summing over $M = O(1/\epsilon^2)$ terms gives total queries $\widetilde{O}\left( \frac{\|B\| T}{\epsilon^3} \right)$.
- **Ancilla Qubits**: $O(\log\frac{1}{\epsilon})$ for quadrature indexing and $O(1)$ for QSVT internal ancillas.
- **Gate Complexity**: Dominated by QSVT, scaling as $\mathrm{poly}\left( \|B\|, T, \log\frac{1}{\epsilon}, \frac{1}{\epsilon} \right)$.
**Step 5: State Preparation** The initial state $|U(0)\rangle$, encoding $\max\{e^x - 1, 0\}$, is prepared using quantum arithmetic circuits with $\mathrm{poly}(n)$ gates for an $n$-qubit discretization, independent of $\epsilon$.
**Conclusion** The quantum algorithm solves the BSM model with total error $\le \epsilon$, adhering to the stated complexity bounds.

B Proof of Absorbing Boundary Condition Generated by AIM

type: conjecture

Lemma 8. **A priori energy estimate for the coupled system** Let $u$ and $\varphi_k$ ($k = 1, \ldots, m$) satisfy the system (2.10) with $u_0 \in L^2(\Omega)$ and $\varphi_k(0) = 0$. Then there exists a constant $C > 0$ depending on $T, \beta, \alpha_k, d_k$, but independent of $u_0$, such that:
\[ \sup_{t\in[0,T]}\|u(t)\|_{L^2(\Omega)}^2 + \int_0^T \|\nabla u(t)\|_{L^2(\Omega)}^2\,dt + \int_0^T \|u(t)\|_{L^2(S)}^2\,dt + \sum_{k=1}^m\left( \sup_{t\in[0,T]}\|\varphi_k(t)\|_{L^2(S)}^2 + \int_0^T \|\nabla_S\varphi_k(t)\|_{L^2(S)}^2\,dt \right) \le C\|u_0\|_{L^2(\Omega)}^2. \]

correctness: True

Proof.
**Revised Proof of the a priori estimate:** Define the total energy functional:
\[ E(t) = \frac{1}{2}\|u(t)\|_{L^2(\Omega)}^2 + \sum_{k=1}^m \frac{\alpha_k^2 d_k}{2\beta'}\|\varphi_k(t)\|_{L^2(S)}^2, \qquad \beta' = \beta + \sum_{k=1}^m \alpha_k. \]
[Correct] The agent considers the energy method and identifies the correct form of the energy functional.
**Energy estimate for $u$:** Multiply the $u$-equation by $u$ and integrate over $\Omega$:
\[ \frac{1}{2}\frac{d}{dt}\|u\|_{L^2(\Omega)}^2 + \|\nabla u\|_{L^2(\Omega)}^2 + \beta'\|u\|_{L^2(S)}^2 = \sum_{k=1}^m \alpha_k d_k \int_S u\varphi_k\,dS. \]
Apply Young's inequality to the right-hand side:
\[ \sum_{k=1}^m \alpha_k d_k \int_S u\varphi_k\,dS \le \frac{\beta'}{2}\|u\|_{L^2(S)}^2 + \sum_{k=1}^m \frac{\alpha_k^2 d_k^2}{2\beta'}\|\varphi_k\|_{L^2(S)}^2. \]
Subtract Young's bound and rearrange:
\[ \frac{1}{2}\frac{d}{dt}\|u\|_{L^2(\Omega)}^2 + \|\nabla u\|_{L^2(\Omega)}^2 + \frac{\beta'}{2}\|u\|_{L^2(S)}^2 \le \sum_{k=1}^m \frac{\alpha_k^2 d_k^2}{2\beta'}\|\varphi_k\|_{L^2(S)}^2. \]
[Correct] Here the agent obtains the accurate result via the energy method and the application of inequality techniques.
**Energy estimate for $\varphi_k$:** Multiply the $\varphi_k$-equation by $\frac{\alpha_k^2 d_k}{\beta'}\varphi_k$ and integrate:
\[ \frac{\alpha_k^2 d_k}{2\beta'}\frac{d}{dt}\|\varphi_k\|_{L^2(S)}^2 + \frac{\alpha_k^2 d_k}{\beta'}\|\nabla_S\varphi_k\|_{L^2(S)}^2 + \frac{\alpha_k^2 d_k^2}{2\beta'}\|\varphi_k\|_{L^2(S)}^2 \le \frac{\alpha_k^2}{2\beta'}\|u\|_{L^2(S)}^2. \]
Summing over $k$, the right-hand side becomes:
\[ \sum_{k=1}^m \frac{\alpha_k^2}{2\beta'}\|u\|_{L^2(S)}^2 \le \sum_{k=1}^m \frac{\alpha_k}{2}\|u\|_{L^2(S)}^2 \le \frac{\beta'-\beta}{2}\|u\|_{L^2(S)}^2. \]
[Correct] Similarly, the same approach is applied to the auxiliary functions.
**Combine energy estimates:** Add the $u$-estimate and the modified $\varphi_k$-estimates:
\[ \frac{dE}{dt} + \|\nabla u\|_{L^2(\Omega)}^2 + \frac{\beta}{2}\|u\|_{L^2(S)}^2 + \sum_{k=1}^m \frac{\alpha_k^2 d_k}{\beta'}\|\nabla_S\varphi_k\|_{L^2(S)}^2 \le 0. \]
Integrate from $0$ to $T$:
\[ E(T) + \int_0^T \left( \|\nabla u\|_{L^2(\Omega)}^2 + \frac{\beta}{2}\|u\|_{L^2(S)}^2 + \sum_{k=1}^m \frac{\alpha_k^2 d_k}{\beta'}\|\nabla_S\varphi_k\|_{L^2(S)}^2 \right) dt \le E(0). \]
Since $E(0) = \frac{1}{2}\|u_0\|_{L^2(\Omega)}^2$ and $\frac{\alpha_k^2 d_k}{\beta'} \ge \frac{\alpha_{\min}^2 d_{\min}}{\beta + \sum\alpha_k}$, the coefficients are absorbed into $C$, yielding:
\[ \sup_{t\in[0,T]}\|u(t)\|_{L^2(\Omega)}^2 + \int_0^T \|\nabla u(t)\|_{L^2(\Omega)}^2\,dt + \int_0^T \|u(t)\|_{L^2(S)}^2\,dt + \sum_{k=1}^m\left( \sup_{t\in[0,T]}\|\varphi_k(t)\|_{L^2(S)}^2 + \int_0^T \|\nabla_S\varphi_k(t)\|_{L^2(S)}^2\,dt \right) \le C\|u_0\|_{L^2(\Omega)}^2, \]
where $C$ depends on $T, \beta, \alpha_k, d_k$, but not on $u_0$.
[Correct]
Finally, by integrating over time, the agent proves the energy estimate conclusion.

comment: None

type: conjecture

Lemma 9. **Uniqueness of solutions** The system (2.10) admits at most one solution $(u, \{\varphi_k\})$ in the spaces specified.

correctness: True

Proof. **Proof of uniqueness:** Suppose two solutions $(u_1, \{\varphi_{k1}\})$ and $(u_2, \{\varphi_{k2}\})$ exist with the same initial data $u_0$. Define $w = u_1 - u_2$ and $\psi_k = \varphi_{k1} - \varphi_{k2}$. Then $w$ and $\psi_k$ satisfy:
\[ \begin{cases} Lw = 0 & \text{on } [0,T]\times\Omega, \\ \frac{\partial w}{\partial\nu} = -\beta w - \sum_{k=1}^m \alpha_k(\partial_t - \Delta_S)\psi_k & \text{on } [0,T]\times S, \\ w(0) = 0, \\ (\partial_t - \Delta_S + d_k)\psi_k = w, \quad \psi_k(0) = 0. \end{cases} \]
[Correct] The agent correctly writes down the equations that the difference functions satisfy.
**Energy estimate for $w$:** Follow the same steps as in the a priori estimate, but with zero initial data. The energy equation becomes:
\[ \frac{1}{2}\frac{d}{dt}\|w\|_{L^2(\Omega)}^2 + \|\nabla w\|_{L^2(\Omega)}^2 + \beta\|w\|_{L^2(S)}^2 + \sum_{k=1}^m \alpha_k\|w\|_{L^2(S)}^2 = \sum_{k=1}^m \alpha_k d_k \int_S w\psi_k\,dS. \]
Similarly, the equation for $\psi_k$:
\[ \frac{1}{2}\frac{d}{dt}\|\psi_k\|_{L^2(S)}^2 + \|\nabla_S\psi_k\|_{L^2(S)}^2 + d_k\|\psi_k\|_{L^2(S)}^2 = \int_S w\psi_k\,dS. \]
[Correct] These are the energy equations satisfied by the difference functions.
**Combine and apply Grönwall:** Substitute bounds for $\int w\psi_k\,dS$ using Young's inequality. Due to the zero initial data, integrating over $t$ and applying Grönwall's inequality implies $\|w(t)\| = 0$ and $\|\psi_k(t)\| = 0$ for all $t \in [0,T]$. Thus $u_1 = u_2$ and $\varphi_{k1} = \varphi_{k2}$, proving uniqueness.
[Vague] The agent obtains the conclusion by Grönwall's inequality, but the process is not detailed.

comment: None

type: conjecture

Lemma 10. **Existence of Approximate Solutions via Galerkin Method** For each integer $N \ge 1$, there exists a Galerkin approximation $(u^N, \{\varphi_k^N\}_{k=1}^m)$ in finite-dimensional subspaces $V_N \subset H^1(\Omega)$ and $W_{k,N} \subset H^1(S)$ such that:
1. $u^N(t,x) = \sum_{j=1}^N a_j(t)w_j(x)$, where $\{w_j\}$ is a basis of $H^1(\Omega)$.
2. $\varphi_k^N(t,x) = \sum_{l=1}^N b_{k,l}(t)\theta_{k,l}(x)$, where $\{\theta_{k,l}\}$ is a basis of $H^1(S)$.
3. The approximations satisfy the projected weak form of (2.10) and the uniform energy estimate from the lemma.
4. Subsequences of $\{u^N\}$ and $\{\varphi_k^N\}$ converge weakly to a solution $(u, \{\varphi_k\})$ as $N \to \infty$.

correctness: True

Proof.
**Step 1: Construct $H^1$-Conforming Subspaces** Let $\{w_j\} \subset H^1(\Omega)$ and $\{\theta_{k,l}\} \subset H^1(S)$ be eigenfunctions of the Laplace-Beltrami operators $-\Delta_\Omega$ and $-\Delta_S$ with Neumann/Dirichlet boundary conditions (as appropriate). These form orthonormal bases in $L^2(\Omega)$ and $L^2(S)$, respectively, and are dense in $H^1(\Omega)$ and $H^1(S)$. Define $V_N = \mathrm{span}\{w_1, \ldots, w_N\}$ and $W_{k,N} = \mathrm{span}\{\theta_{k,1}, \ldots, \theta_{k,N}\}$.
[Correct] The agent applies the Galerkin method and constructs the function spaces.
**Step 2: Formulate the Projected Equations** For test functions $v \in V_N$ and $\psi \in W_{k,N}$, the Galerkin approximations satisfy the projected weak form:
\[ \int_\Omega \partial_t u^N v\,dx + \int_\Omega \nabla u^N\cdot\nabla v\,dx + \int_\Omega \rho u^N v\,dx + \beta\int_S u^N v\,ds + \sum_{k=1}^m \alpha_k\int_S (\partial_t\varphi_k^N - \Delta_S\varphi_k^N)v\,ds = 0, \]
\[ \int_S \partial_t\varphi_k^N\psi\,ds + \int_S \nabla_S\varphi_k^N\cdot\nabla_S\psi\,ds + d_k\int_S \varphi_k^N\psi\,ds = \int_S u^N\psi\,ds. \]
**Step 3: Solve the ODE System** Substituting $u^N = \sum_{j=1}^N a_j(t)w_j$ and $\varphi_k^N = \sum_{l=1}^N b_{k,l}(t)\theta_{k,l}$ into the projected equations yields a linear ODE system for $a_j(t)$ and $b_{k,l}(t)$. The coefficients are smooth (as the eigenfunctions are $C^\infty$), ensuring continuity. By the Cauchy-Lipschitz theorem, unique solutions exist on $[0,T]$.
[Correct] The agent obtains the ODE system satisfied by the approximate solution. [Vague] A more detailed derivation would be better.
**Step 4: Uniform Energy Estimates** Testing with $v = u^N$ and $\psi = \varphi_k^N$, and using Grönwall's inequality, we derive:
\[ \sup_{t\in[0,T]}\|u^N(t)\|_{L^2(\Omega)}^2 + \int_0^T \|\nabla u^N\|_{L^2(\Omega)}^2\,dt + \int_0^T \|u^N\|_{L^2(S)}^2\,dt + \sum_{k=1}^m\left( \sup_{t\in[0,T]}\|\varphi_k^N(t)\|_{L^2(S)}^2 + \int_0^T \|\nabla_S\varphi_k^N\|_{L^2(S)}^2\,dt \right) \le C\|u_0\|_{L^2(\Omega)}^2, \]
uniformly in $N$.
**Step 5: Weak and Strong Compactness** From the energy estimates, extract subsequences (still denoted $u^N$, $\varphi_k^N$) such that:
- $u^N \rightharpoonup u$ weakly-* in
$L^\infty(0,T;L^2(\Omega))$ and weakly in $L^2(0,T;H^1(\Omega))$,
- $\partial_t u^N \rightharpoonup \partial_t u$ weakly in $L^2(0,T;H^{-1}(\Omega))$,
- $\varphi_k^N \rightharpoonup \varphi_k$ analogously in $L^\infty(0,T;L^2(S)) \cap L^2(0,T;H^1(S))$,
- by the compact trace embedding $H^1(\Omega) \hookrightarrow L^2(S)$, $u^N|_S \to u|_S$ strongly in $L^2(0,T;L^2(S))$.
[Correct] Here the agent uses weak convergence to obtain the limit functions.
**Step 6: Passage to the Limit** For fixed $v \in V_{N_0}$ and $\psi \in W_{k,N_0}$, the projected equations hold for all $N \ge N_0$. As $N \to \infty$:
- Weak convergence handles the volume integrals (e.g., $\int \nabla u^N\cdot\nabla v\,dx \to \int \nabla u\cdot\nabla v\,dx$).
- Strong trace convergence ensures $\int_S u^N v\,ds \to \int_S uv\,ds$ and similar boundary terms.
- The density of $\bigcup_N V_N$ in $H^1(\Omega)$ and of $\bigcup_N W_{k,N}$ in $H^1(S)$ extends the equations to all $v \in H^1(\Omega)$, $\psi \in H^1(S)$.
Thus, $(u, \{\varphi_k\})$ satisfies (2.10) in the weak sense. Uniqueness (by the lemma) confirms that the full sequence converges.
[Vague] The regularity here needs a more detailed derivation to make the proof sufficiently accurate. [Error] The uniqueness should be proved via the difference functions rather than the sequence convergence.

comment: None

type: conjecture

Lemma 11. **Regularity of the Weak Solution** The solution $u$ belongs to $C([0,T];L^2(\Omega)) \cap H^1([0,T];H^{-1}(\Omega)) \cap L^2([0,T];H^1(\Omega))$, and each $\varphi_k$ belongs to $C([0,T];L^2(S)) \cap H^1([0,T];H^{-1}(S)) \cap L^2([0,T];H^1(S))$.

correctness: True

Proof. **Step 1: Continuity in Time** From the energy estimate, $u \in L^\infty(0,T;L^2(\Omega))$ and $\partial_t u \in L^2(0,T;H^{-1}(\Omega))$. By the Lions-Magenes lemma, this implies $u \in C([0,T];L^2(\Omega))$. Similarly, $\varphi_k \in C([0,T];L^2(S))$.
**Step 2: Sobolev Regularity** The energy estimate directly gives $u \in L^2(0,T;H^1(\Omega))$. To verify $u \in H^1([0,T];H^{-1}(\Omega))$, observe that the PDE implies $\partial_t u = \Delta u - \rho u - \text{boundary terms}$. Since $\Delta u \in L^2(0,T;H^{-1}(\Omega))$, $\rho u \in L^2(0,T;L^2(\Omega)) \hookrightarrow L^2(0,T;H^{-1}(\Omega))$, and the boundary terms (involving $\varphi_k$) are in $L^2(0,T;H^{-1/2}(S)) \hookrightarrow L^2(0,T;H^{-1}(\Omega))$, it follows that $\partial_t u \in L^2(0,T;H^{-1}(\Omega))$. Hence, $u \in H^1([0,T];H^{-1}(\Omega))$.
**Step 3: Regularity of $\varphi_k$** Analogously, $\varphi_k \in L^2(0,T;H^1(S)) \cap H^1([0,T];H^{-1}(S))$ follows from their PDE and the energy estimate. Continuity in time is similarly established.
[Vague] The proof details given by the agent here are incomplete.

comment: None

type: conjecture

Lemma 12. Please solve the following mathematical problem. You can use the Galerkin method to get the approximate function series, and try to prove the convergence and regularity by the energy method, thereby finishing the theorem proving. Let $u$ be a function defined on $(t,x) \in [0,\infty)\times\mathbb{R}^n$ satisfying the following equations:
\[ Lu = \partial_t u - \Delta u + \rho(x)u = 0, \quad t > 0,\ x\in\mathbb{R}^n; \qquad u = u_0, \quad t = 0, \]
where the operator $L = \partial_t - \Delta + \rho(x)$, $\Delta$ is the Laplace operator, and $\rho(x)$ is a non-negative function of $x$. We focus on a bounded open set $\Omega$ in $\mathbb{R}^n$, and denote $\partial\Omega = S$. By restricting the function $u$ to the bounded open set $\Omega$, we investigate the behavior of the solution within this bounded open set. The absorbing boundary conditions are as follows:
\[ \begin{cases} Lu = 0, & [0,\infty)\times\Omega, \\ \frac{\partial u}{\partial\nu} = -\beta u - \sum_{k=1}^m \alpha_k(\partial_t - \Delta_S)\varphi_k = F_1 u, & [0,\infty)\times S, \\ u = u_0, & t = 0, \end{cases} \quad (2.10) \quad \begin{cases} (\partial_t - \Delta_S + d_k)\varphi_k = u, & t \ge 0, \\ \varphi_k = 0, & t = 0. \end{cases} \]
Can you prove that for any $u_0 \in L^2(\Omega)$, there exists a unique $u \in C([0,T];L^2(\Omega)) \cap H^1([0,T];H^{-1}(\Omega)) \cap L^2([0,T];H^1(\Omega))$, $\varphi_k \in C([0,T];L^2(S)) \cap H^1([0,T];H^{-1}(S)) \cap L^2([0,T];H^1(S))$ that satisfies the equations?

correctness: True

Proof. **Existence and Uniqueness Theorem** By the previous conjectures and the uniqueness lemma:
1. **Existence:** The Galerkin method constructs approximations $(u^N, \{\varphi_k^N\})$ with uniform energy bounds. Weak limits $(u, \{\varphi_k\})$ satisfy (2.10), and the regularity follows from Conjecture 2.
2. **Uniqueness:** The lemma ensures at most one solution.
Thus, there exists a unique solution
$(u, \{\varphi_k\})$ in the specified spaces.

comment: None

C Proof of High Contrast Limit Generated by AIM Based on DeepSeek-R1

type: lemma

Lemma 13. The pressure field $p_{\lim}$ in the Stokes inclusion satisfies the uniform bound:
\[ \|p_{\lim}\|_{L^2(D_\epsilon)} \le C\|g\|_{H^{-\frac12}(\partial\Omega)}, \]
where $C$ depends only on $\lambda_0$, the domain $\Omega$, and the reference cell $Y$, but is independent of $\tilde\lambda \ge \lambda_0 > 0$ and $\epsilon$.

correctness: True

Proof. **Step 1: Two-Scale Formulation and Cell Problem Analysis** Consider the Stokes system in $\Omega_\epsilon \cup D_\epsilon$ with viscosity ratio $\tilde\lambda \ge \lambda_0$. Let $u_\epsilon$ and $p_\epsilon$ satisfy:
\[ \begin{cases} -2\mu\nabla\cdot(\nabla^s u_\epsilon) + \nabla p_\epsilon = 0 & \text{in } \Omega_\epsilon\cup D_\epsilon, \\ \operatorname{div} u_\epsilon = 0 & \text{in } \Omega, \\ u_\epsilon = 0 & \text{on } \partial D_\epsilon, \\ \sigma(u_\epsilon, p_\epsilon)n = g & \text{on } \partial\Omega, \end{cases} \]
where $\mu = 1$ in $\Omega_\epsilon$ and $\mu = \tilde\lambda$ in $D_\epsilon$. Using two-scale convergence, assume $u_\epsilon \twoheadrightarrow u_0(x)$, $p_\epsilon \twoheadrightarrow p_0(x)$, with correctors $u_1(x,y)$, $p_1(x,y)$.
[Vague] The proof lacks the details of the two-scale convergence.
**Step 2: Cell Problem and Effective Viscosity** The cell problem for $w^j(y)$ and $\pi^j(y)$ in $Y$ is:
\[ \begin{cases} -2\tilde\lambda\Delta_y w^j + \nabla_y\pi^j = e_j & \text{in } \omega, \\ -2\Delta_y w^j + \nabla_y\pi^j = e_j & \text{in } Y_f, \\ \operatorname{div}_y w^j = 0,\ w^j = 0 \text{ on } \partial\omega, & \text{periodic in } Y. \end{cases} \]
Testing with $w^j$, we derive the energy identity:
\[ 2\tilde\lambda\|\nabla_y w^j\|_{L^2(\omega)}^2 + 2\|\nabla_y w^j\|_{L^2(Y_f)}^2 = \int_Y w^j\cdot e_j\,dy. \]
Using the Poincaré inequality in $\omega$ and $Y_f$, we bound $\|w^j\|_{H^1(Y)} \le C\max(1, \tilde\lambda^{-1})$. For $\tilde\lambda \ge \lambda_0$, this becomes $\|\nabla w^j\|_{L^2(Y)} \le C(\lambda_0)$, uniformly in $\tilde\lambda$. The effective viscosity $\mu_{\mathrm{eff}} = \langle \nabla_y w^j + (\nabla_y w^j)^T \rangle_Y$ satisfies $\mu_{\mathrm{eff}} \ge C\lambda_0$ due to the coercivity of the cell problem.
[Correct] The proof process here is correct. The agent employs energy identities and some inequality techniques.
**Step 3: Uniform Stability Estimates** The inf-sup condition for the homogenized system follows from the geometric regularity and the uniform bounds on $w^j$. For all $\epsilon > 0$:
\[ \|p_\epsilon\|_{L^2(\Omega)} \le C\sup_{v\in H_0^1(\Omega)^d} \frac{\int_\Omega p_\epsilon\operatorname{div} v\,dx}{\|v\|_{H^1}}, \]
with $C$ independent of $\epsilon$ and $\tilde\lambda \ge \lambda_0$. This is ensured by the uniform ellipticity of the homogenized operator and the $\tilde\lambda$-independent bounds on the correctors.
**Step 4: Two-Scale Pressure Decomposition** The pressure splits as $p_\epsilon(x) = p_0(x) + \epsilon p_1(x, x/\epsilon) + O(\epsilon^2)$, where $p_0$ solves the homogenized Stokes system:
\[ -\operatorname{div}(\mu_{\mathrm{eff}}\nabla u_0) + \nabla p_0 = 0, \qquad \operatorname{div} u_0 = 0. \]
Standard Stokes estimates give $\|p_0\|_{L^2(\Omega)} \le C\|g\|_{H^{-1/2}}$. The corrector $p_1$ satisfies $\|p_1\|_{L^2(Y)} \le C(\lambda_0)\|\nabla_y w^j\|_{L^2(Y)} \le C(\lambda_0)$.
[Vague] It is necessary to write out the equations satisfied by the expanded functions here and combine them with the properties of the specific boundary.
**Step 5: Oscillatory Pressure Contribution** In $D_\epsilon = \bigcup \omega_\epsilon^n$, the oscillatory term $\epsilon p_1(x, x/\epsilon)$ averages over the inclusions:
\[ \|\epsilon p_1\|_{L^2(D_\epsilon)} \le \epsilon\|p_1\|_{L^2(D_\epsilon)} \le C\epsilon\|p_1\|_{L^\infty(Y)}|D_\epsilon|^{1/2} \le C\epsilon\cdot\epsilon^{-d/2}\cdot\epsilon^{d/2} = C\epsilon \to 0. \]
Thus, $\|p_{\lim}\|_{L^2(D_\epsilon)}$ is dominated by $\|p_0\|_{L^2(\Omega)}\cdot\sqrt{|\omega|/|Y|} + o(1)$, yielding the uniform bound as $\epsilon \to 0$.
[Correct] The derivation by the agent here is correct, and it obtains the correct conclusion.
**Conclusion**: The uniform bounds on the cell-problem correctors, the homogenized pressure estimates, and the vanishing oscillations ensure $\|p_{\lim}\|_{L^2(D_\epsilon)} \le C(\lambda_0, \Omega, Y)\|g\|_{H^{-1/2}}$, independent of $\epsilon$ and $\tilde\lambda \ge \lambda_0$.

comment: None

type: lemma

Lemma 14. For the solutions $u_\epsilon$ and $u_{\lim}$ of problems (9) and (10), respectively, the divergence of $u_\epsilon$ inside the inclusion $D_\epsilon$ satisfies the estimate:
\[ \|\operatorname{div} u_\epsilon\|_{L^2(D_\epsilon)} \le \frac{C}{\sqrt{\tilde\lambda}}\|g\|_{H^{-\frac12}(\partial\Omega)}, \]
where $C$ is a constant independent of $\tilde\lambda$.

correctness: True

Proof. Consider the variational formulation of problem (9). For any test function $v \in H^1(\Omega)$:
\[ \int_\Omega \left[ \lambda(x)\operatorname{div} u_\epsilon\operatorname{div} v + 2\mu(x)D(u_\epsilon):D(v) \right] dx = \langle g, v\rangle_{\partial\Omega}. \]
Testing with $v = u_\epsilon$ yields:
\[ \int_\Omega \lambda(x)|\operatorname{div} u_\epsilon|^2\,dx + 2\int_\Omega \mu(x)|D(u_\epsilon)|^2\,dx = \langle g, u_\epsilon\rangle_{\partial\Omega}. \]
Dropping the non-negative $\mu$-term and splitting $\lambda(x)$:
\[ \tilde\lambda\int_{D_\epsilon}|\operatorname{div} u_\epsilon|^2\,dx \le \langle g, u_\epsilon\rangle \le \|g\|_{H^{-1/2}}\|u_\epsilon\|_{H^{1/2}}. \]
By the trace theorem, $\|u_\epsilon\|_{H^{1/2}} \le C_T\|u_\epsilon\|_{H^1}$. To bound $\|u_\epsilon\|_{H^1}$, use the coercivity from the $\mu$-term:
\[ 2\mu_0\|D(u_\epsilon)\|_{L^2}^2 \le \langle g, u_\epsilon\rangle \le C_T\|g\|_{H^{-1/2}}\|u_\epsilon\|_{H^1}. \]
[Vague] When applying the trace theorem, clearly specifying the normed spaces is crucial for ensuring the rigor and correctness of the derivation.
Korn's inequality implies $\|u_\epsilon\|_{H^1} \le C_K\|D(u_\epsilon)\|_{L^2}$. Substituting:
\[ \|D(u_\epsilon)\|_{L^2} \le \frac{C_T C_K}{2\mu_0}\|g\|_{H^{-1/2}}. \]
Thus, $\|u_\epsilon\|_{H^1} \le C\|g\|_{H^{-1/2}}$. Substituting back:
\[ \tilde\lambda\|\operatorname{div} u_\epsilon\|_{L^2(D_\epsilon)}^2 \le C\|g\|_{H^{-1/2}}^2. \]
Dividing by
https://arxiv.org/abs/2505.22451v1
λ̃ and taking the square root: ∥div u_ϵ∥_{L²(D_ϵ)} ≤ (C/√λ̃) ∥g∥_{H^{−1/2}}. [Correct] The derivation here is accurate. comment : None type: lemma Lemma 15. The pressure field p_ϵ in the Stokes inclusion of the original problem (9) satisfies the estimate: ∥p_ϵ + (λ̃+µ) div u_ϵ∥_{L²(D_ϵ)} ≤ (C/√λ̃) ∥g∥_{H^{−1/2}(∂Ω)}, where C is independent of λ̃. correctness : True Proof. **Step 1: Stress Tensor Comparison and Pressure Definition** In the inclusion domain D_ϵ, the stress tensor for problem (9) is: σ(u_ϵ) = λ̃ (div u_ϵ) I + 2µ̃ D(u_ϵ). Comparing with the stress tensor σ_lim = −p_lim I + 2µ̃ D(u_lim) of the limit problem (10), we see that λ̃ div u_ϵ approximates −p_lim as λ̃ → ∞. Thus, define the pressure in D_ϵ as: p_ϵ := −λ̃ div u_ϵ. **Step 2: Divergence Estimate via Energy Method** From the preceding lemma (validated by the reviewer), the divergence of u_ϵ satisfies: ∥div u_ϵ∥_{L²(D_ϵ)} ≤ (C/√λ̃) ∥g∥_{H^{−1/2}(∂Ω)}. This result follows from the energy estimate derived by testing the weak formulation with u_ϵ, applying Korn's inequality, and exploiting the high-contrast parameter λ̃. The trace theorem and the boundedness of σ(u_ϵ)n on ∂D_ϵ yield the scaling 1/√λ̃, not 1/λ̃. **Step 3: Pressure-Divergence Combination** Substituting p_ϵ = −λ̃ div u_ϵ into the target expression: p_ϵ + (λ̃+µ) div u_ϵ = µ div u_ϵ. Taking the L² norm and applying the divergence estimate: ∥p_ϵ + (λ̃+µ) div u_ϵ∥_{L²(D_ϵ)} = ∥µ div u_ϵ∥_{L²(D_ϵ)} ≤ (Cµ/√λ̃) ∥g∥_{H^{−1/2}(∂Ω)}. Absorbing µ into the constant C (which depends only on the domain geometry and the reference parameters) yields the refined estimate: ∥p_ϵ + (λ̃+µ) div u_ϵ∥_{L²(D_ϵ)} ≤ (C/√λ̃) ∥g∥_{H^{−1/2}(∂Ω)}. comment : None type: theorem Theorem 2. The problem is constructed as follows. A real pair (λ, µ) is called an admissible Lamé parameter pair if it satisfies: µ > 0 and dλ + 2µ > 0. (1) The static elasticity system (Lamé system) corresponding to the Lamé parameter pair (λ, µ) is: L_{λ,µ} u := µ∆u + (λ+µ)∇ div u, (2) where the displacement field is denoted as u = (u1, . . .
, ud), and its divergence is: divu=dX i=1∂ui ∂xi The admissibility condition ensures the ellipticity of the Lamé operator, and physical laws guarantee that natural materials always satisfy this condition. The Lamé operator can be expressed as ∇ ·σ(u), where the stress tensor is: σ(u) :=λ(∇ ·u)Id+ 2µD(u) (3) D(u) =1 2(∇+∇T)u=1 2(∂iuj+∂jui)ij (4) Here, Idis the identity matrix of order d, andDrepresents the symmetrized differential operator. The normal derivative (boundary traction) on the boundary of a region Eis defined as: ∂u ∂νλ,µ ∂E:=σ(u)N=λ(divu)N+ 2µD(u)N on∂E (5) where Nis the unit outward normal vector on the boundary. We use the standard Sobolev space H1(Ω)and its trace space H1 2(∂E)on the boundary, with the dual space H−1 2(∂E). Consider the space of rigid body motions in Rd: R:= r= (r1, . . . , r d)T:D(r) = 0 inRd (6) The dimension of Risd(d+1) 2, spanned by the following basis vectors: e1, . . . ,ed, xjei−xiej,for1≤i < j≤d where eidenotes the standard basis vector. These basis vectors are denoted as rj, j= 1, . . . ,d(d+1) 2. Define the space orthogonal to rigid body motions: H−1 2 R(∂Dϵ) :=n ϕ∈H−1(∂Dϵ) : (ϕ,r)(H1 2(∂Dϵi),H−1 2(∂Dϵi))= 0,∀r∈Rand1≤i≤No (7) Similarly, define H1 2 R(∂Dϵ)and boundary spaces. Consider the following transmission problem:   Lλ,µuϵ= 0 inΩ\Dϵ, Leλ,eµuϵ= 0 inDϵ, uϵ|−=uϵ|+and∂uϵ ∂ν(λ,eµ) −=∂uϵ ∂ν(λ,µ) +on∂Dϵ, ∂uϵ ∂ν(λ,µ) ∂Ω=g∈H−1 2 R(∂Ω)anduϵ|∂Ω∈H1 2 R(∂Ω).(8) This is equivalent to: (Lλ(x),µ(x)uϵ=∇ ·[λ(x)(∇ ·uϵ)Id+ 2µ(x)D(uϵ)] = 0 inΩ, ∂uϵ ∂νλ,µ ∂Ω=g∈H−1 2 R(∂Ω),uϵ|∂Ω∈H1 2 R(∂Ω),(9) where λ(x) =λIΩ\Dϵ+eλIDϵ,µ(x)
=µχΩ\Dϵ+eµχDϵ Suppose the solution of equation (9) above is uϵ. The existence and uniqueness of the solution uϵare guaranteed by the ellipticity condition (1) and the Lax-Milgram theorem. Finally, the homogenization needed to be solved is the following. When eλ→ ∞ andeµis fixed, the coupled system in the incompressible inclusion limit is obtained:   Lλ,µuϵ= 0 inΩ\Dϵ, Leµ(uϵ, pϵ) = 0 anddivuϵ= 0 inDϵ, uϵ|−=uϵ|+and∂(uϵ,pϵ) ∂ν(∞,eµ) −=∂uϵ ∂ν(λ,µ) +on∂Dϵ, ∂uϵ ∂ν(λ,µ) ∂Ω=g∈H−1 2 R(∂Ω)anduϵ|∂Ω∈H1 2 R(∂Ω),(10) 38 whereLeµ(uϵ, pϵ) =eµ∆uϵ+∇pϵrepresents the Stokes operator with viscosity coefficient eµ, and pϵ is the pressure field. Its normal derivative is defined as ∂(u, p) ∂ν(∞,µ) −:=pN+ 2µD(u)N. We suppose the solution of this limit problem (10) is ulim. Please prove that for some constant C, there holds ∥ulim−uϵ∥ ≤C eλ∥g∥H−1 2(∂Ω). correctness : True Proof. To establish the estimate ∥ulim−uϵ∥H1(Ω)≤C eλ∥g∥H−1 2(∂Ω), we proceed with an energy estimate comparing the solutions uϵandulim, leveraging the provided lemmas. [Correct] The agent has constructed the steps and methods of the proof. **Step 1: Define the Difference and Apply Variational Formulation** Letw=ulim−uϵ. Subtracting the variational equations for uϵ(original problem) and ulim(limit problem) tested with v=wyields: Z Ω\Dϵ λ(divw)2+ 2µ|D(w)|2 dx+Z Dϵh 2eµ|D(w)|2+eλ(divuϵ)2i dx=Z DϵRdx, where the residual term Rarises from the mismatch in the equations and pressures. Using Lemma 3, Ris bounded by C/eλ∥g∥2. [Correct] The result of the above lemma is applied here, and the proof process is accurate. This step is well completed. **Step 2: Utilize the Lemmas to Bound Residual Terms** From Lemma 2, ∥divuϵ∥L2(Dϵ)≤C/p eλ∥g∥, and Lemma 3 implies pϵ+ (eλ+µ)divuϵis bounded similarly. Substituting these into the residual term: Z Dϵeλ(divuϵ)2dx≤C∥g∥2, while the Stokes pressure plimis bounded by Lemma 1. The remaining terms involving D(w)are controlled by the ellipticity of the Lamé and Stokes systems. 
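The λ̃-scalings used in these steps can be sanity-checked in a minimal 1D analogue (a hypothetical scalar model, not the transmission problem (9); `div_norm_in_inclusion` is a made-up helper): with traction data, the stress σ = (λ(x)+2µ)u′ is constant, so inside the stiff inclusion div u = g/(λ̃+2µ). Its L² norm then decays linearly in 1/λ̃, consistent with, and in this toy even sharper than, the O(λ̃^{−1/2}) bound of Lemma 2:

```python
import numpy as np

# Minimal 1D sketch (illustration only, not the d-dimensional setting of (9)):
# traction boundary data makes sigma = (lam(x) + 2*mu) * u' constant, equal to
# g, so inside the stiff inclusion D = (0.25, 0.75) the divergence is
# div u = u' = g / (lam_tilde + 2*mu), which decays like 1/lam_tilde.
def div_norm_in_inclusion(lam_tilde, mu=1.0, g=1.0, n=4000):
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    lam = np.where((x > 0.25) & (x < 0.75), lam_tilde, 1.0)
    div_u = g / (lam + 2.0 * mu)            # u'(x), since sigma == g everywhere
    mask = (x > 0.25) & (x < 0.75)
    return np.sqrt(np.sum(div_u[mask] ** 2) * dx)

ratio = div_norm_in_inclusion(1e2) / div_norm_in_inclusion(1e4)
print(ratio)  # roughly 1e2: a 100x larger lam_tilde shrinks the norm ~100x
```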
**Step 3: Apply Korn’s Inequality and Ellipticity** Korn’s inequality ensures that ∥w∥H1(Ω)≤C ∥D(w)∥L2(Ω)+∥w∥L2(Ω) . The boundary condi- tions and orthogonality to rigid motions eliminate the L2term, yielding: ∥w∥H1(Ω)≤C∥D(w)∥L2(Ω). Combining this with the energy estimate and Lemma 2-3 results in: ∥w∥H1(Ω)≤C eλ∥g∥H−1 2(∂Ω). Thus, the desired estimate is proven. 39 [Vague] Finally, the model employs inequality techniques and the previously obtained conclusions to complete the final proof process. However, there are still some missing details in the proof here. comment : None type: conjecture Lemma 16. For the solution uϵof the original problem (9) with large eλ, the divergence in the inclusion region satisfies the uniform bound: ∥divuϵ∥L2(Dϵ)≤C eλ∥g∥H−1/2(∂Ω), where Cis independent of eλ. correctness : True Proof. **Step 1: Energy Estimate** Testing the variational formulation with v=uϵgives:Z Ωh eλχDϵ|divuϵ|2+ 2µ(x)|Duϵ|2i dx=Z ∂Ωg·uϵds. By Korn’s inequality and elliptic regularity, ∥uϵ∥H1(Ω)≤C∥g∥H−1/2(∂Ω). This implies: eλ∥divuϵ∥2 L2(Dϵ)+∥Duϵ∥2 L2(Ω)≤C∥g∥2 H−1/2. (1) [Correct] The agent considers variational formulation method to get the energy estimate and uses some inequality skills to deduce the result. **Step 2: Global Duality Argument** Let ϕ∈H1 0(Ω)solve the Poisson equation: −∆ϕ=χDϵdivuϵinΩ, ϕ = 0on∂Ω. By elliptic regularity, ∥∇ϕ∥H1(Ω)≤C∥divuϵ∥L2(Dϵ). Define v=∇ϕ∈H1(Ω), which inherits regularity from ϕ∈H2(Ω). [Vague] Here the agent considers the elliptic regularity, but it is better to have a more detailed specific process. **Step 3: Corrected Test with v=∇ϕ** Substituting
v into the variational equation: −λ̃ ∫_{D_ϵ} (div u_ϵ)² dx + 2 ∫_Ω µ(x) Du_ϵ : D(∇ϕ) dx = ∫_{∂Ω} g·∇ϕ ds. (2) Rearranging and taking absolute values: λ̃ ∥div u_ϵ∥²_{L²(D_ϵ)} ≤ |2 ∫_Ω µ Du_ϵ : D(∇ϕ) dx| + |∫_{∂Ω} g·∇ϕ ds|. Using Hölder's inequality and trace duality: |2 ∫_Ω µ Du_ϵ : D(∇ϕ) dx| ≤ C ∥g∥_{H^{−1/2}} ∥div u_ϵ∥_{L²(D_ϵ)} and |∫_{∂Ω} g·∇ϕ ds| ≤ C ∥g∥_{H^{−1/2}} ∥div u_ϵ∥_{L²(D_ϵ)}. **Step 4: Final Estimate** Combining these bounds: λ̃ ∥div u_ϵ∥²_{L²(D_ϵ)} ≤ 2C ∥g∥_{H^{−1/2}} ∥div u_ϵ∥_{L²(D_ϵ)}. Dividing by ∥div u_ϵ∥_{L²(D_ϵ)} yields: ∥div u_ϵ∥_{L²(D_ϵ)} ≤ (2C/λ̃) ∥g∥_{H^{−1/2}}, completing the proof with a modified constant C. [Correct] Inequality techniques are used to derive the key conclusions from the variational equations. comment : None type: conjecture Lemma 17. The pressure p_ϵ = −λ̃ div u_ϵ in D_ϵ satisfies: ∥p_ϵ∥_{L²(D_ϵ)} ≤ C ∥g∥_{H^{−1/2}(∂Ω)}. correctness : True Proof. From the definition p_ϵ = −λ̃ div u_ϵ and the previous conjecture: ∥p_ϵ∥_{L²(D_ϵ)} = λ̃ ∥div u_ϵ∥_{L²(D_ϵ)} ≤ C ∥g∥_{H^{−1/2}}. The boundedness follows from the λ̃ factor being compensated by the decay of div u_ϵ. [Correct] Based on the lemma above, a norm bound for the pressure term is obtained. comment : None type: conjecture Lemma 18. The difference w_ϵ = u_ϵ − u_lim between the solutions of the original problem (9) and the limit problem (10) satisfies the energy estimate: ∫_Ω (λ(x)|div w_ϵ|² + 2µ(x)|D(w_ϵ)|²) dx ≤ (C/λ̃) ∥g∥²_{H^{−1/2}(∂Ω)}, where C is independent of λ̃. Consequently, by Korn's inequality, ∥w_ϵ∥_{H¹(Ω)} ≤ (C/√λ̃) ∥g∥_{H^{−1/2}(∂Ω)}. correctness : True Proof. **Step 1: Variational Formulations** The original problem (9) has the weak form: find u_ϵ ∈ H¹(Ω) such that for all v ∈ H¹(Ω), ∫_Ω [λ(x) div u_ϵ div v + 2µ(x) D(u_ϵ):D(v)] dx = ∫_{∂Ω} g·v ds. [Correct] The weak form of the equation is considered. The limit problem (10) is a saddle-point problem: find (u_lim, p_lim) ∈ H¹(Ω) × L²(D_ϵ) such that for all v ∈ H¹(Ω) and q ∈ L²(D_ϵ), ∫_{Ω\D_ϵ} [λ div u_lim div v + 2µ D(u_lim):D(v)] dx + ∫_{D_ϵ} [2µ̃ D(u_lim):D(v) − p_lim div v] dx = ∫_{∂Ω} g·v ds and ∫_{D_ϵ} q div u_lim dx = 0. [Vague] Although the limit problem is formulated, the existence of its solution and the solution's properties are not derived in detail.
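The saddle-point formulation in Step 1 has the classical block structure [A Bᵀ; B 0], with the pressure acting as a Lagrange multiplier for the divergence constraint. A tiny synthetic linear-algebra sketch (random matrices standing in for the discretized forms; nothing here comes from problems (9)-(10)) shows that solving the coupled system enforces the constraint exactly:

```python
import numpy as np

# Generic saddle-point system  [A  B^T; B  0] [u; p] = [f; 0]:
# A is SPD (standing in for the elastic/viscous form), B encodes the
# divergence constraint, p is the Lagrange multiplier ("pressure").
# Sizes and entries are synthetic, chosen only for illustration.
rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD "stiffness" block
B = rng.standard_normal((m, n))      # full-rank constraint (inf-sup analogue)
f = rng.standard_normal(n)

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
up = np.linalg.solve(K, np.concatenate([f, np.zeros(m)]))
u, p = up[:n], up[n:]
print(np.allclose(B @ u, 0.0))       # the constraint B u = 0 holds exactly
```

The inf-sup condition discussed later (Lemma 30) is the continuous analogue of B having full rank here: it is what makes the multiplier p unique and stably bounded.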
**Step 2: Subtract the Weak Forms** Let wϵ=uϵ−ulim. Subtract the weak form of the limit problem from the original problem, tested with v=wϵ: Z Ω\Dϵ λ(|divwϵ|2+ 2µ|D(wϵ)|2) dx+Z Dϵh eλ|divuϵ|2+ 2eµ|D(wϵ)|2i dx=Z Dϵplimdivwϵdx. Since divulim= 0inDϵ, substitute divwϵ= div uϵinDϵ: Z Dϵplimdivwϵdx=Z Dϵplimdivuϵdx. [Correct] According to the above weakly form and the setting condition of the problem, the model correctly obtains the above key conclusion. 41 **Step 3: Bound plim** From the limit problem’s variational equation, choose test functions v∈ H1 0(Dϵ)satisfying divv=plimwith∥v∥H1(Dϵ)≤C∥plim∥L2(Dϵ)(by the inf-sup condition). Substituting vinto the limit problem and using Korn’s inequality on ulim, we derive: ∥plim∥L2(Dϵ)≤C∥g∥H−1/2(∂Ω). By Lemma 1, ∥divuϵ∥L2(Dϵ)≤C eλ∥g∥H−1/2(∂Ω). Applying Cauchy-Schwarz: Z Dϵplimdivuϵdx ≤ ∥plim∥L2(Dϵ)∥divuϵ∥L2(Dϵ)≤C eλ∥g∥2 H−1/2(∂Ω). [Correct] The model correctly applied the above conclusions and mathematical techniques. **Step 4: Energy Estimate** Combining results: Z Ω λ(x)|divwϵ|2+ 2µ(x)|D(wϵ)|2 dx≤C eλ∥g∥2 H−1/2(∂Ω). **Step 5: Apply Korn’s Inequality** Using Korn’s inequality (applicable due to boundary conditions suppressing rigid motions) and the energy estimate: ∥wϵ∥2 H1(Ω)≤CZ Ω|D(wϵ)|2dx≤C µ0Z Ω2µ(x)|D(wϵ)|2dx≤C µ0eλ∥g∥2 H−1/2(∂Ω), where µ0= inf Ωµ(x)>0. Taking square roots: ∥wϵ∥H1(Ω)≤Cp eλ∥g∥H−1/2(∂Ω). [Correct] Finally, the final result is obtained through the application of Korn’s inequality and finish this proof. comment : None type: conjecture Lemma 19. The difference wϵ=ulim−uϵbetween the solutions of the limit problem (10) and the original problem (9) satisfies the energy estimate: Z Ω λ(x)|divwϵ|2+ 2µ(x)|D(wϵ)|2 dx≤C eλ∥g∥2 H−1/2(∂Ω), leading to ∥wϵ∥H1(Ω)≤C√ eλ∥g∥H−1/2(∂Ω). correctness : True [Correct] This is the same conclusion as the former lemma. Proof. 1. **Variational Equations**: Let uϵsolve (9) and ulimsolve (10). For any v∈H1(Ω): Z Ω[λdivuϵdivv+ 2µD(uϵ) :D(v)]dx=Z ∂Ωg·vds, Z
Ω[λdivulimdivv+ 2µD(ulim) :D(v)]dx+Z Dϵplimdivvdx=Z ∂Ωg·vds. 2. **Subtraction**: Taking v=wϵ=ulim−uϵ: Z Ω λ|divwϵ|2+ 2µ|D(wϵ)|2 dx=Z Dϵ(plim+eλdivuϵ) divwϵdx. 42 3. **Pressure Analysis**: By Lemma 2, ∥eλdivuϵ∥L2(Dϵ)≤C∥g∥H−1/2. From Stokes regularity, ∥plim∥L2(Dϵ)≤C∥g∥H−1/2. Thus: ∥plim+eλdivuϵ∥L2(Dϵ)≤C∥g∥H−1/2. 4. **Cauchy-Schwarz Inequality**: Z Dϵ(plim+eλdivuϵ) divwϵdx ≤C∥g∥H−1/2∥divwϵ∥L2(Dϵ). 5. **Divergence Bound**: From Lemma 1, ∥divwϵ∥L2(Dϵ)≤ ∥divuϵ∥L2(Dϵ)≤Ceλ−1∥g∥H−1/2. Substituting: RHS≤C∥g∥H−1/2·(Ceλ−1∥g∥H−1/2) =C eλ∥g∥2 H−1/2. 6. **Korn’s Inequality**: The left-hand side is coercive via Korn’s inequality: ∥wϵ∥2 H1(Ω)≤CZ Ω λ|divwϵ|2+ 2µ|D(wϵ)|2 dx≤C eλ∥g∥2 H−1/2, yielding ∥wϵ∥H1(Ω)≤Ceλ−1/2∥g∥H−1/2. comment : None type: conjecture Lemma 20. The difference wε=ulim−uεsatisfies the L2(Ω)error estimate: ∥wε∥L2(Ω)≤Cp eλ∥g∥H−1/2(∂Ω), where Cis independent of eλ. correctness : True Proof. **Revised Proof:** Letwε=ulim−uε. Consider the adjoint problem for the coupled Lamé-Stokes system: For f∈L2(Ω), findz∈H1(Ω)and pressure q∈L2(Dε)satisfying   Lλ,µz=f inΩ\Dε, −eµ∆z+∇q=f,divz= 0 inDε, z|−=z|+, eµ∂z ∂ν−qν −=∂z ∂ν(λ,µ) +on∂Dε, ∂z ∂νλ,µ ∂Ω= 0,z|∂Ω∈H1/2 R(∂Ω). By elliptic regularity, z∈H2(Ω)with: ∥z∥H2(Ω)+∥q∥H1(Dε)≤C∥f∥L2(Ω). Using the duality argument, we test wεagainst z: Z Ωwε·fdx=B(wε,z)−Z Dεqdivwεdx, where B(·,·)is the Lamé bilinear form. Since ulimis incompressible in Dε,divwε=−divuε. From Lemma 1: ∥divuε∥L2(Dε)≤C eλ∥g∥H−1/2(∂Ω). [Vague] Here the elliptic regularity and the bilinear form should have the specific definition and form. Therefore, in mathematical proofs, this is not rigorous enough. 43 **Pressure Term:** Substitute pε=−eλdivuε(Lemma 2) and use Hölder’s inequality: Z Dεqdivwεdx = Z Dεqdivuεdx ≤1 eλ∥q∥L2(Dε)∥pε∥L2(Dε)≤C eλ∥f∥L2(Ω)∥g∥H−1/2(∂Ω). **Bilinear Form:** From Lemma 4, ∥wε∥H1(Ω)≤C√ eλ∥g∥H−1/2(∂Ω). Using continuity of B(·,·)in H1(Ω): |B(wε,z)| ≤C∥wε∥H1(Ω)∥z∥H1(Ω)≤Cp eλ∥g∥H−1/2(∂Ω)∥f∥L2(Ω). 
[Vague] The same as the former, it should show the continuity of the bilinear form B(·,·)to get this conclusion. Combining both terms: Z Ωwε·fdx ≤ Cp eλ+C eλ! ∥g∥H−1/2(∂Ω)∥f∥L2(Ω). Foreλ≫1, the dominant term is O(1/p eλ). Choosing f=wεand applying duality: ∥wε∥2 L2(Ω)≤Cp eλ∥g∥H−1/2(∂Ω)∥wε∥L2(Ω), which simplifies to: ∥wε∥L2(Ω)≤Cp eλ∥g∥H−1/2(∂Ω). [Correct] Here, the model correctly applies mathematical techniques and comes to the right conclusions. And we can see the clear mathematical thought here. comment : None type: conjecture Lemma 21. The difference wε=ulim−uεbetween the solutions of the limit problem (10) and the original problem (9) satisfies the improved energy estimate: ∥wε∥H1(Ω)≤Cp eλ∥g∥H−1/2(∂Ω), where Cis independent of eλ. correctness : True Proof. 1. **Energy Identity**: Subtract the weak formulations of (9) and (10) and test with wε. This gives: Z Ω λ(x)|divwε|2+ 2µ(x)|D(wε)|2 dx+Z Dεeλ(divuε)(divwε)dx= 0. 2. **Divergence Relationship**: Since ulimis incompressible in Dε(from (10)), we have divwε= −divuεinDε. Substitute this into the energy identity: Z Ω λ(x)|divwε|2+ 2µ(x)|D(wε)|2 dx=eλ∥divuε∥2 L2(Dε). 3. **Lemma 1 Application**: By Lemma 1, ∥divuε∥L2(Dε)≤C eλ∥g∥H−1/2(∂Ω). Thus: Energy integral ≤C eλ∥g∥2 H−1/2(∂Ω). 44 4. **Korn’s Inequality**: Apply Korn’s inequality to wε. Under boundary conditions orthogonal to rigid motions, there exists CK>0such that: ∥wε∥2 H1(Ω)≤CKZ Ω |D(wε)|2+|divwε|2 dx. Using the energy bound: ∥wε∥2 H1(Ω)≤CKC eλ∥g∥2 H−1/2(∂Ω). Taking square roots completes the proof: ∥wε∥H1(Ω)≤Cp eλ∥g∥H−1/2(∂Ω). comment : None type: conjecture Lemma 22. The difference wε=ulim−uεbetween the solutions of the limit problem (10) and the original problem (9) satisfies the homogenized error estimate in the energy norm: Z Ω λ(x)|divwε|2+ 2µ(x)|D(wε)|2 dx1/2 ≤Cp eλ∥g∥H−1/2(∂Ω), where Cis independent of eλ. Consequently, in the natural energy norm associated with the Lamé system, the error is of order O(eλ−1/2). correctness : True Proof. 
**Step 1: Variational Formulations and Key Bounds** The original problem (9) and limit problem
(10) yield solutions uεandulim, respectively. By Lemma 2, the divergence in Dεsatisfies: ∥divuε∥L2(Dε)≤C eλ∥g∥H−1/2(∂Ω). The pressure pε=−eλdivuεis bounded in L2(Dε)by Lemma 2. **Step 2: Energy Norm Decomposition** Define wε=ulim−uε. Let λ(x) =eλinDεandλ(x) =λ0(fixed) in Ω\Dε. The energy norm splits as:Z Ωλ(x)|divwε|2dx=Z Dεeλ|divwε|2dx | {z } I1+Z Ω\Dελ0|divwε|2dx | {z } I2. InDε,divwε=−divuε(since ulimis divergence-free there). Thus: I1=eλ∥divuε∥2 L2(Dε)≤C eλ∥g∥2 H−1/2(∂Ω). ForI2, use the H1-bound from Lemma 4: I2≤λ0∥divwε∥2 L2(Ω)≤λ0C∥wε∥2 H1(Ω)≤C eλ∥g∥2 H−1/2(∂Ω), where λ0is absorbed into C. [Correct] The energy terms are clearly split and the inequality technique is correctly applied to control the norms of the divergence of error. **Step 3: Symmetric Gradient via Korn’s Inequality** 45 By Lemma 4 and Korn’s inequality: Z Ω2µ|D(wε)|2dx≤C∥wε∥2 H1(Ω)≤C eλ∥g∥2 H−1/2(∂Ω). **Step 4: Final Energy Estimate** Combining all contributions: Z Ω λ|divwε|2+ 2µ|D(wε)|2 dx≤C eλ∥g∥2+C eλ∥g∥2=C eλ∥g∥2. Taking the square root confirms: Z Ω λ|divwε|2+ 2µ|D(wε)|2 dx1/2 ≤Cp eλ∥g∥H−1/2(∂Ω). The error is rigorously shown to be O(eλ−1/2)in the energy norm. [Vague] Here the Korn’s inequality is considered again, proving the final conclusion. But there is a stronger conclusion to be drawn here, but our agent is not aware of that. comment : None type: conjecture Lemma 23. Under the given assumptions, the difference wε=ulim−uεbetween the solutions of the limit problem (10) and the original problem (9) satisfies the homogenized error estimate in the energy norm weighted by the Lamé parameters: Z Ω λ(x)|divwε|2+ 2µ(x)|D(wε)|2 dx1/2 ≤Cp eλ∥g∥H−1 2(∂Ω), where Cis independent of eλ. This implies that the error in the energy norm decays proportionally to eλ−1 2. correctness : True Proof. To establish the energy estimate for wε, consider the variational formulations of problems (9) and (10). 
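The coercivity underlying both variational formulations rests on the pointwise ellipticity of the Lamé quadratic form: writing Q(E) = 2µ E:E + λ(tr E)² and splitting a symmetric E into its deviatoric and trace parts gives Q(E) ≥ min{2µ, dλ+2µ}|E|² whenever µ > 0 and dλ + 2µ > 0, the admissibility condition of Theorem 2. A quick numerical spot-check with illustrative parameter values (λ negative but admissible):

```python
import numpy as np

# Spot-check (illustration only) of the pointwise ellipticity bound
# Q(E) = 2*mu*|E|^2 + lam*(tr E)^2 >= c0*|E|^2, c0 = min(2*mu, d*lam + 2*mu),
# for symmetric E, valid whenever mu > 0 and d*lam + 2*mu > 0.
rng = np.random.default_rng(0)
d, mu, lam = 3, 1.0, -0.5            # admissible: d*lam + 2*mu = 0.5 > 0
c0 = min(2 * mu, d * lam + 2 * mu)
worst = np.inf
for _ in range(1000):
    A = rng.standard_normal((d, d))
    E = (A + A.T) / 2                # random symmetric matrix
    Q = 2 * mu * np.sum(E**2) + lam * np.trace(E)**2
    worst = min(worst, Q - c0 * np.sum(E**2))
print(worst >= -1e-12)               # the bound holds for every sample
```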
For any test function v∈H1(Ω), the original problem satisfies: Z Ω[λ(x) divuεdivv+ 2µ(x)D(uε) :D(v)]dx=⟨g,v⟩∂Ω, while the limit problem satisfies: Z Ω[λ(x) divulimdivv+ 2µ(x)D(ulim) :D(v)]dx+Z Dεpdivvdx=⟨g,v⟩∂Ω, where p∈L2(Dε)is the Lagrange multiplier enforcing the incompressibility constraint ( divulim= 0 inDε). Subtracting these equations and letting v=wε=ulim−uε, we obtain: Z Ω λ(x)|divwε|2+ 2µ(x)|D(wε)|2 dx+Z Dεpdivwεdx= 0. InDε,divulim= 0, sodivwε=−divuε. Substituting pε=−eλdivuε(from the original problem’s constitutive law) into the pressure term: Z Dεpdivwεdx=−Z Dεpdivuεdx=1 eλZ Dεp pεdx. 46 By the Cauchy-Schwarz inequality and Lemma 2 ( ∥pε∥L2(Dε)≤C∥g∥H−1/2(∂Ω)), along with the stability of the limit problem ( ∥p∥L2(Dε)≤C∥g∥H−1/2(∂Ω)via the inf-sup condition), we obtain: 1 eλZ Dεp pεdx≤C eλ∥p∥L2(Dε)∥pε∥L2(Dε)≤C eλ∥g∥2 H−1/2(∂Ω). Thus, the energy integral satisfies: Z Ω λ(x)|divwε|2+ 2µ(x)|D(wε)|2 dx≤C eλ∥g∥2 H−1/2(∂Ω). Taking the square root yields the refined estimate: Z Ωλ(x)|divwε|2+ 2µ(x)|D(wε)|2dx1/2 ≤Cp eλ∥g∥H−1/2(∂Ω), confirming the eλ−1 2decay rate in the energy norm. [Correct] The proof of the final conclusion is a detailed proof, correct mathematical derivation and analysis. According to the above conclusions, the error control in global space is obtained, and the results are correct and accurate. [Error] But this conclusion is not strong enough, and this is not the needed result. comment : None 47 DProof of High Contrast Limit Generated by AIM Based on OpenAI o4-mini type: assumption content : Consider a region D⊂Ω⊂Rd(d≥2), where Ωrepresents the domain of an elastic material, and Dis the inclusion region with high contrast. type: assumption content : 1.Ωis a bounded
open domain with a connected Lipschitz boundary ∂Ω. 2. D is an open domain with a finite number of connected components, each having a Lipschitz boundary ∂D. 3. Ω\D is connected and has a Lipschitz boundary ∂Ω∪∂D. The connected components of D are denoted D_i, i = 1, . . . , N, where N is finite. type: assumption content : Given ε ∈ (0,1), D = D_ε consists of small inclusions arranged in an ε-periodic pattern. The construction steps are as follows: 1. Take the unit cell Y = (−1/2, 1/2)^d. 2. ω ⊂ Y is a simply connected open subset with a Lipschitz boundary and satisfies dist(ω, ∂Y) > 0. Here, Y_f = Y\ω represents the model environment at the unit scale. 3. For a given ε > 0 and n ∈ Z^d, denote ε(n+Y) and ε(n+ω) as Y^n_ε and ω^n_ε, respectively. Define the lattice index set Π_ε as the set of indices satisfying Y^n_ε ⊂ Ω: Π_ε := {n ∈ Z^d : Y^n_ε ⊂ Ω}. The inclusion region D = D_ε and the matrix region Ω_ε are defined as: D_ε := ⋃_{n∈Π_ε} ω^n_ε, Ω_ε := Ω\D_ε. For each fixed ε, the number of connected components of D_ε is N = |Π_ε|. 4. Define Y_ε as the union of all ε-cells in Ω, and K_ε as the buffer region: K_ε = Ω \ (⋃_{n∈Π_ε} Y^n_ε), Y_ε = Ω\K_ε. For D_ε constructed according to Assumption 2, it can be verified that all conditions of Assumption 1 are satisfied. type: lemma Lemma 24. (Energy estimate for the difference in the matrix region.) Let u_lim solve the limit (Stokes) problem in D_ϵ and the elasticity problem in Ω_ϵ, and let u_ϵ solve the full Lamé problem with large first Lamé parameter λ̃ ≫ 1 in D_ϵ. Then the "error" w := u_ϵ − u_lim satisfies in Ω_ϵ the homogeneous Lamé system with zero Neumann data on ∂Ω and a "jump-residue" source on ∂D_ϵ, and one has the a priori bound ∥w∥_{H¹(Ω_ϵ)} ≤ C ∥[traction jump]∥_{H^{−1/2}(∂D_ϵ)}. The constant C depends only on the background Lamé parameters (λ, µ) and the geometry, not on ϵ or λ̃. correctness : True Proof. 1. (Equation for w.) By subtracting the two variational formulations, one finds that w ∈ H¹(Ω_ϵ) satisfies ∫_{Ω_ϵ} (2µ D(w):D(φ) + (λ+µ) div w · div φ) dx = −⟨[traction jump], φ⟩_{∂D_ϵ} for all φ ∈ H¹(Ω_ϵ) vanishing on ∂Ω. 2. (Coercivity.) By the uniform Korn inequality (Conjecture 1) and the ellipticity dλ + 2µ > 0, we conclude that the bilinear form on the left-hand side is coercive on H¹_0(Ω_ϵ), namely a(w, w) ≥ c_0 ∥w∥²_{H¹(Ω_ϵ)}. 3. (Duality bound.) The right-hand side is bounded by the standard trace inequality: |⟨[jump], w⟩| ≤ ∥[jump]∥_{H^{−1/2}(∂D_ϵ)} ∥w∥_{H^{1/2}(∂D_ϵ)} ≤ C ∥[jump]∥_{H^{−1/2}(∂D_ϵ)} ∥w∥_{H¹(Ω_ϵ)}. 4. (Conclusion.)
By Lax–Milgram, c0∥w∥2 H1(Ωe)≤C∥[jump]∥H−1/2(∂De)∥w∥H1(Ωe), so that ∥w∥H1(Ωe)≤(C/c0)∥[jump]∥H−1/2(∂De), as claimed. 49 comment : None type: lemma Lemma 25. (Uniform Coercivity of the Elasticity Bilinear Form) Let (λ, µ)satisfy the admissibility λ+2µ d>0,µ >0. For each perforated domain Ωεas above, define the bilinear form on Vεby aε(u, v) :=Z Ωε[2µD(u) :D(v) +λ(divu)(div v)]dx. Then there exists α >0, independent of ε, such that for all u∈Vε, aε(u, u)≥α∥u∥2 H1(Ωε). [Vague] Here the agent gives the energy estimate of the error, but the process form in the proof is not detailed. correctness : True Proof. Proof. We combine the uniform Korn inequality just proved with the ellipticity of the Lamé form. 1. Ellipticity on Symmetric Gradients. By the admissibility condition ( dλ+ 2µ >0,µ >0), the quadratic form on symmetric matrices E∈Rd×d sym, Q(E) := 2 µ E:E+λ(trE)2 satisfies the pointwise bound Q(E)≥c0|E|2, where one checks easily that c0= min {2µ, dλ + 2µ}>0. 2. Lower Bound on aε(u, u). Hence for any u∈H1(Ωε;Rd), aε(u, u) =Z ΩεQ(D(u))dx≥c0Z Ωε|D(u)|2dx. 3. Apply Korn–Poincaré. By Conjecture 1 above, for all u∈Vε, ∥u∥2 H1(Ωε)≤C2∥D(u)∥2 L2(Ωε), withCindependent of ε. Hence aε(u, u)≥c0∥D(u)∥2 L2(Ωε)≥(c0/C2)∥u∥2 H1(Ωε). 4. Set α=c0/C2. This αdepends only on µ,λand the geometry (through C), but is uniform in ε. Thus
the form is uniformly coercive on Vε. [Vague] The conclusion here is basically correct, but the process requires more proof details, which are not given here. comment : None type: lemma Lemma 26. (Uniform Korn Inequality on Periodically Perforated Domains, corrected) Let Ω⊂Rd (d≥2) be a bounded connected Lipschitz domain. Fix a simply-connected Lipschitz inclusion ω⊂(−1 2,1 2)dwithdist(ω, ∂(−1 2,1 2)d)>0, and for ε∈(0,1)letDε⊂Ωbe the ε–periodic union of translated copies of ε·ωas in Assumption 2. Assume in particular that each hole ε(ω+Zd∩Ω)is at distance ≥cεfrom∂Ωand from its neighbors, and that Ωε:= Ω\Dεis connected and Lipschitz with constants independent of ε. Define Vε:= u∈H1(Ωε;Rd) :Z ∂Ωu·r= 0for every rigid motion r . Then there exists a constant C >0, independent of ε∈(0,1), such that for all u∈Vε ∥u∥H1(Ωε)≤C∥D(u)∥L2(Ωε), where D(u) =∇u+(∇u)⊤ 2is the symmetrized gradient. 50 correctness : True Proof. Proof. We follow the standard three-step strategy of “uniform extension + Korn on the fixed domain + pull-back,” paying attention to the correct directions of the estimates. 1. Uniform extension from ΩεtoΩ. By the periodicity, the uniform separation of holes, and the Lipschitz regularity of ∂Ω, one constructs (e.g. via the methods of Oleinik–Shamaev–Yosifian or Cioranescu–Donato) a linear extension operator Eε:H1(Ωε;Rd)→H1(Ω;Rd) such that for some constant Cextindependent of ε, and for all u∈H1(Ωε;Rd), ∥Eεu∥L2(Ω)≤Cext∥u∥L2(Ωε), ∥∇Eεu∥L2(Ω)≤Cext∥∇u∥L2(Ωε), hence also ∥D(Eεu)∥L2(Ω)≤Cext∥D(u)∥L2(Ωε). Moreover one arranges Eεso that it preserves the boundary traces on ∂Ω, hence if u∈Vεthen U:=Eεulies in V:={U∈H1(Ω;Rd) :R ∂ΩU·r= 0∀rigid motions r}. [Correct] Here the agent correctly applied the extension theorem. [Vague] For this problem, this weak solution function space may not be accurate. 2. Korn’s inequality on the fixed domain Ω. 
Since Ωis a fixed Lipschitz domain, the classical Korn inequality on the rigid-motion-free subspace Vgives a constant CKornindependent of εsuch that for every U∈V, ∥U∥H1(Ω)≤CKorn∥D(U)∥L2(Ω). 3. Conclusion on Ωε. Take any u∈Vεand set U:=Eεu∈V. Then by Step 2, ∥U∥H1(Ω)≤CKorn∥D(U)∥L2(Ω). On the other hand, by construction of Eε, ∥D(U)∥L2(Ω)≤Cext∥D(u)∥L2(Ωε). Finally, since uis the restriction of UtoΩε, we trivially have ∥u∥H1(Ωε)≤ ∥U∥H1(Ω). Chaining these three inequalities yields ∥u∥H1(Ωε)≤ ∥U∥H1(Ω)≤CKorn∥D(U)∥L2(Ω)≤CKornCext∥D(u)∥L2(Ωε). Hence the desired estimate holds with C:=CKorn·Cext, which is independent of ε. [Correct] Here, the agent correctly employed mathematical inequality techniques to complete the derivation. comment : None type: lemma Lemma 27. LetD⊂Rdbe a bounded Lipschitz domain and let (˜λ,˜µ)satisfy ˜µ >0and˜λ+2˜µ/d > 0. OnDconsider (1) the compressible-elastic system ˜µ∆ue+ (˜λ+ ˜µ)∇(divue) = 0 , (2) the incompressible Stokes system ˜µ∆ulim+∇plim= 0,divulim= 0, both with the same Dirichlet boundary trace on ∂D. Define w:=ue−ulim, φ := div ue. Then the following exact energy identity holds: ˜µZ D|∇w|2dx+ (˜λ+ ˜µ)Z D|φ|2dx=Z Dplimφ dx. 51 correctness : True Proof. Proof. Since ueandulimshare the same trace on ∂D,wvanishes on ∂D. Subtracting the Stokes equation from the elasticity equation gives in D ˜µ∆w+ (˜λ+ ˜µ)∇(divue)− ∇plim= 0. Butdivue=φ, so (∗) ˜µ∆w+ (˜λ+ ˜µ)∇φ=∇plim. We now take the L2(D;Rd)inner-product of (∗)withwand integrate by parts. All boundary terms vanish because w|∂D= 0. Thus R D˜µ∆w·w dx +R D(˜λ+ ˜µ)∇φ·w dx =R D∇plim·w dx . We handle each term in turn. 1. By integration by parts and w|∂D= 0,R D˜µ∆w·w dx =−˜µR
D∇w:∇w dx = −˜µR D|∇w|2dx. 2. Again integrating by parts,R D(˜λ+ ˜µ)∇φ·w dx =−(˜λ+ ˜µ)R Dφdivw dx =−(˜λ+ ˜µ)R Dφ·φ dx =−(˜λ+ ˜µ)R D|φ|2dx, since divw= div ue−divulim=φ−0. 3. Finally,R D∇plim·w dx =−R Dplimdivw dx =−R Dplimφ dx . Putting these together, the left-hand side becomes −˜µR D|∇w|2−(˜λ+ ˜µ)R D|φ|2and the right- hand side is −R Dplimφ. Multiplying by −1yields exactly ˜µR D|∇w|2dx+ (˜λ+ ˜µ)R D|φ|2dx=R Dplimφ dx , which is the desired energy identity. [Correct] Here’s the accurate derivation process. comment : None type: lemma Lemma 28. Rate-of-Convergence Estimate inside the Inclusion Under the same hypotheses as above, define wandφas in the previous conjecture. Then there is a constant C(depending only on d,˜µand the geometry of D) such that ∥divue∥L2(D)≤C˜λ−1∥plim∥L2(D), ∥w∥H1(D)≤C˜λ−1/2∥plim∥L2(D). In particular both divue→0andue→uliminH1(D)as˜λ→ ∞ , at the above rates. correctness : True Proof. Proof. Starting from the energy identity proved above, 2˜µZ D|D(w)|2+˜λZ D|φ|2=Z Dplimφ. We estimate the right-hand side by Cauchy–Schwarz and then Young’s inequality: for any ε >0, Z Dplimφ ≤ ∥plim∥L2(D)∥φ∥L2(D)≤ε∥φ∥2 L2(D)+ε−1∥plim∥2 L2(D) 2. Choose ε=˜λ. ThenZ Dplimφ≤˜λ∥φ∥2+˜λ−1∥plim∥2 2. Substitute into the energy identity: 2˜µ∥D(w)∥2 L2(D)+˜λ∥φ∥2 L2(D)≤˜λ∥φ∥2 L2(D)+˜λ−1∥plim∥2 L2(D) 2. 52 Rearrange: 2˜µ∥D(w)∥2+˜λ 2∥φ∥2≤˜λ−1 2∥plim∥2. In particular, ˜λ∥φ∥2≤˜λ−1∥plim∥2=⇒ ∥φ∥L2(D)≤˜λ−1∥plim∥L2(D). Also, ∥D(w)∥2≤1 4˜µ˜λ−1∥plim∥2=⇒ ∥D(w)∥L2(D)≤C˜λ−1/2∥plim∥L2(D). Finally, by Korn’s inequality on Done has ∥w∥H1(D)≤C(∥D(w)∥L2(D)+∥w∥L2(D)). Butwvanishes on ∂D, so Poincaré’s inequality gives ∥w∥L2(D)≤C∥∇w∥hence also ≤C∥D(w)∥. Altogether, ∥w∥H1(D)≤C∥D(w)∥L2(D)≤C˜λ−1/2∥plim∥L2(D). This completes the proof of the convergence rates. [Correct] Here is the proof of the Poincare inequality in the spatial setting of our problem, which is basically accurate. And the derivation process presented herein is correct. comment : None type: lemma Lemma 29. 
(Neumann–Trace Continuity on a Lipschitz Domain) Let D⊂Rdbe a bounded Lipschitz domain, d≥2, with outward unit normal N. For any u∈H1(D;Rd), define the stress tensor σ(u) =λ(divu)I+ 2µD(u), where D(u) = (∇u+ (∇u)⊤)/2. Then the boundary traction t:=σ(u)Nis a well-defined element of H−1 2(∂D;Rd), and there exists a constant C(depending only on dand the Lipschitz character of D) such that ∥t∥H−1 2(∂D)≤C λ∥divu∥L2(D)+µ∥∇u∥L2(D) . correctness : True Proof. We prove continuity of the Neumann trace via duality and integration by parts. 1. Preliminaries. Let V=Dand set V∗=H−1(D). The trace space H1/2(∂D)is the restriction of H1(D)and its dual is H−1/2(∂D). We will show that the linear functional L:H1/2(∂D;Rd)→R, L(φ) :=Z ∂D(σ(u)N)·φ dS extends continuously with the stated bound. 2. Extension of boundary data. Given φ∈H1/2(∂D;Rd), there exists an extension Φ∈H1(D;Rd) such that Φ|∂D=φand ∥Φ∥H1(D)≤C1∥φ∥H1/2(∂D), where C1depends only on the Lipschitz character of D(standard trace-extension theorem). [Vague] Here, more detailed explanation is required for the application of boundary properties and theorems. 3. Integration by parts. Since u∈H1(D;Rd)andΦ∈H1(D;Rd), both vanish of course for test supports in D, we may integrate by parts in the weak sense: Z ∂D(σ(u)N)·φ dS =Z ∂Dσ(u) : [Φ⊗N]dS =Z D∇ ·σ(u)·Φdx+Z Dσ(u) :∇Φdx, 53 where we used the identity div(σ(u))·Φ +σ(u) :∇Φ = div( σ(u) Φ). In fact uhas only H1 regularity, so both integrals are interpreted distributionally. Since σ(u)∈L2(D)d×d, the boundary term is well-defined. 4. Estimation of the volume integrals. (i) The first termR D∇ ·σ(u)·Φdx: in elasticity it holds in the sense of distributions that ∇
https://arxiv.org/abs/2505.22451v1
·σ(u)∈H−1(D)with ∥∇ ·σ(u)∥H−1(D)≤C2[λ∥divu∥L2+µ∥∇u∥L2], by the definition of σ(u). Hence Z D∇ ·σ(u)·Φdx ≤ ∥∇ · σ(u)∥H−1(D)∥Φ∥H1(D) ≤C2[λ∥divu∥+µ∥∇u∥]·C1∥φ∥H1/2. (ii) The second termR Dσ(u) :∇Φdx: since σ(u)∈L2(D),∇Φ∈L2(D), Z Dσ(u) :∇Φdx ≤ ∥σ(u)∥L2(D)∥∇Φ∥L2(D). But ∥σ(u)∥L2 ≤λ∥divu∥L2+ 2µ∥D(u)∥L2 ≤λ∥divu∥+µ∥∇u∥, and∥∇Φ∥ ≤ ∥ Φ∥H1≤C1∥φ∥H1/2. Hence Z Dσ(u) :∇Φ ≤(λ∥divu∥+µ∥∇u∥)·C1∥φ∥H1/2. 5. Conclusion. Combining the two estimates, |L(φ)| ≤C′[λ∥divu∥+µ∥∇u∥]· ∥φ∥H1/2(∂D), so by duality ∥σ(u)N∥H−1/2(∂D)= sup φ̸=0|L(φ)| ∥φ∥H1/2 ≤C[λ∥divu∥L2(D)+µ∥∇u∥L2(D)], as claimed. [Correct] This proof process uses the basic idea of energy estimation, and the derivation process and the application of the formula are accurate. comment : None type: lemma Lemma 30. (Uniform H1–Stability for the Coupled Stokes–Elastic Limit Problem, corrected) Let Ω⊂Rd(d≥2) be a bounded connected Lipschitz domain, and let Dε⊂Ωbe the ε–periodic union of small inclusions as in Assumptions 1–2. Assume the prescribed boundary traction g∈H−1 2(∂Ω;Rd) satisfiesZ ∂Ωg·r dS= 0 for every rigid motion r. LetΩext= Ω\DεandDεbe as above. Define the trial space W:= u∈H1(Ω;Rd) :Z ∂Ωu·r dS= 0for all rigid motions r , 54 and the pressure space Q:=L2 0(Dε) = q∈L2(Dε) :Z Dεq dx= 0 . We seek a pair (u, p)∈W×Qsatisfying the weak transmission-Stokes–elastic system: for all (v, q)∈W×Q, aext(u, v) +aint(u, v) +b(v, p) =L(v), b(u, q) = 0 , where aext(u, v) =Z Ωext[2µD(u) :D(v) +λ(divu)(divv)]dx, aint(u, v) =Z Dε2˜µD(u) :D(v)dx, b(v, p) =−Z Dεp(divv)dx, L(v) =Z ∂Ωg·v dS. Then there exists C >0, independent of ε, such that the unique solution (u, p)satisfies ∥u∥H1(Ω)+∥p∥L2(Dε)≤C∥g∥H−1 2(∂Ω). correctness : True Proof. We rewrite the coupled transmission problem as a single saddle-point system in W×Qand verify the Babuška–Brezzi conditions with constants independent of ε. Step 1. Weak formulation on W×Q. 
Since uext and uint must agree on ∂Dε, we assemble them into a single global field u ∈ H1(Ω) which on Ωext carries the elasticity operator and on Dε carries the incompressible Stokes operator. The space W enforces both the H1-regularity across ∂Dε (by matching traces) and the rigid-motion orthogonality on ∂Ω. The pressure p lives only in Dε and has zero mean. The variational equations, obtained by integration by parts in Ωext and in each inclusion, read: (aext + aint)(u, v) + b(v, p) = L(v) for all v ∈ W, b(u, q) = 0 for all q ∈ Q, and incorporate the continuity of tractions and of displacements automatically. [Correct] Here, the agent actively constructs the specific proof design. Step 2. Continuity of the bilinear forms. By Cauchy–Schwarz and standard trace estimates, there is M > 0 independent of ε such that |aext(u, v)| + |aint(u, v)| + |b(v, p)| ≤ M∥u∥H1(Ω)∥v∥H1(Ω) + M∥p∥L2(Dε)∥v∥H1(Ω). Step 3. Coercivity of A := aext + aint on Ker B. The kernel of B is K = {v ∈ W : div v = 0 in Dε}. For any v ∈ K we have A(v, v) = ∫Ωext [2µ|D(v)|² + λ|div v|²] dx + ∫Dε 2˜µ|D(v)|² dx. Define µmin = min{µ, ˜µ} > 0. Since v ∈ H1(Ω) and Ω is Lipschitz, the global Korn inequality on Ω gives ∥v∥²H1(Ω) ≤ CK∥D(v)∥²L2(Ω) with CK independent of ε. Hence A(v, v) ≥ 2µmin∥D(v)∥²L2(Ω) ≥ (2µmin/CK)∥v∥²H1(Ω). Thus A is coercive on K with constant α = 2µmin/CK, independent of ε. [Correct] The derivation process presented herein is correct. Step 4. Inf–sup stability of B on W×Q. We must show inf_{q∈Q} sup_{v∈W} b(v, q)/(∥v∥H1(Ω)∥q∥L2(Dε)) ≥ γ > 0 with γ independent of ε. But b(v, q) = −∫Dε q div v, and div acts only in Dε. Restricting to test functions v supported in Dε we recover the classical Stokes inf–sup condition on the perforated region Dε. Since Dε is a union of ε-scaled, well-separated Lipschitz inclusions,
the local inf–sup constant on each inclusion is scale-invariant and the global constant γ (the minimum over finitely many identical blocks) is independent of ε. Step 5. Conclusion by Babuška–Brezzi. By Steps 2–4 the pair of forms (A, B) on W×Q satisfies the continuity, coercivity on Ker B, and the inf–sup condition with constants M, α, γ independent of ε. Hence there is a unique solution (u, p) ∈ W×Q and the a priori bound ∥u∥H1(Ω) + ∥p∥L2(Dε) ≤ C sup_{(v,q)≠0} (L(v) + 0)/(∥v∥H1(Ω) + ∥q∥L2(Dε)) = C sup_{v∈W} (∫∂Ω g·v dS)/∥v∥H1(Ω) = C∥g∥H−1/2(∂Ω) holds with C depending only on d, λ, µ, ˜µ, the Lipschitz character of Ω, and the reference inclusion ω, but not on ε. This completes the proof. [Vague] Here, more detailed explanation is required for the derivation and the application of the theorems. comment : None type: lemma Lemma 31. (Estimate of the Traction–Jump Remainder in the High–Contrast Limit) Let uε solve the full compressible–elastic system in Dε with Lamé parameters (˜λ, ˜µ) satisfying ˜µ > 0, ˜λ ≫ 1, and let (ulim, plim) be the coupled Stokes–elastic limit (Stokes in Dε, elasticity in Ωε = Ω\Dε), normalized so that (−plim I + 2˜µ D(ulim))N = (λ(div ulim)I + 2µ D(ulim))N on ∂Dε. Define the traction–jump Jε := (σ(˜λ,˜µ)(uε)N)− − (σ(λ,µ)(ulim)N)+ and split off the leading pressure term Jε = plim N + Rε. Then there exists C independent of ε and ˜λ such that ∥Rε∥H−1/2(∂Dε) ≤ C∥plim∥L2(Dε). In particular, as ˜λ → ∞, Jε = plim N + O_{H−1/2}(1)∥plim∥L2. correctness : True Proof. 1. Set w := uε − ulim on Dε. On ∂Dε the interior compressible-elastic stress is σ(˜λ,˜µ)(uε)N = [˜λ(div uε)I + 2˜µ D(uε)]N, while by definition of the limit solution σ(λ,µ)(ulim)N = (−plim I + 2˜µ D(ulim))N. Hence Jε = σ(˜λ,˜µ)(uε)N − σ(λ,µ)(ulim)N = σ(˜λ,˜µ)(w)N + plim N =: Rε + plim N. 2. By the Neumann-trace continuity on the Lipschitz domain Dε (Lemma 8), ∥σ(˜λ,˜µ)(w)N∥H−1/2(∂Dε) ≤ C1[˜λ∥div w∥L2(Dε) + ˜µ∥∇w∥L2(Dε)]. 3. Inside Dε the difference w = uε − ulim satisfies the rate-of-convergence estimates (Lemma 7): ∥div w∥L2(Dε) = ∥div uε∥L2(Dε) ≤ C2˜λ−1∥plim∥L2(Dε), ∥∇w∥L2(Dε) ≤ C3˜λ−1/2∥plim∥L2(Dε). 4. Plugging these into the trace estimate, ∥Rε∥H−1/2(∂Dε) ≤ C1[˜λ·(C2˜λ−1∥plim∥) + ˜µ·(C3˜λ−1/2∥plim∥)] = C1[C2 + ˜µC3˜λ−1/2]∥plim∥L2(Dε) ≤ C∥plim∥L2(Dε), where in the last line we use ˜λ ≥ 1 to absorb ˜µC3˜λ−1/2 into a constant C independent of ˜λ, ε. 5. This proves ∥Rε∥H−1/2(∂Dε) ≤ C∥plim∥L2(Dε), and hence Jε = plim N + O_{H−1/2}(1)∥plim∥L2(Dε), completing the proof. [Correct] The agent correctly employs mathematical inequality techniques to complete the derivation and reaches a reasonable conclusion. comment : None type: lemma Lemma 32. (Estimate of the Traction–Jump in the High-Contrast Limit) Let uϵ be the solution of the full Lamé problem in Dϵ with parameters (˜λ, ˜µ) and let (ulim, plim) be the incompressible-limit (Stokes–elastic) pair, normalized so that (−plim I + 2˜µ D(ulim))N = (λ(div ulim)I + 2µ D(ulim))N on ∂Dϵ. Define the traction-jump Jϵ := (σ(˜λ,˜µ)(uϵ)N)− − (σ(λ,µ)(ulim)N)+ = plim N + Rϵ. Then there is C > 0, independent of ϵ and ˜λ, such that ∥Rϵ∥H−1/2(∂Dϵ) ≤ C∥plim∥L2(Dϵ), and in particular ∥Jϵ∥H−1/2(∂Dϵ) ≤ ∥plim N∥H−1/2(∂Dϵ) + O(1)∥plim∥L2 ≤ C∥plim∥L2. correctness : True Proof. 1. By Lemma 6 (exact energy identity) and Lemma 7 (rate inside the inclusion) one shows ∥div uϵ∥L2(Dϵ) = O(˜λ−1) and ∥uϵ − ulim∥H1(Dϵ) = O(˜λ−1/2). 2. On each connected component of ∂Dϵ, apply the Neumann-trace continuity (Lemma 8): ∥σ(˜λ,˜µ)(uϵ)N − (−plim I + 2˜µ D(ulim))N∥H−1/2 ≤ C[˜λ∥div uϵ∥L2 + ˜µ∥∇(uϵ − ulim)∥L2]. 3. Inserting the O(˜λ−1) and O(˜λ−1/2) bounds yields ∥Rϵ∥H−1/2(∂Dϵ) ≤ C∥plim∥L2. [Vague] The process here lacks detail in the theorem derivation. comment : None type: lemma Lemma 33. (Energy Estimate Relating the Solution-Difference to the Traction-Jump) Let w := uϵ − ulim on Ωϵ. Then w satisfies the homogeneous Lamé system in Ωϵ with zero Neumann data on ∂Ω and traction-jump Jϵ on ∂Dϵ. There exists C (independent of ϵ, ˜λ) so that ∥w∥H1(Ωϵ) ≤ C∥Jϵ∥H−1/2(∂Dϵ).
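The mechanism behind this energy estimate — coercivity of the elastic form converts the boundary pairing with the traction-jump into an a priori bound — can be sketched on a finite-dimensional analogue. In this minimal numerical sketch, the SPD matrix A stands in for the coercive form a(·,·), the vector J for the traction-jump functional, and alpha for the Korn coercivity constant; all are illustrative stand-ins, not objects from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# SPD "stiffness" matrix standing in for the coercive Lamé form a(., .).
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
alpha = np.linalg.eigvalsh(A).min()   # coercivity: a(w, w) >= alpha |w|^2

# The traction-jump acts as a forcing functional: a(w, v) = <J, v> for all v.
J = rng.standard_normal(n)
w = np.linalg.solve(A, J)

# Energy identity a(w, w) = <J, w>, then Cauchy-Schwarz and coercivity:
# alpha |w|^2 <= a(w, w) = <J, w> <= |J| |w|  =>  |w| <= alpha^{-1} |J|.
energy = w @ A @ w
assert abs(energy - J @ w) <= 1e-8 * abs(energy)
assert np.linalg.norm(w) <= np.linalg.norm(J) / alpha + 1e-12
```

The commented steps mirror the variational identity, the duality pairing bound, and the coercivity step used in this part of the argument.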
correctness : True Proof. 1. By construction w solves Lλ,µ w = 0 in Ωϵ, ∂ν w|∂Ω = 0, ∂ν w|∂Dϵ = Jϵ. 2. The
variational formulation plus the uniform ellipticity (Lemma 4) and Korn’s inequality (Lemma 5) give a(w, w) =Z ∂DϵJϵ·w≤ ∥Jϵ∥H−1/2∥w∥H1/2(∂Dϵ)≤C∥Jϵ∥H−1/2∥w∥H1(Ωϵ). 3. Coercivity a(w, w)≳∥w∥2 H1then yields the stated bound. comment : None type: lemma Lemma 34. LetD⊂Rd(d≥2) be a bounded Lipschitz domain with outward unit normal N, and let the Lamé parameters satisfy ˜µ >0and˜λ+ 2˜µ/d > 0. For any traction t∈H−1/2(∂D;Rd)whose H−1/2–action annihilates all rigid motions, there is a unique (modulo rigid motions) displacement u∈H1(D;Rd)solving the Neumann boundary-value problem ˜µ∆u+ (˜λ+ ˜µ)∇(divu) = 0 inD, σ(u)N=ton∂D, where σ(u) =˜λ(divu)I+2˜µD(u)andD(u) = ∇u+(∇u)⊤ /2. Moreover there are constants C1, C2>0, depending only on Dandd(but not on ˜λ,˜µ,t), such that (i)∥u∥H1(D)≤C1˜µ−1∥t∥H−1/2(∂D), (ii)∥divu∥L2(D)≤C2(˜λ+ 2˜µ)−1/2∥t∥H−1/2(∂D). correctness : True Proof. Proof. We divide the argument into two steps. Step 1. (Existence, uniqueness, and the H1-estimate.) Set V= v∈H1(D;Rd) :Z ∂Dv·r dS= 0for every rigid motion r . Define the bilinear form a:V×V→Rand the linear functional L:V→Rby a(u, v) =Z Dh 2˜µD(u) :D(v) +˜λ(divu)(div v)i dx, L(v) =⟨t, v⟩H−1/2,H1/2. By Korn’s inequality and a Poincaré-type argument on V, there is α >0such that a(v, v)≥2˜µ∥D(v)∥2 L2(D)≥α∥v∥2 H1(D). Moreover, the trace theorem gives |L(v)| ≤ ∥t∥H−1/2(∂D)∥v∥H1/2(∂D)≤Ctr∥t∥H−1/2(∂D)∥v∥H1(D). By Lax–Milgram, there is a unique u∈Vwitha(u, v) =L(v)for all v∈V, and ∥u∥H1(D)≤α−1∥L∥V′≤(α−1Ctr)˜µ−1∥t∥H−1/2(∂D). This establishes existence, uniqueness (modulo rigid motions), and the H1-bound with C1=α−1Ctr. Step 2. (Sharp bound for divuin˜λ.) Taking v=uin the weak form a(u, v) =L(v)gives the energy identity 2˜µ∥D(u)∥2 L2(D)+˜λ∥divu∥2 L2(D)=⟨t, u⟩. 58 Hence in particular ˜λ∥divu∥2 L2(D)≤ |⟨t, u⟩| ≤ ∥ t∥H−1/2∥u∥H1/2(∂D)≤Ctr∥t∥H−1/2∥u∥H1(D). From the H1-estimate of Step 1, we have ∥u∥H1(D)≤C1˜µ−1∥t∥H−1/2. Therefore, ˜λ∥divu∥2 L2(D)≤CtrC1˜µ−1∥t∥2 H−1/2. 
Since ˜λ+ 2˜µ≥˜λ, we deduce ∥divu∥L2(D)≤[CtrC1]1/2˜µ−1/2(˜λ+ 2˜µ)−1/2∥t∥H−1/2(∂D). Setting C2= [CtrC1]1/2yields the claimed bound. [Correct] The proof logic here is clear, and the result is correct. [Vague] The derivation process here is not detailed enough. comment : None type: lemma Lemma 35. (Neumann-trace regularity for the Poisson problem: H−1(D)→H−1/2(∂D)) Let D⊂Rdbe a bounded Lipschitz domain. For every f∈H−1(D)letu∈H1 0(D)be the unique weak solution of −∆u=finD, u = 0 on∂D. Then the normal derivative (Neumann trace) ∂nubelongs to H−1/2(∂D)and there is a constant C, depending only on dand the Lipschitz character of D, such that ∥∂nu∥H−1/2(∂D)≤C∥f∥H−1(D). correctness : True Proof. 1. Existence and energy estimate. Define the bilinear form a(v, w) :=Z D∇v· ∇w dx, which is continuous and coercive on H1 0(D). By the Lax–Milgram theorem there is a unique u∈H1 0(D)with a(u, w) =⟨f, w⟩H−1,H1 0∀w∈H1 0(D), and moreover ∥u∥H1(D)≤C1∥f∥H−1(D). Here and below C1depends only on dand the Lipschitz character of D. 2. Construction of the Neumann trace as a bounded functional on H1/2(∂D). Let T:H1(D)−→H1/2(∂D) be any bounded right-inverse of the trace map (such an extension operator exists on Lipschitz domains). Given an arbitrary boundary datum g∈H1/2(∂D), pick an extension G∈H1(D)with G|∂D=gand∥G∥H1(D)≤C2∥g∥H1/2(∂D). We now define a linear functional τonH1/2(∂D)by ⟨τ, g⟩H−1/2,H1/2:=Z D∇u· ∇G dx− ⟨f, G⟩H−1,H1 0. Because u∈H1 0(D)andf∈H−1(D), the right-hand side depends only on the trace g=G|∂D, not on the particular extension. Indeed, if G1andG2are two extensions with the same trace then G1−G2∈H1 0(D), and Z D∇u· ∇(G1−G2)dx=⟨f, G 1−G2⟩H−1,H1 0, 59 so the difference vanishes. Hence τis well-defined. 3. Boundedness
of the Neumann trace. Using Cauchy–Schwarz and the two energy bounds above, |⟨τ, g⟩| ≤ ∥∇ u∥L2(D)∥∇G∥L2(D)+∥f∥H−1∥G∥H1(D)≤(C1C2+C2)∥f∥H−1(D)∥g∥H1/2(∂D). Therefore τ∈H−1/2(∂D)and ∥τ∥H−1/2(∂D)≤C∥f∥H−1(D), withCdepending only on the domain constants C1, C2. 4. Identification of τwith the classical normal derivative. By standard elliptic-regularity arguments in Lipschitz domains the functional τjust constructed coincides with the distributional normal derivative ∂nuof the weak solution u. In particular one recovers the Green identity Z D∇u· ∇G dx− ⟨f, G⟩=⟨∂nu, G|∂D⟩H−1/2,H1/2. Hence the bound on ∥τ∥is precisely the desired estimate for ∥∂nu∥. This completes the proof that the map f∈H−1(D)7−→∂nu∈H−1/2(∂D) is well-defined and continuous. □ [Correct] This constitutes a verification of a previously established conclusion. comment : None type: lemma Lemma 36. LetD⊂Rd(d≥2) be a bounded Lipschitz domain with outward unit normal N. Then for every scalar p∈L2(D)the boundary-distribution p Ndefines a continuous linear functional on H1/2(∂D;Rd), i.e.p N∈H−1/2(∂D;Rd), and there is a constant C(depending only on dand the Lipschitz character of D) such that ∥p N∥H−1/2(∂D)≤C∥p∥L2(D). correctness : True Proof. The key is the well-known continuity of the “normal-trace” map for L2–tensor fields whose divergence lies in H−1. We spell out the argument for the special choice of stress-field σ:=p I, from which p Nwill emerge as the boundary traction. 1. Definition of the traction functional. Fix any φ∈H1 2(∂D;Rd)and choose an H1-extension v:=Eφ∈H1(D;Rd)with v|∂D=φ,∥v∥H1(D)≤Cext∥φ∥H1 2(∂D). Define a linear functional TponH1 2(∂D;Rd)by the generalized Green formula Tp, φ :=Z Dσ:∇v dx+⟨divσ, v⟩H−1,H1 0withσ:=p I. Here Idenotes the d×didentity, so σ∈L2(D;Rd×d)and in the sense of distributions divσ=∇p∈H−1(D;Rd). 2. Identification with the boundary integral. 
By the divergence theorem for Lipschitz domains (valid in the H(div)–H1duality), one checks for smooth pandφthat Z Dσ:∇v+Z D(divσ)·v=Z ∂D(σ N)·v=Z ∂D(pI N)·φ=Z ∂Dp N·φ dS. By density this shows that Tpis nothing but the boundary-pairing φ7→R ∂Dp N·φ dS , i.e. Tp realizes the distribution “ p N” on∂D. 60 3. Continuity estimate. From the definition of Tpand Cauchy–Schwarz one obtains ⟨Tp, φ⟩ ≤ ∥σ∥L2(D)∥∇v∥L2(D)+∥divσ∥H−1(D)∥v∥H1(D)≤ ∥p∥L2(D)+∥∇p∥H−1(D) Cext∥φ∥H1 2(∂D). Since∇p∈H−1(D)with norm ≤ ∥p∥L2(D), we conclude ∥p N∥H−1 2(∂D)=∥Tp∥(H1 2(∂D))′≤C∥p∥L2(D), for some Cdepending only on Dandd. This completes the proof that p N∈H−1 2(∂D;Rd)with the desired norm bound. comment : None type: lemma Lemma 37. (Estimate of the residual traction–jump in the high-contrast limit, corrected) LetD⊂Rdbe a bounded Lipschitz domain and let ˜µ >0be fixed. For each ˜λ >0consider (1) the compressible-elastic displacement uεinDsolving ˜µ∆uε+ (˜λ+ ˜µ)∇(divuε) = 0 inD, (2) the incompressible-limit (Stokes) pair (ulim, plim)inDsolving ˜µ∆ulim+∇plim= 0,divulim= 0 inD, both with the same Dirichlet boundary data on ∂D. Define the residual traction–jump Rε:=σ(˜λ,˜µ)(uε)N−(−plimI+ 2 ˜µ D(ulim))N on∂D. Then there is a constant C=C(D, d, ˜µ)independent of ˜λ, such that for all ˜λ≫1 ∥Rε∥H−1/2(∂D)≤C∥plim∥L2(D). In particular Rεremains uniformly bounded (no decay in ˜λ) as˜λ→ ∞ . correctness : True Proof. Proof. We set w:=uε−ulim, φ := div uε. By the exact interior energy identity (Lemma 6) one has ˜µZ D|∇w|2+ (˜λ+ ˜µ)Z D|φ|2=Z Dplimφ. Hence by Cauchy–Schwarz, (˜λ+ ˜µ)∥φ∥2 L2(D)≤ ∥plim∥L2(D)∥φ∥L2(D) ⇒ ∥φ∥L2(D)≤(˜λ+ ˜µ)−1∥plim∥L2(D), and then from the same identity, ˜µ∥∇w∥2 L2(D)≤Z D|plimφ| ≤ ∥plim∥∥φ∥ ≤(˜λ+ ˜µ)−1∥plim∥2 ⇒ ∥∇ w∥L2(D)≤C˜λ−1/2∥plim∥L2(D), withCdepending only on ˜µ. Next we expand the
traction-jump. On ∂D, σ(˜λ,˜µ)(uε) =˜λφI+ 2˜µD(uε), and (−plimI+ 2˜µD(ulim)) =−plimI+ 2˜µD(ulim). 61 Hence Rε=σ(uε)N−(−plimI+ 2˜µD(ulim))N= (˜λφ+plim)N+ 2˜µD(w)N. By the Neumann-trace continuity on a Lipschitz domain (Lemma 8), ∥t∥H−1/2(∂D)≤C1h ˜λ∥divv∥L2(D)+ ˜µ∥∇v∥L2(D)i for any v. Applying this with v=uε−ulimfor the two pieces in Rε, we get ∥Rε∥H−1/2(∂D)≤C1h ˜λ∥φ∥L2(D)+∥plim∥L2(D)+ ˜µ∥∇w∥L2(D)i . Substitute the interior bounds: ˜λ∥φ∥ ≤˜λ·(˜λ+ ˜µ)−1∥plim∥ ≤ ∥ plim∥, ˜µ∥∇w∥ ≤C˜µ˜λ−1/2∥plim∥ ≤C∥plim∥(since ˜λ≫1), and the middle term is ∥plim∥. Hence ∥Rε∥H−1/2(∂D)≤C2[∥plim∥+∥plim∥+C∥plim∥] =C∥plim∥L2(D), withCindependent of ˜λ. This completes the proof. [Correct] Building upon the aforementioned conclusions, the derivation process is rigorously extended, culminating in the corresponding result. comment : None type: lemma Lemma 38. (Uniform Lipschitz–Dependence of the Neumann–to–Dirichlet Map on the First Lamé Parameter) Let D⊂Rdbe a bounded Lipschitz domain and fix µ >0andλ0>0. For each λ≥λ0 letuλ∈H1(D;Rd)(unique modulo rigid motions) solve µ∆uλ+ (λ+µ)∇(divuλ) = 0 inD, σλ,µ(uλ)N=t on∂D, withσλ,µ(u) =λ(divu)I+ 2µ D(u)andt∈H−1/2(∂D;Rd)annihilating all rigid motions. Define the Neumann-to-Dirichlet map Λ(λ) :t7→uλ ∂D∈H1/2(∂D;Rd). Then there is a constant C, depending only on D,d,µandλ0, such that for all λ1, λ2≥λ0and all admissible t, Λ(λ1)t−Λ(λ2)t H1/2(∂D)≤C|λ1−λ2| ∥t∥H−1/2(∂D). correctness : True Proof. We set ui:=uλifori= 1,2, and put w=u1−u2. Our goal is to estimate w|∂Din H1/2(∂D). Step 1. Uniform well-posedness and a priori bounds. For each λ≥λ0the elasticity bilinear form aλ(u, v) =Z D 2µ D(u) :D(v) +λ(divu)(div v) dx is coercive on the subspace of H1(D;Rd)orthogonal to rigid motions, with coercivity constant α= min {µ, λ0}. By Lax–Milgram and the Neumann-trace theory (Lemma 8) one obtains ∥uλ∥H1(D)≤C1∥t∥H−1/2(∂D),∥divuλ∥L2(D)≤C2(λ+µ)−1/2∥t∥H−1/2(∂D), and by the continuous trace embedding ∥uλ∥H1/2(∂D)≤C3∥uλ∥H1(D). 62 Step 2. 
Equation for the difference and boundary-forcing. Since each uisatisfies σλi,µ(ui)N=t, we compute on ∂D σλ1,µ(w)N= σλ1,µ(u1)−σλ1,µ(u2) N=t−σλ1,µ(u2)N=− (λ1−λ2)(div u2) N. Hence wsatisfies in D µ∆w+ (λ1+µ)∇divw=−(λ1−λ2)∇(divu2), with non-homogeneous traction f:=−(λ1−λ2) (div u2)N∈H−1/2(∂D). Step 3. Energy identity with boundary term. Test the PDE for wagainst witself and integrate by parts: aλ1(w, w) =−(λ1−λ2)Z D(divw) (div u2)dx | {z } I+ f , w ∂D. Here coercivity gives aλ1(w, w)≥α∥w∥2 H1(D), and |I| ≤ |λ1−λ2|∥divw∥L2(D)∥divu2∥L2(D)≤ |λ1−λ2| ∥w∥H1(D)∥divu2∥L2(D). By the Neumann-trace continuity (Lemma 15) and the trace embedding, ∥f∥H−1/2(∂D)=|λ1−λ2| ∥divu2∥L2(D),|⟨f, w⟩| ≤ ∥ f∥H−1/2∥w∥H1/2(∂D)≤C∥f∥H−1/2∥w∥H1(D). Combining, α∥w∥2 H1(D)≤C|λ1−λ2| ∥divu2∥L2(D)∥w∥H1(D). Thus ∥w∥H1(D)≤C α|λ1−λ2| ∥divu2∥L2(D)≤C′|λ1−λ2| ∥t∥H−1/2(∂D), where we used the bound on ∥divu2∥from Step 1. Step 4. Conclusion by trace. Finally ∥w∥H1/2(∂D)≤Ctr∥w∥H1(D)≤C|λ1−λ2| ∥t∥H−1/2(∂D), which is exactly the asserted Lipschitz estimate for Λ(λ). [Correct] Here, the agent correctly deprives the proof process, using accurate mathematical techniques and proof thoughts. comment : None type: lemma Lemma 39. (Uniform Pressure–Estimate for the Stokes–Elastic Homogenization Limit) Retain all the notation and hypotheses of Lemma 9 (the coupled Stokes–elastic limit in Ω\DεandDεwith boundary traction gon∂Ω). Write its unique solution as (ulim, plim). Then there is a constant C, independent of εand of the specific pattern, such that ∥plim∥L2(Dε)≤C∥g∥H−1/2(∂Ω). correctness : True Proof. Proof. In Lemma 9 one proves the uniform well-posedness and the a priori bound ∥ulim∥H1(Ω)+∥plim∥L2(De)≤C∥g∥H−1 2(∂Ω). Since the left-hand side already controls ∥plim∥, the desired estimate follows directly. Concretely: 1. Lemma 9 shows coercivity of the form aext(u, u) +aint(u, u)≥c∥u∥2 H1(Ω), 63 and the inf–sup condition sup v∈Wb(v, q) ∥v∥H1(Ω)≥c′∥q∥L2(De). 2. By the Babuška–Brezzi theory (Lax–Milgram
+ inf–sup) the unique solution (ulim, plim)satisfies ∥ulim∥H1(Ω)+∥plim∥L2(De)≤˜Csup v,qL(v) + 0 coercivity +inf–sup=C∥g∥H−1 2(∂Ω). Thus the pressure in the inclusions is uniformly bounded by the boundary data, with no dependence onε. [Vague] Here, more detailed explanation is required for the application of boundary properties and theorems. comment : None type: lemma Lemma 40. (Global Exterior-Energy Error-Estimate for the High-Contrast Lamé–Transmission Problem, corrected) Let Ω⊂Rd(d≥2) be a bounded connected Lipschitz domain and let Dε⊂Ω be the ε–periodic union of small Lipschitz inclusions as in Assumptions 1–2. Let (λ, µ)be the background Lamé parameters on Ωε:= Ω\Dεand(eλ,eµ)the high-contrast parameters in Dε, witheµ > 0,eλ+ 2eµ/d > 0andeλ≫1. Let uεsolve the full compressible-elastic transmission problem in Ωwith prescribed boundary traction g∈H−1/2(∂Ω)(orthogonal to rigid motions), and let(ulim, plim)solve the incompressible-limit (Stokes–elastic) transmission problem in Ωε∪Dεwith the same g. Define the exterior difference wext:=uε−ulim onΩε. Then there is Cindependent of εandeλsuch that ∥wext∥H1(Ωε)≤C∥uε−ulim∥H1/2(∂Dε). Consequently, combining with the interior high-contrast estimate on each inclusion one obtains the global rate ∥uε−ulim∥H1(Ωε)≤C′eλ−1/2∥g∥H−1/2(∂Ω). correctness : True Proof. Proof. We follow the same two-step outline as before, but we now track the ˜λ–dependence correctly. 1. Reduction to an exterior Dirichlet problem. Set Ωe:= Ω\Deand define w:=ue−ulimonΩe. By construction both ueandulimsatisfy the (background) Lamé equations in Ωe, the same Neumann traction on ∂Ω, and thus wsatisfies Divσ(λ,µ)(w) = 0 inΩe,σ(w)N= 0 on∂Ω, with non-zero Dirichlet data on the inner boundary ∂De,φ:=w|∂De=ue|∂De−ulim|∂De. 2. Energy-to-boundary estimate in the exterior. By Lemma 12 (energy bound for the homogeneous exterior problem) and the uniform Korn coercivity (Lemma 4), the solution w∈H1(Ωe)of the above mixed problem satisfies ∥w∥H1(Ωe)≤C1∥σ(w)N∥H−1/2(∂De). 
Moreover the Dirichlet-to-Neumann map on ∂Deis continuous H1/2→H−1/2with norm indepen- dent of ε(by standard boundary-integral or PDE arguments), so ∥σ(w)N∥H−1/2(∂De)≤C2∥φ∥H1/2(∂De). Hence ∥w∥H1(Ωe)≤C∥φ∥H1/2(∂De). 3. Interior rate on each inclusion. Fix one inclusion D=Dε,i. Let wint:=ue−ulimonD. By Lemma 7 (interior high-contrast estimate) we have ∥wint∥H1(D)≤C˜λ−1/2∥plim∥L2(D). 64 Tracing to ∂Dand summing over igives max i∥ue−ulim∥H1/2(∂Dε,i)≤C˜λ−1/2∥plim∥L2(De)≤C′˜λ−1/2∥g∥H−1/2(∂Ω), where the last step uses the uniform L2–estimate on plim(Lemma 18). 4. Conclusion. Combining the exterior bound of step 2 with the boundary-trace estimate of step 3 yields ∥ue−ulim∥H1(Ωe)≤Cmax i∥ue−ulim∥H1/2(∂Dε,i)≤C′˜λ−1/2∥g∥H−1/2(∂Ω), as claimed. [Correct] This process is rightly reasonable. [Vague] Here we need more detailed deprivation process. comment : None type: lemma Lemma 41. (Lipschitz-Dependence of the Elasticity Resolvent on the First Lamé Parameter) Let D⊂Rdbe a bounded Lipschitz domain, and fix µ >0. For each λ≥0consider the bilinear form aλ(u, v) :=Z D[2µD(u) :D(v) +λ(divu)(div v)]dx, on the space V:= u∈H1(D;Rd) :Z ∂Du·r= 0for every rigid motion r . By Korn’s inequality and λ+ 2µ/d > 0,aλis coercive and defines an isomorphism Aλ:V→V∗,⟨Aλu, v⟩=aλ(u, v). Then for any two parameters λ1, λ2≥0one has the Lipschitz estimate ∥A−1 λ1−A−1 λ2∥L(V∗,V)≤C|λ1−λ2|, where Cdepends only on µ,d, and the Lipschitz character of D(but not on λ1orλ2). correctness : True Proof. Proof. 1.Setup. Lett∈V∗. Write u1=A−1 λ1t,u2=A−1 λ2t. Then by definition aλ1(u1, v) =⟨t, v⟩=aλ2(u2, v)for all v∈V. 2. Galerkin-difference equation. Subtracting, for all v∈V,aλ1(u1−u2, v) = [aλ2(u2, v)−aλ1(u2, v)] = ( λ2−λ1)R D(divu2)(div v)dx. 3.Coercivity and choice of test-function. Since aλ1is coercive on V, there is α >0(independent ofλ1≥0) so that
aλ1(w, w)≥α∥w∥2 H1(D)for all w∈V. Take v=w:=u1−u2. Then α∥u1−u2∥2 H1(D)≤aλ1(u1−u2, u1−u2) = ( λ2−λ1)R D(divu2)(div( u1−u2))dx. 4.Cauchy-Schwarz. R D(divu2)(div( u1−u2)) ≤ ∥divu2∥L2(D)∥div(u1−u2)∥L2(D) ≤C∥u2∥H1(D)∥u1−u2∥H1(D), where we used the continuous embedding H1→H(div) and Korn’s inequality to bound ∥div·∥by∥ · ∥H1. 5.Combine. Thus α∥u1−u2∥2 H1≤ |λ2−λ1| ·C∥u2∥H1∥u1−u2∥H1. Divide both sides by ∥u1−u2∥H1(unless trivial), to get ∥u1−u2∥H1(D)≤C′|λ2−λ1|∥u2∥H1(D). 6.Bound ∥u2∥in terms of t.By coercivity again, ∥u2∥H1≤C′′∥t∥V∗. Therefore ∥u1−u2∥H1(D) ≤C|λ2−λ1|∥t∥V∗. 7.Operator-norm conclusion. Since t∈V∗was arbitrary, ∥A−1 λ1−A−1 λ2∥L(V∗,V)= sup∥t∥V∗=1∥u1−u2∥H1≤C|λ1−λ2|, proving the conjecture. 65 [Correct] The reasoning framework demonstrates a valid and lucid proof strategy, although the intermediate derivations require additional elaboration to strengthen the rigor of the argument. comment : None type: lemma Lemma 42. (Sharp-rate Convergence of the Elasticity Resolvent to the Stokes Resolvent as λ→ ∞ ) LetD⊂Rd(d≥2) be a bounded Lipschitz domain. Fix µ >0andλ0>0. For each λ≥λ0, define the bilinear form on V:={u∈H1(D;Rd) :Z ∂Du·r= 0for all rigid motions r} by aλ(u, v) =Z D[2µ D(u) :D(v) +λ(divu)(div v)]dx, and denote by Aλ:V→V∗its Riesz isomorphism. On the divergence-free closed subspace V0={u∈V: divu= 0}, consider the Stokes form b(u, v) =Z D2µ D(u) :D(v)dx and its Riesz isomorphism B:V0→V∗ 0. Extend B−1by zero on (V∗ 0)⊥⊂V∗. Then there exists C, depending only on D,d,µ,λ0, such that for all λ≥λ0, ∥A−1 λ−B−1∥L(V∗,V)≤Cλ−1/2. Equivalently, for any traction t∈H−1/2(∂D;Rd)annihilating rigid motions, if uλ∈Vsolves µ∆uλ+ (λ+µ)∇(divuλ) = 0 inD, σ (uλ)N=ton∂D, and(u∞, p∞)∈V0×L2 0(D)solves the incompressible-Stokes Neumann problem with the same t, then ∥uλ−u∞∥H1(D)≤Cλ−1/2∥t∥H−1/2(∂D), and hence also ∥uλ−u∞∥H1/2(∂D)≤Cλ−1/2∥t∥H−1/2(∂D). correctness : True Proof. Proof. We split the argument into steps. 1. Existence and uniform stability. 
By standard Neumann–Stokes theory (e.g. Girault–Raviart) the incompressible problem µ∆u∞+∇p∞= 0,divu∞= 0inD, σ S(u∞, p∞)N=ton∂D, admits a unique (u∞, p∞)∈V0×L2 0(D)and satisfies the a priori bound ∥u∞∥H1(D)+∥p∞∥L2(D)≤C1∥t∥H−1/2(∂D), withC1independent of λ. Similarly the compressible-elastic Neumann problem for λ≥λ0is well-posed in Vand enjoys uniform coercivity aλ(u, u)≥c2∥u∥2 H1by Korn’s inequality. 2. Energy identity for the difference. Set w=uλ−u∞andφ= div uλ. Subtracting the two PDEs and testing against wfor the vector-equation and against φfor the divergence-part yields the exact identity (Lemma 6) µZ D|∇w|2dx+ (λ+µ)Z D|φ|2dx=Z Dp∞φ dx. 3. Bounding the divergence term. By Cauchy–Schwarz, (λ+µ)∥φ∥2 L2≤ ∥p∞∥L2∥φ∥L2 ⇒ ∥φ∥L2(D)≤(λ+µ)−1∥p∞∥L2(D)≤C2λ−1∥t∥H−1/2(∂D). 66 4. Bounding the H1-difference. From the energy identity again, µ∥∇w∥2 L2≤ ∥p∞∥L2∥φ∥L2≤C3λ−1∥t∥2 H−1/2. By Korn’s inequality (constant independent of λ), ∥w∥H1(D)≤C4∥∇w∥L2≤Cλ−1/2∥t∥H−1/2(∂D). 5. Operator-norm bound. Since t7→uλ−u∞is linear and the above estimate is uniform for all admissible tandλ≥λ0, we conclude ∥A−1 λ−B−1∥L(V∗,V)≤Cλ−1/2. 6. Trace estimate. Finally the continuous trace H1(D)→H1/2(∂D)gives ∥uλ−u∞∥H1/2(∂D)≤CT∥uλ−u∞∥H1(D)≤C′λ−1/2∥t∥H−1/2(∂D). This completes the proof of the sharp λ−1/2convergence rate. comment : None type: lemma Lemma 43. (Asymptotic Expansion of the Compressible-Elastic Neumann Solution in Powers of 1/˜λ) Let D⊂Rdbe a bounded C1,1domain, fix ˜µ >0, and let ˜λ≫1. For each ˜λconsider the unique (modulo rigid motions) displacement u˜λ∈H1(D;Rd)solving ˜µ∆u˜λ+ (˜λ+ ˜µ)∇(divu˜λ) = 0 inD, σ(˜λ,˜µ)(u˜λ)N=ton∂D, where t∈H−1/2(∂D;Rd)annihilates all rigid motions and σ(˜λ,˜µ)(u) =˜λ(divu)I+ 2˜µD(u). Define the “elastic pressure” p˜λ:=˜λdivu˜λ. Let(u0, p0)∈H1(D;Rd)×L2 0(D)be the unique solution of the incompressible-Stokes Neumann problem ˜µ∆u0+∇p0= 0,divu0= 0 inD, (−p0I+ 2˜µD(u0))N=ton∂D. Then for every integer K≥0there exist coefficient fields vj∈H1(D;Rd), p j∈L2 0(D), j = 1, . . 
. , K, and
remainders RK+1(˜λ)∈H1(D;Rd),QK(˜λ)∈L2 0(D)such that u˜λ=u0+˜λ−1v1+···+˜λ−(K+1)vK+1+RK+1(˜λ), p˜λ=p0+˜λ−1p1+···+˜λ−KpK+QK(˜λ), and one has the uniform remainder estimate ∥RK+1(˜λ)∥H1(D)+∥QK(˜λ)∥L2(D)=O(˜λ−(K+1)) as˜λ→ ∞ . In particular the first-order corrector (v1, p1)is the unique solution of a∞(v1,w)−b(w, p1) = 0 , b(v1, q) = (p0, q)L2(D) for all (w, q), and one may write formally v1=−B−1∇p0, where Bis the Stokes operator on the divergence-free subspace of H1(D;Rd). correctness : True 67 Proof. Proof. We give a concise operator-theoretic proof based on a stabilized mixed formulation and a Neumann-series expansion. 1. Mixed-variational reformulation. Define the trial spaces V:={u∈H1(D;Rd) :Z ∂Du·r= 0∀rigid motions r}, Q:=L2 0(D) ={q∈L2(D) :Z Dq= 0}. Introduce the bilinear forms a∞(u, v) := 2˜ µZ DD(u) :D(v)dx, b(v, q) :=Z Dqdivv dx, s(p, q) :=Z Dp q dx, and the linear form L(v) :=⟨t, v⟩∂D. Then integration by parts shows that the pair (u˜λ, p˜λ)satisfies the saddle-point problem: a∞(u˜λ, v)−b(v, p˜λ) =L(v)∀v∈V, b(u˜λ, q)−(1/˜λ)s(p˜λ, q) = 0 ∀q∈Q. Indeed the second line is equivalent to divu˜λ=p˜λ/˜λ. When ˜λ→ ∞ it formally reduces to b(u0, q) = 0 (divu0= 0) and the Stokes-Neumann problem for (u0, p0). By the Ladyzhenskaya–Babuška–Brezzi theory the above mixed system is well-posed for all ˜λ >0 and the solution operator A(˜λ)−1:V∗×Q∗→V×Q is bounded uniformly in ˜λ(for˜λ≥λ0>0). 2. Block-operator splitting and Neumann-series. Write the block operator A(˜λ) =" A11A12 A21A22# with A11=a∞:V→V∗, A 12=−bT:Q→V∗, A21=b:V→Q∗, A 22=−(1/˜λ)s:Q→Q∗. Equivalently, A(˜λ) =A∞−(1/˜λ)N, where A∞=" a∞−bT b 0# is the standard Stokes operator (an isomorphism V×Q→V∗×Q∗), and N=" 0 0 0s# is bounded Q→Q∗(hence bounded V×Q→V∗×Q∗). By continuity of A−1 ∞and smallness of (1/˜λ)Nin operator-norm for ˜λlarge, the Neumann-series converges: A(˜λ)−1= [I−(1/˜λ)A−1 ∞N]−1A−1 ∞ =∞X j=0(1/˜λ)j(A−1 ∞N)jA−1 ∞. 
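The convergence and truncation behaviour of this Neumann series can be checked on a small random model. In the sketch below, A_inf is an invertible stand-in for the Stokes block A∞ and N a bounded stand-in for the stabilization block (both are illustrative assumptions, not data from the text); the truncation error of the series at order K decays like λ^−(K+1):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30

# Invertible stand-in for the Stokes block and a bounded perturbation N.
M = rng.standard_normal((n, n))
A_inf = M @ M.T + np.eye(n)           # SPD, smallest eigenvalue >= 1
N = rng.standard_normal((n, n))
N /= np.linalg.norm(N, 2)             # spectral norm 1, so ||A_inf^{-1} N|| <= 1

def truncated_inverse(lam, K):
    """Partial Neumann series: sum_{j=0}^{K} lam^{-j} (A_inf^{-1} N)^j A_inf^{-1}."""
    Ainv = np.linalg.inv(A_inf)
    B = Ainv @ N
    term, total = Ainv.copy(), np.zeros_like(Ainv)
    for j in range(K + 1):
        total += term / lam**j
        term = B @ term
    return total

K = 2
errs = [np.linalg.norm(np.linalg.inv(A_inf - N / lam) - truncated_inverse(lam, K), 2)
        for lam in (1e1, 1e2, 1e3)]

# Truncation error ~ lam^{-(K+1)} = lam^{-3}: each tenfold increase in lam
# shrinks the error by roughly a factor 10^3.
assert errs[0] > errs[1] > errs[2]
assert errs[0] / errs[1] > 1e2 and errs[1] / errs[2] > 1e2
```

The same geometric-series bound is what delivers the uniform remainder estimate in the operator norm.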
68 Truncation after j=K+ 1gives A(˜λ)−1=KX j=0(1/˜λ)j(A−1 ∞N)jA−1 ∞+O(˜λ−(K+1)) in operator-norm on V∗×Q∗→V×Q. Applied to the right-hand side (0, L), this yields the asserted expansions of (u˜λ, p˜λ)up to O(˜λ−(K+1))in the H1×L2norm. 3. Identification of coefficients. By equating powers of 1/˜λin (u˜λ, p˜λ) =KX j=0˜λ−j(vj, pj) + (RK+1, QK) one obtains recursively: (v0, p0) = (u0, p0)solves A∞(v0, p0) = (0 , L), and for j≥1, (vj, pj) =A−1 ∞(0, s(pj−1,·)) is the unique solution of a∞(vj, w)−b(w, pj) = 0 , b(vj, q) = (pj−1, q)L2(D) for all (w, q). In particular j= 1gives the mixed Stokes problem for (v1, p1), and in strong form −2˜µdivD(v1) +∇p1= 0,divv1=p0, with homogeneous Neumann data. Its divergence-free component may be written v1=−B−1∇p0. 4. Remainder estimate. The Neumann-series truncation at j=K+ 1yields the error operator EK+1(˜λ) :=A(˜λ)−1−KX j=0(1/˜λ)j(A−1 ∞N)jA−1 ∞ and one shows ∥EK+1(˜λ)∥L(V∗×Q∗, V×Q)=O(˜λ−(K+1)). Since (0, L)∈V∗×Q∗, applying EK+1(˜λ)to(0, L)gives the remainders (RK+1, QK)with ∥RK+1∥H1(D)+∥QK∥L2(D)=O(˜λ−(K+1)). This completes a correct, complete, and rigorous proof of the asymptotic expansion and the sharp remainder bound. comment : None type: lemma Lemma 44. (Boundary–Trace Rate–Improvement via First–Order Asymptotics—Revised Statement) LetD⊂Rd(d≥2) be a bounded C1,1domain and fix ˜µ >0. For each ˜λ≫1, letu˜λ∈H1(D;Rd) solve the compressible–elastic Neumann problem ˜µ∆u˜λ+ (˜λ+ ˜µ)∇(divu˜λ) = 0 inD, σ(˜λ,˜µ)(u˜λ)N=ton∂D, where t∈H−1 2(∂D;Rd)annihilates all rigid motions. Let (u0, p0)be the incompressible–Stokes Neumann solution in Dwith the same t, and let v1, v2be the first and second correctors from the asymptotic expansion of
Lemma 22. Then there is a constant C, depending only on D,d,˜µ, and the C1,1–character of ∂D, such that for all ˜λ≫1, ∥u˜λ−u0∥H1 2(∂D)≤C˜λ−1∥t∥H−1 2(∂D). In other words, the boundary–trace error between the compressible–elastic solution and its Stokes– limit vanishes at rate O(1/˜λ). 69 correctness : True Proof. Proof. We prove the O(˜λ−1)boundary-trace estimate in three steps: invoke a second-order interior expansion, bound the correctors, then apply the trace operator. Step 1. Second-order asymptotic expansion. By Lemma 22 (with K= 1) there exist v1, v2∈ H1(D;Rd), pressures p1∈L2 0(D), and a remainder R2(˜λ)∈H1(D;Rd)such that u˜λ=u0+˜λ−1v1+˜λ−2v2+R2(˜λ), with the uniform remainder bound ∥R2(˜λ)∥H1(D)≤CR˜λ−3 for all ˜λ≫1, where CRdepends only on D, d, ˜µ. Step 2. H1-bounds for the correctors. The correctors (vj, pj)solve Stokes-type problems on D withC1,1boundary. By coercivity of the Stokes bilinear form and standard H1–L2regularity on C1,1domains one shows there exists C1such that for j= 1,2 ∥vj∥H1(D)+∥pj∥L2(D)≤C1∥t∥H−1/2(∂D). Indeed: • For j= 1,v1satisfies a∞(v1, w)−b(w, p1) = 0 , b(v1, q) = (p0, q)L2(D), so∥v1∥H1≤C∥p0∥L2and∥p0∥L2≤C∥t∥H−1/2by Stokes–Neumann well-posedness and Neumann-trace regularity. •Forj= 2,(v2, p2)solves a similar Stokes system with right-hand-side given by (p1,∇·v1), each bounded by C∥t∥, hence ∥v2∥H1≤C∥t∥. Step 3. Boundary-trace estimate. Letγ:H1(D)→H1/2(∂D)be the continuous trace operator, ∥γ(w)∥H1/2(∂D)≤Ctr∥w∥H1(D). Then u˜λ−u0=˜λ−1v1+˜λ−2v2+R2(˜λ), so by γ-continuity and the triangle inequality, ∥u˜λ−u0∥H1/2(∂D)≤Ctr ˜λ−1∥v1∥H1+˜λ−2∥v2∥H1+∥R2(˜λ)∥H1 . By Step 2 and the remainder estimate, for ˜λ≫1one obtains ∥u˜λ−u0∥H1/2(∂D)≤Ctr C1˜λ−1+C1˜λ−2+CR˜λ−3 ∥t∥H−1/2(∂D)≤C˜λ−1∥t∥H−1/2(∂D), where C= 2Ctrmax{C1, CR}depends only on D, d, ˜µand the C1,1-character of ∂D. This com- pletes the proof. [Correct] The proof logic here is clear, but the process is not detailed enough. comment : None type: lemma Lemma 45. 
(Sharp Boundary-Trace Asymptotics with First-Order Corrector) Let Ω⊂Rdbe a bounded C1,1domain and for ε∈(0, ε0)letDε⊂Ωbe the ε-periodic union of identical C1,1 inclusions, with complement Ωε:= Ω\Dεconnected. Fix background Lamé parameters (λ, µ)on Ωεand high-contrast parameters (˜λ,˜µ)inDεwith˜µ >0and˜λ≫1. Given a boundary-traction g∈H−1 2(∂Ω)orthogonal to all rigid motions, let uεsolve the transmission problem (elastic in Ωε, compressible-elastic in Dε) with traction gon∂Ω. Let (ulim, plim)be the incompressible-limit solution (elastic in Ωε, Stokes in Dε), and for each inclusion-cell DletvD 1be the first-order corrector 70 from the asymptotic expansion of the local Neumann-problem (cf. Lemma 28). Define the global first-order corrector U(1)(x) :=˜λ−1X cellsD⊂DεχD(x)vD 1(x), where each vD 1is the pull-back of the unit-cell corrector to D. Then there is a constant C, independent ofεand of large ˜λ, such that (uε−ulim−U(1))|∂Dε H1 2(∂Dε)≤C˜λ−2∥g∥H−1 2(∂Ω). correctness : True Proof. We replace the incomplete “boundary-trace lemma” in the original sketch by a resolvent- expansion argument for the Neumann-to-Dirichlet map on a C1,1inclusion. The proof proceeds in three main steps. 1. Traction-Jump Decomposition on each inclusion. On each ε–inclusion Dwe denote by Nthe outward normal and by t:= (−plimI+ 2˜µD(ulim))Nthe Stokes-limit traction. From Lemma 16 (Estimate of the Traction-Jump) we have the exact decomposition (σ(˜λ,˜µ)(uε)N)−=t+JD ε, with the residual jump ∥JD ε∥H−1/2(∂D)≤C∥plim∥L2(D), uniformly in ε,˜λ. In particular the total Neumann datum on ∂Dfor the compressible-elastic problem ist+JD ε. 2. Local Neumann-to-Dirichlet expansion and first-order corrector. Fix one reference inclusion D0 (say of unit size) and write Λ(˜λ)for its Neumann-to-Dirichlet map Λ(˜λ) :H−1/2(∂D0)→H1/2(∂D0), t7→u˜λ|∂D0, where u˜λsolves ˜µ∆u+
(˜λ+ ˜µ)∇(divu) = 0 inD0, σ(˜λ,˜µ)(u)N=ton∂D0, normalized to kill rigid motions. By combining the asymptotic-expansion of the resolvent (Lemma 28 and Lemma 27) with the Lipschitz-dependence estimate (Lemma 23) one shows that Λ(˜λ)admits a convergent expansion in powers of 1/˜λ: Λ(˜λ) = Λ( ∞) +˜λ−1Λ1+R(˜λ),∥R(˜λ)∥H−1/2→H1/2≤C˜λ−2. Here Λ(∞)is the Stokes Neumann-to-Dirichlet map, and Λ1is the first-order corrector operator. Pulling back to any physical cell D=εD0+nwe obtain the same expansion on ∂D. Applying it to the datum t+JD εgives uε|∂D= Λ( ˜λ)[t+JD ε] = Λ( ∞)t+˜λ−1Λ1t+˜λ−1Λ1JD ε+R(˜λ)[t+JD ε]. By definition the Stokes-difference ulim|∂D= Λ(∞)t, and the prescribed first-order corrector on ∂D isvD 1|∂D= Λ 1t. Hence the boundary-trace remainder on Dis RD 2:=uε|∂D−ulim|∂D−˜λ−1vD 1|∂D=˜λ−1Λ1JD ε+R(˜λ)[t+JD ε]. Using the operator bounds and ∥JD ε∥H−1/2≤C∥plim∥L2(D),∥t∥H−1/2≤C∥plim∥L2(D), we obtain for each cell D ∥RD 2∥H1/2(∂D)≤˜λ−1∥Λ1∥∥JD ε∥H−1/2+∥R(˜λ)∥(∥t∥H−1/2+∥JD ε∥H−1/2)≤C˜λ−2∥plim∥L2(D). 71 3. Global assembly over all cells. By construction the H1/2–norm on ∂Dεis the ℓ2–sum of the norms on each inclusion boundary: ∥(uε−ulim−U(1))|∂Dε∥2 H1/2(∂Dε)=X cellsD∥RD 2∥2 H1/2(∂D). Hence, using the per-cell estimate above and the uniform pressure-estimate (Lemma 24) ∥plim∥L2(Dε)≤C∥g∥H−1/2(∂Ω), we get ∥(uε−ulim−U(1))|∂Dε∥H1/2(∂Dε)≤C˜λ−2 X cells∥plim∥2 L2(D)!1/2 =C˜λ−2∥plim∥L2(Dε)≤C′˜λ−2∥g∥H−1/2(∂Ω). This completes the proof of the claimed sharp O(˜λ−2)estimate in the H1/2–norm on ∂Dε. comment : None type: lemma Lemma 46. (Global H1-Error at Rate O(˜λ−1/2)) Retain all of the standing Assumptions 0–8 and notation. Let uε∈H1(Ωε;Rd) be the solution of the full high-contrast transmission problem with Lamé parameters (λ, µ)inΩεand (˜λ,˜µ)inDε(˜µ >0,˜λ+ 2˜µ/d > 0,˜λ≫1), subject to the prescribed boundary traction g∈H−1/2(∂Ω;Rd),Z ∂Ωg·r= 0∀rigid motions r. Let(ulim, plim)be the incompressible-limit (Stokes–elastic) transmission-pair. 
Then there is a constant C, independent of εand of ˜λ≳1, such that ∥uε−ulim∥H1(Ωε)≤C˜λ−1/2∥g∥H−1/2(∂Ω). correctness : True Proof. Proof. We denote the exterior error by wext:=uϵ−ulimonΩϵ= Ω\Dϵ. Step 1. Exterior H1–energy by Dirichlet trace. By the corrected Lemma 25 (Global Exterior-Energy Error-Estimate) we have a uniform constant C1so that ∥wext∥H1(Ωϵ)≤C1∥(uϵ−ulim)∥H1 2(∂Dϵ). Step 2. Boundary-trace convergence on each inclusion. Fix one inclusion D= one connected component of Dϵ, and write text= (σ(ulim)N)+∈H−1 2(∂D). OnDthe two Neumann problems are (1) compressible-elastic with parameter ˜λ: solution uϵ|D, (2) incompressible-Stokes with traction text: solution ulim|D. By the sharp resolvent-convergence Lemma 27 (or its equivalent Neumann-to-Dirichlet rate) there is C2independent of ˜λsuch that on ∂D ∥uϵ−ulim∥H1 2(∂D) ≤C2˜λ−1 2∥text∥H−1 2(∂D). Summing over all inclusions (they are disjoint and uniformly Lipschitz) gives ∥uϵ−ulim∥H1 2(∂Dϵ)≤C2˜λ−1 2∥text∥H−1 2(∂Dϵ). Step 3. Bound textby the given g. Since (ulim, plim)solves the well-posed Stokes–elastic transmission system in Ωwith traction gon∂Ω, the uniform stability Lemma 15 gives ∥ulim∥H1(Ω)+∥plim∥L2(Dϵ)≤C3∥g∥H−1 2(∂Ω). Moreover the Neumann-trace continuity es- timate (Lemma 14) on each Lipschitz inclusion shows ∥(σ(ulim)N)+∥H−1 2(∂Dϵ)≤ C4 λ∥divulim∥L2(Ωϵ)+µ∥∇ulim∥L2(Ωϵ) ≤C5∥ulim∥H1(Ω) ≤C6∥g∥H−1 2(∂Ω). Hence ∥text∥H−1 2(∂Dϵ)≤C6∥g∥H−1 2(∂Ω). 72 Step 4. Combine Steps 1–3. Putting the three estimates together, ∥uϵ−ulim∥H1(Ωϵ)≤C1∥uϵ−ulim∥H1 2(∂Dϵ) ≤C1C2˜λ−1 2∥text∥H−1 2(∂Dϵ) ≤C1C2C6˜λ−1 2∥g∥H−1 2(∂Ω). Setting C=C1C2C6completes the proof of the stated O(˜λ−1 2)–rate. [Correct] The process here is reasonable and correct. comment : None type: lemma Lemma 47. (Existence and H1–Bound for the Global First-Order Corrector, corrected statement) Retain all the notation and hypotheses of Lemmas 16–17 for the high-contrast transmission problem in the perforated domain Ωε= Ω\Dε, D ε=N[ i=1Di ε, and let (ulim,
plim)be the coupled Stokes–elastic limit solution (Stokes in Dε, elasticity in Ωε) with prescribed boundary traction g∈H−1/2(∂Ω). Define the global first-order corrector U(1)∈ H1(Ω;Rd), unique up to rigid motions, as the solution of the homogeneous Lamé transmission- Neumann problem   eµ∆U(1)+ (eλ+eµ)∇(divU(1)) = 0 inDε, µ∆U(1)+ (λ+µ)∇(divU(1)) = 0 inΩε, σ(eλ,eµ)(U(1))N|−−σ(λ,µ)(U(1))N|+=−plimN on∂Dε, σ(λ,µ)(U(1))N= 0 on∂Ω, subject to the compatibility that the data −plimNannihilates all rigid motions. Then there is C >0, independent of εandeλ, such that ∥U(1)∥H1(Ω)≤C∥plim∥L2(Dε). Moreover, if one sets the single-corrector approximation euε:=ulim+eλ−1U(1), then the resulting traction-jump on ∂Dε, J(1) ε:= σ(eλ,eµ)(euε)N −− σ(λ,µ)(euε)N +, satisfies the uniform bound ∥J(1) ε∥H−1/2(∂Dε)≤C∥plim∥L2(Dε), withCagain independent of εandeλ. correctness : True Proof. We divide the proof into two steps. 1. Existence and H1–estimate for U(1). Define the space V:= v∈H1(Ω;Rd) :Z ∂Ωv·r dS+NX i=1Z ∂Diεv·r dS= 0∀rigid motions r . OnVconsider the bilinear form a(v, w) :=Z Ωε 2µ D(v) :D(w)+λ(divv)(div w) dx+Z Dε 2eµ D(v) :D(w)+(eλ+eµ) (div v)(div w) dx. 73 By uniform Korn’s inequality (Lemma 11) and the admissibility λ+ 2µ/d > 0,eλ+ 2eµ/d > 0, there isα >0, independent of εandeλ, such that a(v, v)≥α∥v∥2 H1(Ω)∀v∈V. The linear functional L(w) :=NX i=1 −plimN, w H−1/2(∂Diε), H1/2(∂Diε) is well defined because Lemma 21 gives ∥ −plimN∥H−1/2(∂Dε)≤C∥plim∥L2(Dε). [Vague] This setting needs more verification. Moreover −plimNannihilates all rigid motions: for any rigid r, −plimN, r =−Z ∂Dεplim(r·N)dS=−Z Dε(divr)plimdx= 0. By Lax–Milgram there is a unique U(1)∈Vsolving a U(1), w =L(w)∀w∈V, and coercivity yields ∥U(1)∥H1(Ω)≤α−1∥L∥V∗≤C∥plim∥L2(Dε). 2. Traction-jump estimate for euε. Set euε:=ulim+eλ−1U(1). On the outer boundary ∂Ω, since U(1)has zero exterior traction, the traction of euεequals that of ulim, hence matches the prescribed g. 
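Step 1's Lax–Milgram argument has a transparent finite-dimensional analogue: the coercive bilinear form corresponds to a positive-definite matrix, and the stability bound ∥U(1)∥ ≤ α−1∥L∥ becomes an eigenvalue estimate. A minimal sketch (the random matrix and functional are illustrative stand-ins, not the actual elastic form a(·,·)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Stand-in for the coercive bilinear form a(v, w) = w @ A @ v:
# a symmetric positive-definite matrix, so a(v, v) >= alpha |v|^2.
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
alpha = np.linalg.eigvalsh(A).min()   # coercivity constant alpha > 0

L = rng.standard_normal(n)            # stand-in linear functional L(w) = L @ w

# Lax-Milgram in finite dimensions: the unique solution of a(U, w) = L(w).
U = np.linalg.solve(A, L)

# Stability estimate mirroring ||U^(1)||_{H^1} <= alpha^{-1} ||L||_{V*}.
assert np.linalg.norm(U) <= np.linalg.norm(L) / alpha + 1e-12
```

For a symmetric positive-definite matrix this bound is exactly the spectral estimate ∥A−1∥ = 1/λmin(A), which is how the ε-uniform Korn constant enters the corrector estimate.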
On each inclusion boundary ∂Dε, the jump of the limit-solution traction vanishes:  σ(ulim)N ∂Dε= 0 by the transmission condition in the Stokes–elastic limit. Therefore J(1) ε= σ(euε)N = σ(ulim)N |{z} =0+eλ−1 σ(U(1))N =−eλ−1plimN. Hence, using Lemma 21 again, ∥J(1) ε∥H−1/2(∂Dε)=eλ−1∥plimN∥H−1/2(∂Dε)≤C∥plim∥L2(Dε), withCindependent of εandeλ. This completes the proof. comment : None type: lemma Lemma 48. (Global H1–Error at Rate O(λ−1)) Retain all of the standing Assumptions 0–8 and notation. Let ue∈H1(Ωe;Rd)be the solution of the full high-contrast transmission problem in Ωwith background Lamé parameters (λ, µ)onΩeand(˜λ,˜µ)inDe,˜µ > 0,˜λ≫1, subject to the prescribed boundary traction g∈H−1/2(∂Ω;Rd),R ∂Ωg·r= 0∀rigid motions r. Let (ulim, plim)be the incompressible-limit (Stokes–elastic) transmission pair, and let U(1)be the global first-order corrector (normalized so that U(1)=O(1)inH1and solves the homogeneous Lamé system with transmission-Neumann data −plimNon each ∂De, cf. Lemma 32). Define the single- corrector approximation ˜ue:=ulim+˜λ−1U(1), and set the remainder w:=ue−˜ueonΩe. Then there exists C >0, independent of εand˜λ, such that ∥w∥H1(Ωe)≤C˜λ−1∥g∥H−1/2(∂Ω). In particular ∥ue−ulim∥H1(Ωe)≤C˜λ−1∥g∥H−1/2(∂Ω). 74 correctness : True Proof. Proof. 1. By construction (Lemma 32), •U(1)solves the homogeneous Lamé-transmission problem in De∪Ωewith zero Neumann data on ∂Ωand with imposed jump σ(˜λ,˜µ)(U(1))N|−−σ(λ,µ)(U(1))N|+=−plimN on each ∂De, •∥U(1)∥H1(Ω)≤C∥plim∥L2(De). 2. Define the single-corrector approximation ˜ue=ulim+˜λ−1U(1). Its traction-jump on ∂Deis J(1) e:= σ(˜λ,˜µ)(˜ue)N −− σ(λ,µ)(˜ue)N +=˜λ−1 σ(˜λ,˜µ)(U(1))N|−−σ(λ,µ)(U(1))N|+ . But by the defining jump of U(1), σ(˜λ,˜µ)(U(1))N|−−σ(λ,µ)(U(1))N|+=−plimN, so J(1) e=−˜λ−1plimN. Hence by Lemma 21 ( pN∈H−1/2(∂De)) and Lemma 24 ( ∥plim∥L2≤C∥g∥), ∥J(1) e∥H−1/2(∂De)≤˜λ−1∥plimN∥H−1/2≤C˜λ−1∥g∥H−1/2(∂Ω). 3. Set w:=ue−˜ueonΩe. Then wsolves the homogeneous Lamé system
in Ωewith zero Neumann data on ∂Ωand with traction-jump J(1) eon∂De. By the uniform exterior-energy estimate (Lemmas 9 and 18), ∥w∥H1(Ωe)≤C∥J(1) e∥H−1/2(∂De)≤C˜λ−1∥g∥H−1/2(∂Ω). 4. Finally, since ue−ulim= (ue−˜ue) + ˜ue−ulim =w+˜λ−1U(1), and∥U(1)∥H1=O(∥plim∥) =O(∥g∥), the same O(˜λ−1)–rate holds for ∥ue−ulim∥H1. This completes the proof. comment : None type: lemma Lemma 49. (Sharp local H1–error at rate O(˜λ−1)for one inclusion— C1,1-case) Let D⊂Rdbe a bounded C1,1domain and fix ˜µ >0. For each ˜λ≫1letu˜λbe the unique (modulo rigid motions) H1–solution of the compressible-elastic Neumann problem ˜µ∆u˜λ+ (˜λ+ ˜µ)∇(divu˜λ) = 0 inD, σ(˜λ,˜µ)(u˜λ)N=ton∂D, witht∈H−1/2(∂D;Rd)annihilating all rigid motions. Let (u0, p0)be the incompressible-Stokes Neumann solution in Dwith the same traction t, i.e. ˜µ∆u0+∇p0= 0,divu0= 0 inD, (−p0I+ 2˜µ D(u0))N=ton∂D. Then there is a constant C, depending only on D,dand˜µ(but noton˜λort), such that for all ˜λ≫1 ∥u˜λ−u0∥H1(D)≤C˜λ−1∥t∥H−1/2(∂D). 75 correctness : True Proof. Proof. The proof rests on the first-order asymptotic expansion of u˜λin powers of 1/˜λ(see Lemma 28) and on uniform a priori estimates for the Stokes Neumann problem. 1. (Asymptotic expansion.) By Lemma 28 (valid in the C1,1-setting) we have for ˜λ≫1the decomposition u˜λ=u0+˜λ−1v1+R2(˜λ), where the remainder R2(˜λ)satisfies the sharp bound ∥R2(˜λ)∥H1(D)≤C˜λ−2∥t∥H−1 2(∂D). Hence ∥u˜λ−u0∥H1(D)≤˜λ−1∥v1∥H1(D)+C˜λ−2∥t∥H−1 2(∂D). 2. (Estimate of the corrector v1.) By construction v1is the unique H1–solution (modulo rigid motions) of the homogeneous Stokes system with Neumann data equal to the “elastic pressure” p0, namely ˜µ∆v1+∇p1= 0,divv1= 0 inD, (−p1I+ 2˜µD(v1))N=p0N on∂D. Standard Neumann-Stokes theory on C1,1domains yields the uniform stability estimate ∥v1∥H1(D)+∥p1∥L2(D)≤C∥p0N∥H−1 2(∂D). Moreover, by Lemma 21 ( pNis a continuous H−1 2-trace) one has ∥p0N∥H−1 2(∂D)≤C∥p0∥L2(D). Combining these gives ∥v1∥H1(D)≤C∥p0∥L2(D). 3. (Pressure bound.) 
Likewise the incompressible-Stokes Neumann problem that defines (u0, p0) satisfies the uniform a priori bound ∥u0∥H1(D)+∥p0∥L2(D)≤C∥t∥H−1 2(∂D). 4. (Conclusion.) Putting 1–3 together: ∥u˜λ−u0∥H1(D)≤˜λ−1∥v1∥H1(D)+C˜λ−2∥t∥H−1 2(∂D) ≤C˜λ−1∥p0∥L2(D)+C˜λ−2∥t∥H−1 2(∂D) ≤C′˜λ−1∥t∥H−1 2(∂D), which is the desired estimate. [Vague] The proof here just offers the conclusions but not the detailed process. comment : None type: lemma Lemma 50. (Global H1-Error at Rate O(˜λ−1)) Under Assumptions 0–8, let •uebe the solution of the high-contrast Lamé–transmission problem in Ωwith background Lamé parameters (λ, µ)onΩe= Ω\Deand high-contrast parameters (˜λ,˜µ)inDe(˜µ >0, ˜λ≫1), driven by a boundary traction g∈H−1 2(∂Ω)orthogonal to all rigid motions. •(ulim, plim)be the incompressible-limit (Stokes–elastic) transmission pair for the same data g. •U(1)∈H1(Ω)be the global first-order corrector characterized (modulo rigid motions) by 76 –inDe:L˜λ,˜µU(1)= 0, –inΩe:Lλ,µU(1)= 0, with transmission-Neumann data σ(˜λ,˜µ)(U(1))N|−−σ(λ,µ)(U(1))N|+=−plimNon∂De, and σ(λ,µ)(U(1))N= 0on∂Ω, whose existence and bound ∥U(1)∥H1(Ω)≤C∥g∥H−1 2(∂Ω)follow from Lemma 32. Define the single-corrector approximation ˜ue:=ulim+˜λ−1U(1). Then the remainder w:=ue−˜uesatisfies the sharp global estimate ∥w∥H1(Ωe)≤C˜λ−1∥g∥H−1 2(∂Ω), withCindependent of εand˜λ. correctness : True Proof. Proof. We split the argument into three steps, invoking the key lemmas from the background. Step 1. Construction of U(1)and its H1-bound. By Lemma 32 (Existence and H1–Bound for the Global First-Order Corrector) there is a unique U(1)∈H1(Ω)(modulo rigid motions) solving the homogeneous Lamé system in Ωeand in Dewith σ(˜λ,˜µ)(U(1))N|−−σ(λ,µ)(U(1))N|+=−plimNon∂De, σ(λ,µ)(U(1))N= 0on∂Ω, and satisfying ∥U(1)∥H1(Ω)≤C∥plim∥L2(De). By the uniform pressure-estimate (Lemma 24) we have ∥plim∥L2(De)≤C∥g∥H−1 2(∂Ω), and hence ∥U(1)∥H1(Ω)≤C∥g∥H−1 2(∂Ω). Step 2. Boundary-trace improvement of the single-corrector approximation. Set ˜ue=ulim+˜λ−1U(1). 
Then on each inclusion boundary ∂De the boundary-trace error τ := (ue − ulim − ˜λ−1U(1))|∂De satisfies the sharp estimate
∥τ∥H1 2(∂De)=∥ue|∂De−ulim|∂De−˜λ−1U(1)|∂De∥H1 2(∂De)≤C˜λ−1∥g∥H−1 2(∂Ω), by Lemma 30 (Boundary-Trace Rate-Improvement via First-Order Asymptotics). Step 3. Interior H1-estimate from boundary H1 2-error. Define w:=ue−˜ueonΩe. Then wsolves the homogeneous Lamé system in Ωe, with zero traction on ∂Ωand Dirichlet data w|∂De=τ. By Lemma 25 (Global Exterior-Energy Error-Estimate), one has the a priori bound ∥w∥H1(Ωe)≤C∥w∥H1 2(∂De) and hence, combining with Step 2, ∥w∥H1(Ωe)≤C′˜λ−1∥g∥H−1 2(∂Ω). This completes the proof of the O(˜λ−1)-estimate. 77 comment : None type: lemma Lemma 51. (Improved Global H1–Error at Rate O(˜λ−2)After Single–Corrector) Retain all the notation and hypotheses of Lemma 35, and in addition assume that each inclusion boundary ∂Diis C1,1. Letuebe the high-contrast transmission solution and (ulim, plim)the Stokes–elastic limit, and letU(1)be the global first-order corrector of Lemma 32, so that ˜ue:=ulim+˜λ−1U(1). Define the remainder w:=ue−˜ueonΩe. Then there is C >0(independent of εand˜λ) such that ∥w∥H1(Ωe)≤C˜λ−2∥g∥H−1/2(∂Ω). correctness : True Proof. We break the proof into three steps. Step 1. Improved boundary-trace estimate. By Lemma 30 (Sharp-boundary-trace asymptotics with first-order corrector), under the C1,1assumption one has the boundary-trace improvement ∥(uε−ulim−˜λ−1U(1))|∂Dε∥H1 2(∂Dε)≤C1˜λ−2∥g∥H−1 2(∂Ω). Setδu:= (uε−˜uε)|∂Dε. Then ∥δu∥H1 2(∂Dε)≤C1˜λ−2∥g∥H−1 2(∂Ω). (1) Step 2. From boundary-trace error to traction-jump error. We wish to bound the net traction-jump J:=σ(λ,µ)(˜uε)N|+−σ(λ,µ)(uε)N|−inH−1 2(∂Dε). Since on each ∂Dithe elasticity Neumann- to-Dirichlet map ΛN:H1 2(∂Di)→H−1 2(∂Di), v7→σ(v)N is a bounded linear operator (by standard Neumann-trace continuity, Lemma 14), there is C2depend- ing only on the C1,1geometry and background parameters such that ∥J∥H−1 2(∂Dε)=∥ΛN(δu)∥H−1 2≤C2∥δu∥H1 2(∂Dε). Combining with (1) gives ∥J∥H−1 2(∂Dε)≤C2C1˜λ−2∥g∥H−1 2(∂Ω). 
(2) [Vague] This proof lacks an intermediate derivation process and only gives a conclusion. Step 3. Energy estimate in the exterior. The remainder wsolves in Ωεthe homogeneous Lamé system with zero traction on ∂Ωand traction-jump Jon∂Dε. By the uniform exterior energy estimate (Lemma 18), there is C3(independent of ε,˜λ) such that ∥w∥H1(Ωε)≤C3∥J∥H−1 2(∂Dε). Substitute (2) into this bound to conclude ∥w∥H1(Ωε)≤C3·(C2C1)˜λ−2∥g∥H−1 2(∂Ω)=:C˜λ−2∥g∥H−1 2(∂Ω), as claimed. comment : None type: lemma Lemma 52. uϵ−ulim H1/2(∂Diϵ)≤Ceλ−1∥g∥H−1/2(∂Ω). correctness : True 78 Proof. We fix one inclusion D:=Di ϵ, which by Assumption 1 has a C1,1boundary ∂D. Denote the traction-jump on ∂Dby Jϵ:= σ(eλ,eµ)(uϵ)N −− σ(λ,µ)(ulim)N +. By Lemma 17 (Estimate of the Traction-Jump), one splits Jϵ=plimN+Rϵ, and there is C1independent of ϵ,eλsuch that ∥Rϵ∥H−1/2(∂D)≤C1∥plim∥L2(D). On the other hand Lemma 24 (Uniform Pressure-Estimate) gives ∥plim∥L2(Dϵ)≤C2∥g∥H−1/2(∂Ω). Hence ∥Jϵ∥H−1/2(∂D)≤ ∥plimN∥H−1/2(∂D)+∥Rϵ∥H−1/2(∂D)≤C3∥g∥H−1/2(∂Ω). (6) Next consider the difference w:=uϵ−ulim onD. Then w∈H1(D)solves the compressible-elastic Neumann problem eµ∆w+ (eλ+eµ)∇(divw) = 0 inD, σ(eλ,eµ)(w)N=Jϵon∂D, withR ∂DJϵ·r= 0for every rigid motion r. By Lemma 29 (Boundary-Trace Rate–Improvement via First-Order Asymptotics) on the C1,1domain D, there is C4depending only on D, d,eµand the C1,1-character of ∂D, such that ∥w∥H1/2(∂D)≤C4eλ−1∥Jϵ∥H−1/2(∂D). Combining with (6) yields ∥uϵ−ulim∥H1/2(∂D)=∥w∥H1/2(∂D)≤Ceλ−1∥g∥H−1/2(∂Ω), withC=C3C4independent of ϵ,eλ. Since there are finitely many inclusions, the same bound holds uniformly over each ∂Di ϵ. This completes the proof. [Correct] The conclusion presented herein is methodologically derived through an extension of previously established procedural frameworks. comment : None type: lemma Lemma 53. 
(Global Exterior-Energy Estimate in Terms of the Interior Dirichlet Trace) Under Assumptions 1–2 (so that each perforated domain Ωε = Ω\Dε is connected Lipschitz with constants independent of ε), let wext := uε − ulim on Ωε, where uε is the
full high-contrast Lamé solution and ulimthe incompressible-limit (Stokes–elastic) solution, both driven by the same boundary traction gon∂Ω. Then there exists C >0, independent ofεand of the high-contrast parameter ˜λ, such that ∥wext∥H1(Ωε)≤C∥wext∥H1/2(∂Dε). Equivalently, ∥uε−ulim∥H1(Ωε)≤C∥uε−ulim∥H1/2(∂Dε). correctness : True Proof. Proof. 1. Problem for wext. By construction (see Lemma 18) wextsolves the homogeneous Lamé system in Ωεwith zero Neumann (traction) data on the outer boundary ∂Ωand prescribed Dirichlet data wext|∂Dε=:φ∈H1/2(∂Dε;Rd). 79 We wish to show the H1–norm of wextinΩεis controlled by ∥φ∥H1/2(∂Dε)with a constant indepen- dent of ε. 2. Variational set-up. Define the closed subspace V0:={v∈H1(Ωε;Rd) :v= 0on∂Dε, σ(v)N= 0on∂Ω} and the bilinear form a(v, w) :=Z Ωε 2µ D(v) :D(w) +λ(divv)(divw) dx. By uniform coercivity (Lemma 10) together with Korn’s inequality on perforated domains (Lemma 11), there exists α >0independent of εsuch that a(v, v)≥α∥v∥2 H1(Ωε)∀v∈V0. 3. Right-inverse of the trace. Since Ωεis a bounded Lipschitz domain with uniform constants, the Dirichlet trace operator T:H1(Ωε;Rd)→H1/2(∂Dε;Rd), T (u) =u|∂Dε is surjective and admits a bounded right-inverse E:H1/2(∂Dε;Rd)→H1(Ωε;Rd), with E(φ)|∂Dε=φ, E (φ)|∂Ω= 0, and ∥E(φ)∥H1(Ωε)≤Ctr∥φ∥H1/2(∂Dε), where Ctrdepends only on the uniform Lipschitz character of Ωε(hence independent of ε). [Vague] The application of the theorem necessitates verification of its applicability conditions. [Error] The form of function space is not accurate. 4. Decomposition of the solution. Set ψ:=E(φ), z :=wext−ψ. Then z∈V0(indeed zvanishes on ∂Dεand has zero traction on ∂Ω), and wext=ψ+z. In the weak form we have, for all v∈V0, a(wext, v) = 0 (since wextis the homogeneous-system solution) , so a(z, v) =−a(ψ, v). 5. Application of coercivity. Taking v=zin the last identity and using Cauchy–Schwarz plus coercivity, α∥z∥2 H1(Ωε)≤ |a(ψ, z)| ≤Ca∥ψ∥H1(Ωε)∥z∥H1(Ωε), where Cadepends only on the Lamé parameters (λ, µ). 
Hence ∥z∥H1(Ωε)≤(Ca/α)∥ψ∥H1(Ωε) and therefore ∥wext∥H1(Ωε)≤ ∥ψ∥H1(Ωε)+∥z∥H1(Ωε) ≤(1 +Ca/α)∥ψ∥H1(Ωε) ≤C∥φ∥H1/2(∂Dε), withC= (1 + Ca/α)Ctrindependent of εand˜λ. 6. Conclusion. Noting φ=wext|∂Dε= (uε−ulim)|∂Dε, we arrive at the asserted estimate ∥uε−ulim∥H1(Ωε)≤C∥uε−ulim∥H1/2(∂Dε). This completes the proof. 80 [Correct] The proof here is correct deprivation. comment : None type: theorem Theorem 3. The problem construction is following. A real pair (λ, µ)is called an admissible Lamé parameter pair if it satisfies: µ >0and dλ+ 2µ >0 (1) The static elasticity system (Lamé system) corresponding to the Lamé parameter pair (λ, µ)is: Lλ,µu:=µ∆u+ (λ+µ)∇divu (2) where the displacement field is denoted as u= (u1, . . . , ud), and its divergence is: divu=dX i=1∂ui ∂xi The admissibility condition ensures the ellipticity of the Lamé operator, and physical laws guarantee that natural materials always satisfy this condition. The Lamé operator can be expressed as ∇ ·σ(u), where the stress tensor is: σ(u) :=λ(∇ ·u)Id+ 2µD(u) (3) D(u) =1 2(∇+∇T)u=1 2(∂iuj+∂jui)ij (4) Here, Idis the identity matrix of order d, andDrepresents the symmetrized differential operator. The normal derivative (boundary traction) on the boundary of a region Eis defined as: ∂u ∂νλ,µ ∂E:=σ(u)N=λ(divu)N+ 2µD(u)N on∂E (5) where Nis the unit outward normal vector on the boundary. We use the standard Sobolev space H1(Ω)and its trace space H1 2(∂E)on the boundary, with the dual space H−1 2(∂E). Consider the space of rigid body motions in Rd: R:= r= (r1, . . . ,
r d)T:D(r) = 0 inRd (6) The dimension of Risd(d+1) 2, spanned by the following basis vectors: e1, . . . ,ed, xjei−xiej,for1≤i < j≤d where eidenotes the standard basis vector. These basis vectors are denoted as rj, j= 1, . . . ,d(d+1) 2. Define the space orthogonal to rigid body motions: H−1 2 R(∂Dϵ) :=n ϕ∈H−1(∂Dϵ) : (ϕ,r)(H1 2(∂Dϵi),H−1 2(∂Dϵi))= 0,∀r∈Rand1≤i≤No (7) Similarly, define H1 2 R(∂Dϵ)and boundary spaces. Consider the following transmission problem:   Lλ,µuϵ= 0 inΩ\Dϵ, Leλ,eµuϵ= 0 inDϵ, uϵ|−=u|+and∂uϵ ∂ν(λ,eµ) −=∂uϵ ∂ν(λ,µ) +on∂Dϵ, ∂uϵ ∂ν(λ,µ) ∂Ω=g∈H−1 2 R(∂Ω) and uϵ|∂Ω∈H1 2 R(∂Ω). (8) This is equivalent to: (Lλ(x),µ(x)uϵ=∇ ·[λ(x)(∇ ·uϵ)Id+ 2µ(x)D(uϵ)] = 0 inΩ, ∂uϵ ∂νλ,µ ∂Ω=g∈H−1 2 R(∂Ω),uϵ|∂Ω∈H1 2 R(∂Ω), 81 (9) where λ(x) =λIΩ\Dϵ+eλIDϵ, µ(x) =µIΩ\Dϵ+eµIDϵ. Suppose the solution of the equation (9) above is uϵ. The existence and uniqueness of the solution uϵare guaranteed by the ellipticity condition (1) and the Lax-Milgram theorem. Finally, the homogenization needed to be solved is the following. When eλ→ ∞ andeµis fixed, the coupled system in the incompressible inclusion limit is obtained:   Lλ,µuϵ= 0 inΩ\Dϵ, Leµ(uϵ, pϵ) = 0 anddivuϵ= 0 inDϵ, uϵ|−=uϵ|+and∂(uϵ,pϵ) ∂ν(∞,eµ) −=∂uϵ ∂ν(λ,µ) +on∂Dϵ, ∂uϵ ∂ν(λ,µ) ∂Ω=g∈H−1 2 R(∂Ω) and uϵ|∂Ω∈H1 2 R(∂Ω), (10) where Leµ(uϵ, pϵ) =eµ∆uϵ+∇pϵrepresents the Stokes operator with viscosity coefficient eµ, andpϵis the pressure field. Its normal derivative is defined as∂(u,p) ∂ν(∞,µ) −:=pN+ 2µD(u)N. Suppose the solution of this limit problem (10) is ulim. Please prove for some constant C, there has ∥ulim−uϵ∥ ≤C eλ∥g∥H−1 2(∂Ω). correctness : True Proof. Combining the two conjectures above yields the desired global rate. Indeed, from the first conjecture we have for each inclusion ∥wext∥H1/2(∂Dϵ)= uϵ−ulim H1/2(∂Dϵ)≤C1eλ−1∥g∥H−1/2(∂Ω). Substituting into the second conjecture, ∥uϵ−ulim∥H1(Ωϵ)=∥wext∥H1(Ωϵ)≤C2∥wext∥H1/2(∂Dϵ)≤C eλ∥g∥H−1/2(∂Ω), where C=C1C2. 
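The O(1/˜λ) rate just combined, like the sharper rates earlier in this appendix, ultimately rests on a truncated Neumann series for the resolvent, A(˜λ)−1 = Σj (1/˜λ)j (A∞−1N)j A∞−1 + O(˜λ−(K+1)). That remainder scaling is easy to sanity-check on a finite-dimensional analogue; the random matrices below are illustrative stand-ins for A∞ and N, not the actual operators:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A_inf = np.eye(d) + 0.1 * rng.standard_normal((d, d))  # stand-in for A_infinity
N = rng.standard_normal((d, d))                        # stand-in perturbation
A_inf_inv = np.linalg.inv(A_inf)

def truncated_resolvent(lam, K):
    """Partial Neumann series sum_{j=0}^{K} lam^{-j} (A_inf^{-1} N)^j A_inf^{-1}."""
    M = A_inf_inv @ N
    S = np.zeros((d, d))
    term = np.eye(d)
    for _ in range(K + 1):
        S += term @ A_inf_inv
        term = term @ (M / lam)
    return S

K = 1
errs = [np.linalg.norm(np.linalg.inv(A_inf - N / lam) - truncated_resolvent(lam, K), 2)
        for lam in (1e2, 1e3)]
# Truncation at K leaves an O(lam^{-(K+1)}) remainder: increasing lam tenfold
# should shrink the operator-norm error by roughly 10^(K+1) = 100.
print(np.log10(errs[0] / errs[1]))  # close to 2
```

The same computation with larger K reproduces the O(˜λ−(K+1)) decay claimed in the expansion lemma, provided ˜λ exceeds the norm of A∞−1N so that the series converges.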
This is exactly the estimate required: ∥uϵ − ulim∥H1(Ωϵ) ≤ (C/˜λ)∥g∥H−1/2(∂Ω). Thus the proof of the original statement is complete. comment : None

E Proof of Homogenization Problem Generated by AIM Based on OpenAI o4-mini

type: assumption content : Consider a region D ⊂ Ω ⊂ Rd (d ≥ 2), where Ω represents the domain of an elastic material, and D is the inclusion region with high contrast.
type: assumption content : 1. Ω is a bounded open domain with a connected Lipschitz boundary ∂Ω. 2. D is an open domain with a finite number of connected components, each having a Lipschitz boundary ∂D. 3. Ω\D is connected and has a Lipschitz boundary ∂Ω ∪ ∂D. The connected components of D are denoted as Di, i = 1, . . . , N, where N is finite.
type: assumption content : Given ε ∈ (0, 1), D = Dε consists of small inclusions arranged in an ε-periodic pattern. The construction steps are as follows: 1. Take the unit cell Y = (−1/2, 1/2)d. 2. ω ⊂ Y is a simply connected open subset with a Lipschitz boundary and satisfies dist(ω, ∂Y) > 0. Here, Yf = Y\ω represents the model environment at the unit scale. 3. For a given ε > 0 and n ∈ Zd, denote ε(n + Y) and ε(n + ω) as Y^n_ε and ω^n_ε, respectively. Define the lattice point set Πε as the index set satisfying Y^n_ε ⊂ Ω: Πε := {n ∈ Zd : Y^n_ε ⊂ Ω}. The inclusion region D = Dε and the matrix region Ωε are defined as: Dε := ∪n∈Πε ω^n_ε, Ωε := Ω\Dε. For each fixed ε, the number of connected components of Dε is N = |Πε|. 4. Define Yε as the union of
all ε-cells in Ω, and Kε as the buffer region: Kε = Ω \ (∪n∈Πε Y^n_ε), Yε = Ω\Kε. For Dε constructed according to Assumption 2, it can be verified that all conditions of Assumption 1 are satisfied.
type: hint content : You can use the two-scale expansion method to obtain the cell problem and subsequently utilize the cell problem to define or explore the homogenized equation.
type: hint content : The estimate between the original solution and the limit solution has the form Cϵ^α ∥g∥H−1/2(∂Ω) for some α ∈ (0, 1).
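The cell construction in the assumptions above is easy to realize concretely. A minimal sketch, assuming Ω = (0, 1)² and the unit cell Y = (−1/2, 1/2)² (the choice of Ω is illustrative; any bounded Lipschitz domain works):

```python
import itertools

def lattice_indices(eps, d=2):
    """Pi_eps = { n in Z^d : eps*(n + Y) subset Omega } for Y = (-1/2, 1/2)^d
    and the illustrative choice Omega = (0, 1)^d."""
    # eps*(n+Y) is the product of intervals (eps*(n_i - 1/2), eps*(n_i + 1/2));
    # containment in (0,1)^d means eps*(n_i - 1/2) >= 0 and eps*(n_i + 1/2) <= 1.
    n_max = int(1.0 / eps) + 1
    return [n for n in itertools.product(range(n_max + 1), repeat=d)
            if all(eps * (ni - 0.5) >= 0.0 and eps * (ni + 0.5) <= 1.0 for ni in n)]

eps = 0.1
Pi = lattice_indices(eps)
# For eps = 0.1 the admissible indices satisfy n_i in {1, ..., 9}, so the number
# of inclusions is N = |Pi_eps| = 81; D_eps is the union of eps*(n + omega) over Pi.
print(len(Pi))  # 81
```

Shrinking ε increases N = |Πε| like ε^{−d}, which is why all constants in the lemmas below must be tracked uniformly in ε.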
type: lemma Lemma 54. Conjecture 1. (Uniform Korn Inequality on the periodically perforated domain) Let Ω⊂Rd(d≥2) be a bounded Lipschitz domain. Let Y= −1 2,1 2dandω⊂Ybe a simply-connected open set with Lipschitz boundary, satisfying dist(ω, ∂Y ) =δ >0. For ε∈(0,1), form the ε-periodic perforation Dε=[ k∈Zd, ε(k+Y)⊂Ωε(k+ω), and set Ωε= Ω\Dε. Then there exists a constant C, depending only on Ω,Y, and ω(but not on ε), such that for every u∈H1(Ωε;Rd), ∥u∥H1(Ωε)≤C ∥u∥L2(Ωε)+∥sym∇u∥L2(Ωε) , where sym∇u=1 2(∇u+∇uT). correctness : True Proof. We shall prove the estimate by a covering–partition-of-unity argument and the fact that each perforated cell εYf(with Yf:=Y\ω) is a fixed Lipschitz domain up to scaling. [Correct] The agent designs a proof idea. 1. (Cellwise Korn with uniform constant.) Since Yf=Y\ωis a bounded Lipschitz set, the classical Korn inequality on Yfgives for all v∈H1(Yf;Rd),∥v∥H1(Yf)≤C0 ∥v∥L2(Yf)+∥sym∇v∥L2(Yf) , withC0depending only on Yf. By the scaling x=εyone deduces that on each
ε-cellε(Yf+k) for all v∈H1(ε(Yf+k)),∥v∥H1(ε(Yf+k))≤C0 ∥v∥L2(ε(Yf+k))+∥sym∇v∥L2(ε(Yf+k)) . [Vague] Here the agent considers the Korn inequality but the derivation process is not detailed enough. 2. (Partition of unity subordinate to the periodic tiling.) Let {ψk}k∈Kεbe a smooth partition of unity onΩ: • each ψk∈C∞ c(ε(k+Y)), •0≤ψk≤1,P kψk≡1onΩexcept in a boundary strip of width ≤2ε, • and|∇ψk| ≤C/ε. We also choose a finite set of boundary functions {ψb}supported in the ε-neighborhood of ∂Ωso thatP kψk+P bψb≡1onΩ. By standard Korn–Poincaré on the fixed boundary patches (of size O(ε)), one shows likewise for each boundary patch with cut-off ψb,∥ψbu∥H1≤C ∥u∥L2(supp ψb)+∥sym∇u∥L2(supp ψb) . 84 3. (Summation and absorbing the gradient of cut-off.) Write u=P kuk+P bubwithuk:=ψku, ub:=ψbu. On each interior cell we apply the scaled Korn of Step 1: ∥uk∥H1(ε(Yf+k))≤C0[∥ψku∥L2+∥sym∇(ψku)∥L2] ≤C0 ∥u∥L2(supp ψk)+∥ψksym∇u∥L2+∥u⊗ ∇ψk∥L2 . Since|∇ψk| ≤C/ε andsupp ψk⊂ε(k+Y), we have ∥u⊗ ∇ψk∥L2≤(C/ε)∥u∥L2(supp ψk). By Young’s inequality and the Poincaré-type estimate ∥u∥L2/ε≤C(∥u∥L2+∥sym∇u∥L2), one absorbs the ∇ψk-term into the right-hand side with only a constant depending on C0,C, and the ratioδ/1. Exactly the same argument applies to each boundary patch ψbusing the Korn–Poincaré on a fixed Lipschitz strip. [Vague] The process here should be expressed in combination with mathematical formulas. 4. (Conclusion.) Summing over all kandb, since the overlap of the supports is uniformly bounded, we obtain ∥u∥H1(Ωε)≤C ∥u∥L2(Ωε)+∥sym∇u∥L2(Ωε) , withCdepending only on Ω,Y,ω(through δ) and not on ε. This completes the proof. comment : None type: lemma Lemma 55. Conjecture 2. (Two-Scale Compactness in Periodically Perforated Domains) Let Ω⊂Rd(d≥2) be a bounded Lipschitz domain. Let Y= −1 2,1 2dandω⊂Ya simply-connected open set with Lipschitz boundary satisfying dist(ω, ∂Y )>0, and set Yf=Y\ω(soYfis connected with Lipschitz boundary). 
For ε >0define the ε–periodic perforation Dε=[ k∈Zd:ε(k+Y)⊂Ωε(k+ω),Ωε= Ω\Dε. Suppose {uε} ⊂H1 0(Ωε;Rd),sup ε>0∥uε∥H1(Ωε)<∞. Then there exist •u0∈H1 0(Ω;Rd), •u1∈L2(Ω;H1 per(Yf)/Rd), such that, up to a subsequence, (i)uε→u0strongly in L2(Ω;Rd), (ii)∇uεtwo-scale converges to ∇xu0(x) +∇yu1(x, y)inL2(Ω×Yf;Rd×d). correctness : True Proof. Proof. We give a proof by the periodic-unfolding method, which avoids delicate global extensions and directly yields the corrector structure. 1. Definition and basic bounds. Recall the unfolding operator Tε:L2(Ωε)→L2(Ω×Yf), which on each cell Ωε∩ε(k+Yf)is defined by (Tεv)(x, y) =v(εk+εy), x∈ε(k+Y), y∈Yf, zero elsewhere. Standard properties (see e.g. Cioranescu–Donato) give constants Cindependent of ε such that for all v∈H1(Ωε): ∥Tεv∥L2(Ω×Yf)=∥v∥L2(Ωε), 85 ∥∇xTεv∥L2(Ω×Yf)+∥∇yTεv∥L2(Ω×Yf)≤C∥∇v∥L2(Ωε). Applying this to uεand using the uniform H1(Ωε)–bound, we deduce {Tεuε}is bounded in H1(Ω;L2(Yf))∩L2(Ω;H1(Yf)), hence also bounded in L2(Ω×Yf). [Correct] Here, the agent constructs a natural extension and the continuity control of the norm. 2. Extraction of limits. By the reflexivity of the three spaces, there exist, along a subsequence, U∗(x, y)∈L2(Ω;H1(Yf)), W(x, y)∈L2(Ω×Yf;Rd×d) such that as ε→0: Tεuε⇀ U∗weakly in L2(Ω;H1(Yf)), ∇xTεuε⇀∇xU∗,∇yTεuε⇀ W weakly in L2(Ω×Yf). [Correct] The accurate convergence of the function spaces. 3. Independence of U∗onyand strong L2-convergence. Because ∥∇yTεuε∥L2(Ω×Yf)remains bounded but multiplied by εvanishes in the original variables, one shows by the unfolding-to-two- scale characterization that ∂yiU∗= 0 in the distributional sense. Hence U∗(x, y) =: u0(x)is independent of y, and u0∈H1(Ω). Moreover, the standard compact embedding H1→L2inx implies Tεuε→u0strongly in L2(Ω×Yf), which by unfolding-to-two-scale equivalence yields uε→u0strongly in L2(Ω). [Vague] The process of the
two-scale expansion here is not detailed enough. The proof of the convergence of the function lacks details. 4. Identification of the oscillating part as a y-gradient. We set W(x, y) = lim ε→0∇yTεuεinL2(Ω×Yf). On one hand, by commutation of partial derivatives and the smoothness of the unfolding map, one shows in distributions on Yfthat ∂yjWi=∂yiWj. Since Yfis simply-connected with Lipschitz boundary, the Poincaré (or de Rham) lemma on Yf implies the existence of u1(x,·)∈H1 per(Yf)/Rdsuch that W(x, y) =∇yu1(x, y), and we normalize by ⟨u1(x,·)⟩Yf= 0. [Error] Here the process is lack of deprivation details. And the condition of Poincaré or de Rham lemma should be verified. 5. Passage back to two-scale convergence. By the equivalence between periodic unfolding and two-scale convergence, one concludes that, up to the same subsequence, ∇uεtwo-scale converges to ∇xu0(x) +∇yu1(x, y) inL2(Ω×Yf;Rd×d). All claimed properties follow. comment : None type: lemma Lemma 56. 86 Conjecture 3. (Identification of the Homogenized Elastic–Stokes Limit, rigorous form) Let Ω⊂Rd (d≥2) be a bounded Lipschitz domain and for each ε >0let Dε⊂Ωbe an ε–periodic array of simply-connected, incompressible inclusions with Lipschitz boundary, satisfying the usual separation- of-scales assumptions. Set Ωε= Ω\Dε. Fix an elastic stiffness tensor C:Rd×d sym→Rd×d sym(uniformly positive definite) on the matrix and a viscosity µ >0in the inclusions. Consider the transmission problem: find (uε, pε)∈H1(Ωε;Rd)×L2 0(Dε)such that (i) In the elastic matrix Ωε:−div [C D(uε)] = 0 , (ii) In the Stokes inclusions Dε:−µ∆uε+∇pε= 0,divuε= 0, (iii) On each ∂Dε: continuity of uεand of normal stress, (iv) On ∂Ω:C D(uε)·n=g∈H−1/2 R(∂Ω), plus a rigid-motion normalization. 
Then as $\varepsilon\to0$ one has, up to a subsequence,
$u_\varepsilon\rightharpoonup u_0$ in $H^1(\Omega)$, $\quad u_\varepsilon\to u_0$ in $L^2(\Omega)$, $\quad p_\varepsilon\xrightarrow{\text{two-scale}}p_1(x,y)$ in $\Omega\times Y_i$,
where $u_0\in H^1_R(\Omega;\mathbb{R}^d)$ is the unique solution of the homogenized Lamé system
$-\operatorname{div}_x[C^{\hom}D_x(u_0)]=0$ in $\Omega$, $\quad C^{\hom}D_x(u_0)\cdot n=g$ on $\partial\Omega$,
and the effective fourth-order tensor $C^{\hom}$ is given by the following periodic cell transmission problem: for each fixed symmetric $E\in\mathbb{R}^{d\times d}_{\mathrm{sym}}$ find $(\chi^E_e,\chi^E_i,\pi^E)\in H^1_{\mathrm{per}}(Y_e;\mathbb{R}^d)\times H^1_{\mathrm{per}}(Y_i;\mathbb{R}^d)\times L^2_{\mathrm{per}}(Y_i)/\mathbb{R}$ solving
(1) in the elastic cell $Y_e$: $-\operatorname{div}_y[C(E+D_y\chi^E_e)]=0$, $\operatorname{div}_y(Ey+\chi^E_e)=0$;
(2) in the fluid cell $Y_i$: $-\mu\Delta_y\chi^E_i+\nabla_y\pi^E=0$, $\operatorname{div}_y(Ey+\chi^E_i)=0$;
(3) on the material interface $\Gamma=\partial Y_i$: $\chi^E_e=\chi^E_i$, $\ [C(E+D_y\chi^E_e)]\cdot n=[2\mu D_y\chi^E_i-\pi^E I]\cdot n$;
(4) $(Ey+\chi^E)$ is $Y$–periodic, with zero-mean normalization on $\chi^E_e$ and $\chi^E_i$.
Then $C^{\hom}E:E=\int_{Y_e}(E+D_y\chi^E_e):C(E+D_y\chi^E_e)\,dy+\int_{Y_i}2\mu\,D_y\chi^E_i:D_y\chi^E_i\,dy$.
[Error] The cell equation here may not be accurate.
correctness: True
Proof. We outline a fully rigorous justification in six steps. Precise references to standard lemmas are given in parentheses.
1. Variational formulation and uniform a priori bounds. Define
$V_\varepsilon=\{v\in H^1(\Omega_\varepsilon;\mathbb{R}^d):v$ satisfies the rigid-motion normalization on $\partial\Omega\}$, $\quad Q_\varepsilon=\{q\in L^2(D_\varepsilon):\int_{D_\varepsilon}q=0\}$.
[Error] The test-function space may not be accurate, since the divergence constraint is not incorporated into it.
Set the bilinear form
$a_\varepsilon((u,p),(v,q))=\int_{\Omega_\varepsilon}CD(u):D(v)+\int_{D_\varepsilon}2\mu D(u):D(v)-\int_{D_\varepsilon}p\operatorname{div}v-\int_{D_\varepsilon}q\operatorname{div}u$
and the linear form $\ell(v)=\langle g,v\rangle_{H^{-1/2},H^{1/2}}$. By the uniform Korn–Poincaré inequality on $\Omega_\varepsilon$ (Lemma A.1) and the Stokes inf–sup estimate on $D_\varepsilon$ (Lemma A.2), $a_\varepsilon$ is coercive/inf–sup uniformly in $\varepsilon$. Hence there is a unique solution $(u_\varepsilon,p_\varepsilon)\in V_\varepsilon\times Q_\varepsilon$ of
$a_\varepsilon((u_\varepsilon,p_\varepsilon),(v,q))=\ell(v)$ for all $(v,q)\in V_\varepsilon\times Q_\varepsilon$,
and one obtains the uniform bound $\|u_\varepsilon\|_{H^1(\Omega_\varepsilon)}+\|p_\varepsilon\|_{L^2(D_\varepsilon)}\le C\|g\|_{H^{-1/2}_R(\partial\Omega)}$.
[Vague] This bound needs a more detailed derivation.
2. Two-scale compactness. Extend $u_\varepsilon$ by zero into $D_\varepsilon$ and $p_\varepsilon$ by zero into $\Omega_\varepsilon$; still denote the extensions by $\hat u_\varepsilon\in H^1(\Omega)$ and $\hat p_\varepsilon\in L^2(\Omega)$. From the uniform bound and the periodic-unfolding or classical two-scale compactness theorems (Lemmas A.3, A.4) we extract a subsequence and limits
$u_0\in H^1(\Omega;\mathbb{R}^d)$, $\ u_1\in L^2(\Omega;H^1_{\mathrm{per}}(Y)^d)$, $\ p_1\in L^2(\Omega\times Y_i)$
such that
$\hat u_\varepsilon\rightharpoonup u_0$ in $H^1(\Omega)$, $\quad\hat u_\varepsilon\to u_0$ in $L^2(\Omega)$, $\quad\nabla\hat u_\varepsilon\xrightarrow{\text{two-scale}}\nabla_x u_0(x)+\nabla_y u_1(x,y)$, $\quad\hat p_\varepsilon\xrightarrow{\text{two-scale}}p_1(x,y)$ in $\Omega\times Y_i$.
One further shows $u_1(x,\cdot)=0$ in $Y_e$ and $p_1=0$ in $Y_e$ by uniqueness of two-scale limits on each subphase.
[Vague] The convergences, especially the two-scale convergence, need a more detailed proof.
3. Identification of the periodic cell problem. Take test functions of the form
$v_\varepsilon(x)=\varphi(x)+\varepsilon\psi(x,x/\varepsilon)$, $\quad q_\varepsilon(x)=\varepsilon\theta(x,x/\varepsilon)$,
with $\varphi\in C^\infty_c(\Omega;\mathbb{R}^d)$, $\psi\in C^\infty_c(\Omega;C^\infty_{\mathrm{per}}(Y)^d)$, $\theta\in C^\infty_c(\Omega;C^\infty_{\mathrm{per}}(Y_i))$, satisfying $\operatorname{div}_y\psi=0$ in $Y_i$ and $\psi=0$ in $Y_e$. Inserting $(v_\varepsilon,q_\varepsilon)$ into the variational equation and passing to the limit by two-scale convergence (unfolding arguments in Lemma A.4) yields, for a.e. $x\in\Omega$, the local transmission problem in $Y$ as stated in the conjecture. This uniquely characterizes $(\chi^E_e,\chi^E_i,\pi^E)$ for $E=D_xu_0(x)$.
4. Homogenized tensor and macroscopic equation. By choosing in the cell system the test function $\theta=\chi^E$, one derives the energy representation
$C^{\hom}E:E=\int_{Y_e}(E+D_y\chi^E_e):C(E+D_y\chi^E_e)\,dy+\int_{Y_i}2\mu\,D_y\chi^E_i:D_y\chi^E_i\,dy$.
On the other hand, testing the original problem against $\varphi(x)$ alone and passing to the limit shows
$\int_\Omega C^{\hom}D_xu_0:D_x\varphi=\langle g,\varphi\rangle$ for all $\varphi\in H^1_R(\Omega;\mathbb{R}^d)$,
hence $u_0$ solves $-\operatorname{div}_x[C^{\hom}D_xu_0]=0$ in $\Omega$, $C^{\hom}D_xu_0\cdot n=g$ on $\partial\Omega$.
[Vague] Here the agent obtains the homogenized equation, but whether this conclusion is correct requires further verification.
5.
Boundary-layer estimates. By constructing $\varepsilon$–tubular cut-off functions near $\partial D_\varepsilon$ and $\partial\Omega$ (supported in $O(\varepsilon)$–neighborhoods) and using the uniform $H^1$–bound together with trace and Poincaré inequalities, one shows that all interface and boundary-layer remainders vanish as $o(1)$ when $\varepsilon\to0$ (see Lemma A.5 for details).
[Vague] The conditions for applying these results need to be verified.
6. Error estimate and strong convergence. Define the two-scale corrector
$u^{\mathrm{app}}_\varepsilon(x)=u_0(x)+\varepsilon\chi^{D_xu_0(x)}(x/\varepsilon)$ in $\Omega_\varepsilon$,
extended in each $D_\varepsilon$ by $Ey+\chi^E_i(y)$. A Céa-lemma-type argument in the perforated domain (again relying on uniform Korn and Stokes inf–sup, Lemmas A.1–A.2) yields, for some $\alpha\in(0,1)$,
$\|u_\varepsilon-u^{\mathrm{app}}_\varepsilon\|_{H^1(\Omega_\varepsilon)}\le C\varepsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}$.
From this one deduces $u_\varepsilon\to u_0$ strongly in $L^2(\Omega)$ (and even in $H^1(\Omega_\varepsilon)$ up to the corrector). This completes a fully rigorous derivation of the homogenized elastic limit and the explicit formula for $C^{\hom}$.
[Error] The hypotheses of the results invoked need to be verified.
comment: None
type: lemma
Lemma 57. Conjecture 4. ($\varepsilon$–Rate of Convergence in $H^1$–Norm, precise version) Let $\Omega\subset\mathbb{R}^d$ ($d\ge2$) be a bounded $C^{1,1}$ domain. Let $D_\varepsilon\subset\Omega$ be the standard $\varepsilon$–periodic perforation generated by a reference inclusion $\omega\subset Y=(-\tfrac12,\tfrac12)^d$ with $\operatorname{dist}(\omega,\partial Y)>0$, and set $\Omega_\varepsilon=\Omega\setminus D_\varepsilon$. In $\Omega_\varepsilon$ solve the transmission problem
$-\operatorname{div}[CD(u_\varepsilon)]=0$ in $\Omega_\varepsilon$, $\quad-\mu\Delta u_\varepsilon+\nabla p_\varepsilon=0$, $\operatorname{div}u_\varepsilon=0$ in $D_\varepsilon$,
with continuity of displacement and normal stress across $\partial D_\varepsilon$, Neumann data $CD(u_\varepsilon)\cdot n=g\in H^{-1/2}_R(\partial\Omega)$ on $\partial\Omega$, and the rigid-motion normalization $\int_{\Omega_\varepsilon}u_\varepsilon\,dx=0$. Let $u_0\in H^1_R(\Omega)$ be the unique solution of the homogenized Lamé system
$-\operatorname{div}[C^{\hom}D(u_0)]=0$ in $\Omega$, $\quad C^{\hom}D(u_0)\cdot n=g$ on $\partial\Omega$,
and assume in addition that $u_0\in H^2(\Omega)$ with the a priori bound $\|u_0\|_{H^2(\Omega)}\le C\|g\|_{H^{-1/2}_R(\partial\Omega)}$. For each symmetric $E\in\mathbb{R}^{d\times d}$ let $(\chi^E_e,\chi^E_i,\pi^E)$ be the unique periodic cell-transmission corrector from Lemma 3, and assume the map $E\mapsto\chi^E$ enjoys the uniform $C^1$–dependence
property $\sup_{|E|\le M}\|\partial_E\chi^E\|_{H^1_{\mathrm{per}}(Y_e\cup Y_i)}\le C(M)$. Define the two-scale approximation on $\Omega_\varepsilon$ by
$U_\varepsilon(x)=u_0(x)+\varepsilon\chi^{D_xu_0(x)}\!\left(\tfrac{x}{\varepsilon}\right)$,
extended by zero in each inclusion. Then there exist $\alpha\in(0,1)$ and $C>0$, independent of $\varepsilon$ and $g$, such that for all sufficiently small $\varepsilon>0$,
$\|u_\varepsilon-U_\varepsilon\|_{H^1(\Omega_\varepsilon)}\le C\varepsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}$.
correctness: True
Proof. We split the argument into four parts: (I) preliminaries and parametric regularity; (II) definition of the two-scale ansatz and residual decomposition; (III) estimate of the bulk and boundary-layer consistency errors; (IV) energy estimate and conclusion.
[Correct] This decomposition of the problem is reasonable and feasible.
I. Preliminaries and parametric regularity. 1. By standard Neumann-problem regularity on a $C^{1,1}$ domain (cf. Grisvard), the homogenized solution $u_0$ satisfies $\|u_0\|_{H^2(\Omega)}\le C\|g\|_{H^{-1/2}_R(\partial\Omega)}$.
[Error] The regularity theory cannot be applied directly to this problem; more proof details are required.
2. From Lemma 3 (the periodic cell-transmission problem) and the uniform positive definiteness of $C$ and $\mu$, one proves by the implicit-function theorem (or by differentiating the variational formulation in the matrix $E$) that the mapping $E\in\mathrm{Sym}(\mathbb{R}^{d\times d})\mapsto\chi^E\in H^1_{\mathrm{per}}(Y_e\cup Y_i;\mathbb{R}^d)$ is $C^1$ on bounded sets, with a uniform bound on $\partial_E\chi^E$. In particular, for any smooth strain field $E(x)=D_xu_0(x)$, one has by the chain rule $\nabla_x\chi^{E(x)}(y)=(\partial_E\chi^E)(y)[D^2_xu_0(x)]$ and hence $\|\nabla_x\chi^{D_xu_0(\cdot)}(\cdot/\varepsilon)\|_{L^2(\Omega_\varepsilon)}\le C\|u_0\|_{H^2(\Omega)}$.
II. Two-scale ansatz and residual decomposition. 3. Define the global approximation $U_\varepsilon(x)=u_0(x)+\varepsilon\chi^{D_xu_0(x)}(x/\varepsilon)$ for $x\in\Omega_\varepsilon$, and extend $U_\varepsilon$ by zero inside each $D_\varepsilon$. By the $H^1$-trace lemma and the fact that $\chi^E$ has zero average in the matrix cell $Y_e$, one checks $U_\varepsilon\in H^1(\Omega_\varepsilon;\mathbb{R}^d)$.
[Vague] More verification is needed; the results above are not entirely correct, though the approach is right.
4. Denote the error $w_\varepsilon=u_\varepsilon-U_\varepsilon$.
Testing the variational formulation of $u_\varepsilon$ against any $v\in H^1(\Omega_\varepsilon)$ and subtracting the corresponding form for $U_\varepsilon$ yields the residual functional $R_\varepsilon(v)=\langle g,v\rangle_{\partial\Omega}-a_\varepsilon(U_\varepsilon,v)$, so that $a_\varepsilon(w_\varepsilon,v)=R_\varepsilon(v)$ for all $v\in H^1(\Omega_\varepsilon)$.
III. Consistency error estimates. 5. Decompose $R_\varepsilon=R^{\mathrm{bulk}}_\varepsilon+R^{\mathrm{bl}}_\varepsilon$.
5.1 Bulk term. A classical two-scale expansion and the cell-problem identities show that all $O(1)$ terms cancel, leaving $|R^{\mathrm{bulk}}_\varepsilon(v)|\le C\varepsilon\|u_0\|_{H^2(\Omega)}\|v\|_{H^1(\Omega_\varepsilon)}$. The key ingredients are the chain-rule bound on $\nabla_x\chi$ and the parametric-regularity control of $\partial_E\chi$.
5.2 Boundary-layer term. Near each perforation $\partial D_\varepsilon$ one constructs a local boundary-layer corrector $\psi_{\varepsilon,k}$ supported in a tubular neighborhood of width $\ell(\varepsilon)=\varepsilon^\tau$ ($0<\tau<1$) around $\partial D_\varepsilon$ by solving the linear transmission problem in an $\varepsilon$–rescaled half-space; classical layer-potential and Agmon-type estimates (see e.g. Allaire–Amar '99, Section 4) yield exponential decay of $\psi_{\varepsilon,k}$ away from $\partial D_\varepsilon$. Gluing these cut-off boundary layers produces a global corrector whose contribution can be estimated as $|R^{\mathrm{bl}}_\varepsilon(v)|\le Ce^{-c\varepsilon^{-\tau}}\varepsilon^{-d/2}\|u_0\|_{H^2(\Omega)}\|v\|_{H^1(\Omega_\varepsilon)}$; hence, by choosing $\tau$ sufficiently small, there exists $\alpha\in(0,1)$ so that for all small $\varepsilon$, $|R^{\mathrm{bl}}_\varepsilon(v)|\le C\varepsilon^\alpha\|u_0\|_{H^2(\Omega)}\|v\|_{H^1(\Omega_\varepsilon)}$.
6. Combining these yields the $H^{-1}$–bound $\|R_\varepsilon\|_{H^{-1}(\Omega_\varepsilon)}=\sup_{0\neq v\in H^1}|R_\varepsilon(v)|/\|v\|_{H^1}\le C\varepsilon^\alpha\|u_0\|_{H^2(\Omega)}\le C'\varepsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}$.
[Vague] The derivation here urgently needs more detail.
IV. Energy estimate and conclusion. 7. By Korn's inequality on perforated domains (Lemma 1) and the uniform ellipticity of $a_\varepsilon$, one has $a_\varepsilon(v,v)\ge c\|v\|^2_{H^1(\Omega_\varepsilon)}$ for all $v\in H^1(\Omega_\varepsilon)$. Taking $v=w_\varepsilon$ in the variational identity gives
$c\|w_\varepsilon\|^2_{H^1(\Omega_\varepsilon)}\le a_\varepsilon(w_\varepsilon,w_\varepsilon)=R_\varepsilon(w_\varepsilon)\le\|R_\varepsilon\|_{H^{-1}}\|w_\varepsilon\|_{H^1}$,
whence $\|w_\varepsilon\|_{H^1(\Omega_\varepsilon)}\le C\|R_\varepsilon\|_{H^{-1}}\le C\varepsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}$.
[Vague] The derivation here is not detailed enough, and some of the conclusions used above are incorrect. This completes
the proof of the $\varepsilon^\alpha$–rate in the $H^1$–norm.
comment: None
type: lemma
Lemma 58. Conjecture 5. (Well-posedness of the periodic transmission cell problem and strict ellipticity of the homogenized tensor) Let $Y=(-\tfrac12,\tfrac12)^d$ be the reference cell, decomposed into two disjoint Lipschitz subdomains $Y_e$ (elastic) and $Y_i$ (incompressible fluid), with common interface $\Gamma$. Let $C(y):\mathbb{R}^{d\times d}_{\mathrm{sym}}\to\mathbb{R}^{d\times d}_{\mathrm{sym}}$ be a uniformly positive-definite fourth-order elasticity tensor on $Y_e$ and $\mu>0$ a constant viscosity in $Y_i$. For each given $E\in\mathbb{R}^{d\times d}_{\mathrm{sym}}$ we seek $\chi_e\in H^1_{\mathrm{per}}(Y_e;\mathbb{R}^d)$, $\chi_i\in H^1_{\mathrm{per}}(Y_i;\mathbb{R}^d)$, $p_e\in L^2_{\mathrm{per}}(Y_e)/\mathbb{R}$, $p_i\in L^2_{\mathrm{per}}(Y_i)/\mathbb{R}$ such that the following mixed variational problem holds:
1. (Bulk equilibrium and divergence constraints) For all $(v_e,v_i)\in H^1_{\mathrm{per}}(Y_e;\mathbb{R}^d)\times H^1_{\mathrm{per}}(Y_i;\mathbb{R}^d)$,
$\int_{Y_e}C(y)(E+D_y\chi_e):D_yv_e\,dy+\int_{Y_i}2\mu\,D_y\chi_i:D_yv_i\,dy-\int_{Y_e}p_e\operatorname{div}_yv_e\,dy-\int_{Y_i}p_i\operatorname{div}_yv_i\,dy=-\int_{Y_e}C(y)E:D_yv_e\,dy-\int_{Y_i}2\mu E:D_yv_i\,dy$.
2. (Incompressibility constraints) For all $(q_e,q_i)\in L^2_{\mathrm{per}}(Y_e)/\mathbb{R}\times L^2_{\mathrm{per}}(Y_i)/\mathbb{R}$,
$\int_{Y_e}q_e\operatorname{div}_y(Ey+\chi_e)\,dy+\int_{Y_i}q_i\operatorname{div}_y(Ey+\chi_i)\,dy=0$.
3. (Transmission and periodicity) $\chi_e$ and $\chi_i$ are $Y$–periodic, coincide trace-wise on $\Gamma$, and satisfy the zero-mean normalizations $\int_{Y_e}\chi_e\,dy=0$, $\int_{Y_i}\chi_i\,dy=0$.
Define the homogenized tensor $C^{\hom}$ by: for all $E\in\mathbb{R}^{d\times d}_{\mathrm{sym}}$,
$C^{\hom}E:E=\int_{Y_e}(E+D_y\chi_e):C(y)(E+D_y\chi_e)\,dy+2\mu\int_{Y_i}D_y\chi_i:D_y\chi_i\,dy$.
Then: for every $E\in\mathbb{R}^{d\times d}_{\mathrm{sym}}$ there exists a unique quadruple $(\chi_e,\chi_i,p_e,p_i)$ solving the above mixed problem; the mapping $E\mapsto C^{\hom}E$ is linear and the tensor $C^{\hom}$ is symmetric; and there is $\alpha>0$, depending only on the coercivity constants of $C$, on $\mu$, and on the geometry of $Y_e,Y_i$, such that $C^{\hom}E:E\ge\alpha|E|^2$ for all nonzero $E$.
correctness: True
Proof. We recast the cell problem as a mixed variational problem on the broken domain $Y_e\cup Y_i$, with two pressure fields enforcing incompressibility in each subdomain simultaneously.
1. Function spaces and bilinear forms. Define $V_e=H^1_{\mathrm{per}}(Y_e;\mathbb{R}^d)$, $V_i=H^1_{\mathrm{per}}(Y_i;\mathbb{R}^d)$, $Q_e=L^2_{\mathrm{per}}(Y_e)/\mathbb{R}$, $Q_i=L^2_{\mathrm{per}}(Y_i)/\mathbb{R}$. Let $V=\{(v_e,v_i)\in V_e\times V_i:\operatorname{trace}v_e=\operatorname{trace}v_i$ on $\Gamma$, $\int_{Y_e}v_e\,dy+\int_{Y_i}v_i\,dy=0\}$.
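The well-posedness argument below hinges on Brezzi's saddle-point theorem. For reference (this abstract statement is standard and not part of the annotated proof), it reads:

```latex
% Brezzi's theorem (abstract saddle-point problem): find (u,p) in V x Q with
%   a(u,v) + b(v,p) = l(v)  for all v in V,
%   b(u,q)          = m(q)  for all q in Q,
% where a, b are continuous bilinear forms. Assume coercivity on the kernel,
\[
a(v,v)\ \ge\ c\,\|v\|_V^2
\qquad \forall\, v\in K:=\{v\in V:\ b(v,q)=0\ \ \forall q\in Q\},
\]
% and the inf-sup (LBB) condition,
\[
\inf_{0\neq q\in Q}\ \sup_{0\neq v\in V}\ \frac{b(v,q)}{\|v\|_V\,\|q\|_Q}\ \ge\ \beta>0 .
\]
% Then the problem has a unique solution (u,p), with the stability bound
\[
\|u\|_V+\|p\|_Q\ \le\ C\left(\|l\|_{V'}+\|m\|_{Q'}\right).
\]
```

Verifying these two hypotheses for the broken-domain forms $a$ and $b$ is exactly what Step 2 of the proof attempts.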
On $V\times(Q_e\times Q_i)$ define the bilinear forms
$a((u_e,u_i),(v_e,v_i))=\int_{Y_e}C(y)\,Du_e:Dv_e\,dy+2\mu\int_{Y_i}Du_i:Dv_i\,dy$,
$b((v_e,v_i);q_e,q_i)=-\int_{Y_e}q_e\operatorname{div}v_e\,dy-\int_{Y_i}q_i\operatorname{div}v_i\,dy$,
and the linear functional $\ell((v_e,v_i))=-\int_{Y_e}C(y)E:Dv_e\,dy-2\mu\int_{Y_i}E:Dv_i\,dy$. The mixed problem is: find $(\chi_e,\chi_i)\in V$ and $(p_e,p_i)\in Q_e\times Q_i$ such that
$a((\chi_e,\chi_i),(v_e,v_i))+b((v_e,v_i);p_e,p_i)=\ell((v_e,v_i))$ for all $(v_e,v_i)\in V$,
$b((\chi_e,\chi_i);q_e,q_i)+\int_{Y_e}q_e\operatorname{div}(Ey)\,dy+\int_{Y_i}q_i\operatorname{div}(Ey)\,dy=0$ for all $(q_e,q_i)\in Q_e\times Q_i$.
[Correct] These bilinear forms are correctly constructed.
2. Verification of Brezzi's conditions.
(i) Coercivity on the kernel of $b$. Let $K=\{(u_e,u_i)\in V:\operatorname{div}u_e=\operatorname{div}u_i\equiv0$ a.e.$\}$. Then by Korn's inequality on each connected Lipschitz subdomain and the uniform ellipticity of $C$, there is $c>0$ so that $a((u_e,u_i),(u_e,u_i))\ge c(\|u_e\|^2_{H^1(Y_e)}+\|u_i\|^2_{H^1(Y_i)})$ for all $(u_e,u_i)\in K$.
[Error] The claim "$\operatorname{div}u_e\equiv0$" is not correct.
(ii) Inf–sup condition. We must show that for all $(q_e,q_i)\neq(0,0)$ there exists $(v_e,v_i)\in V$ with $b((v_e,v_i);q_e,q_i)\ge\beta(\|v_e\|_{H^1(Y_e)}+\|v_i\|_{H^1(Y_i)})(\|q_e\|+\|q_i\|)$. Since $Y_e$ and $Y_i$ are connected Lipschitz domains, the classical Stokes inf–sup estimate on each subdomain guarantees $\beta_e,\beta_i>0$ and divergence test functions for $(v_e,q_e)$ and $(v_i,q_i)$. Gluing these two vector fields into a single $(v_e,v_i)\in V$ (by matching traces on $\Gamma$, which is possible via a smooth partition of unity in a neighborhood of $\Gamma$) yields a uniform $\beta>0$ for the combined problem.
By Brezzi's theory the mixed problem has a unique solution $(\chi_e,\chi_i,p_e,p_i)$. Standard interior and boundary regularity arguments on each Lipschitz subdomain upgrade this to the strong form and enforce the continuity of normal tractions and of displacements on $\Gamma$.
3. Linearity and symmetry. Linearity of $(\chi_e,\chi_i)$ and of $(p_e,p_i)$ in the data $E$ is immediate from the linearity of $a$, $b$, $\ell$. The quadratic form $E\mapsto C^{\hom}E:E$ is obtained by inserting
$(\chi_e,\chi_i)$ into the energy identity
$C^{\hom}E:E=a((Ey+\chi_e,\,Ey+\chi_i),(Ey+\chi_e,\,Ey+\chi_i))$,
which is manifestly symmetric and gives $C^{\hom}\in\mathbb{R}^{d\times d}_{\mathrm{sym}}\otimes\mathbb{R}^{d\times d}_{\mathrm{sym}}$.
[Vague] The derivation here is not detailed enough, and some of the conclusions used above are incorrect.
4. Strict ellipticity. Coercivity on $K$ plus the zero-mean and periodicity constraints imply that if $C^{\hom}E:E=0$ then $\chi_e=\chi_i=0$ in $H^1$, and hence $Ey$ must itself be a rigid motion on each connected cell. Periodicity forbids any nonzero skew part, forcing $E=0$. Moreover, the coercivity constant gives the uniform bound $C^{\hom}E:E\ge\alpha|E|^2$, with $\alpha$ depending only on the minimum eigenvalue of $C$, on $\mu$, and on the geometric constants of the domains. Thus assertions (a)–(c) hold and the proof is complete.
comment: None
type: lemma
Lemma 59. Conjecture 6. (Variational minimization principle for the periodic cell correctors) With the notation of Conjecture 1, define the closed affine space of admissible correctors
$\mathcal{A}_E=\{(u_e,u_i)\in H^1_{\mathrm{per}}(Y_e;\mathbb{R}^d)\times H^1_{\mathrm{per}}(Y_i;\mathbb{R}^d):\int_{Y_e}u_e=0,\ \int_{Y_i}u_i=0,\ \operatorname{div}_y(Ey+u_e)=0$ in $Y_e$, $\operatorname{div}_y(Ey+u_i)=0$ in $Y_i$, $\operatorname{trace}(u_e)=\operatorname{trace}(u_i)$ on $\Gamma\}$.
Then the unique solution $(\chi_e,\chi_i)$ of the cell problem is the unique minimizer of the strictly convex functional
$J_E(u_e,u_i)=\tfrac12\int_{Y_e}[E+D_yu_e]:C[E+D_yu_e]\,dy+\mu\int_{Y_i}D_yu_i:D_yu_i\,dy$
over $\mathcal{A}_E$. Moreover, the homogenized energy is the minimum value: $C^{\hom}E:E=\min_{(u_e,u_i)\in\mathcal{A}_E}J_E(u_e,u_i)$.
correctness: True
Proof. 1. Convexity and coercivity. The integrand in $J_E$ is strictly convex in the pair $(D_yu_e,D_yu_i)$ by positivity of $C$ and $\mu$, and by Korn's inequality on each subdomain it controls the $H^1$-norm of $(u_e,u_i)$ on $\mathcal{A}_E$.
[Error] The conditions "$\int_{Y_e}u_e=0$; $\operatorname{div}_y(Ey+u_e)=0$ in $Y_e$" above are incorrect.
2. Existence of a minimizer. Since $\mathcal{A}_E$ is a closed affine subspace of $H^1_{\mathrm{per}}(Y_e)\oplus H^1_{\mathrm{per}}(Y_i)$, the direct method applies: any minimizing sequence is bounded in $H^1$, hence converges weakly to some $(\chi_e,\chi_i)\in\mathcal{A}_E$, and by weak lower semicontinuity of $J_E$ one obtains a minimizer.
3. Euler–Lagrange (stationarity) $\Rightarrow$ cell equations.
One computes the first variation of $J_E$ under admissible perturbations $(v_e,v_i)$ preserving the divergence and trace-continuity constraints. Integration by parts produces precisely the weak form of the transmission PDEs (1)–(3), with an accompanying Lagrange multiplier $p\in L^2_{\mathrm{per}}(Y_i)/\mathbb{R}$ enforcing $\operatorname{div}_y(Ey+\chi_i)=0$ in $Y_i$. This shows that any minimizer solves the strong cell problem.
4. Uniqueness. Strict convexity of $J_E$ on the linear space parallel to $\mathcal{A}_E$ implies the minimizer is unique.
[Error] The conclusions applied are not entirely correct, and the process also lacks details.
5. Identification of $C^{\hom}$. By definition the minimum value is $J_E(\chi_e,\chi_i)$, which coincides with the representation of $C^{\hom}E:E$ in Conjecture 1. Hence all statements are proved.
comment: None
type: theorem
Theorem 4. The problem construction is the following. A real pair $(\lambda,\mu)$ is called an admissible Lamé parameter pair if it satisfies
$\mu>0$ and $d\lambda+2\mu>0$. (1)
The static elasticity system (Lamé system) corresponding to the Lamé parameter pair $(\lambda,\mu)$ is
$\mathcal{L}_{\lambda,\mu}u:=\mu\Delta u+(\lambda+\mu)\nabla\operatorname{div}u$, (2)
where the displacement field is denoted $u=(u_1,\dots,u_d)$ and its divergence is $\operatorname{div}u=\sum_{i=1}^d\frac{\partial u_i}{\partial x_i}$. The admissibility condition ensures the ellipticity of the Lamé operator, and physical laws guarantee that natural materials always satisfy this condition. The Lamé operator can be expressed as $\nabla\cdot\sigma(u)$, where the stress tensor
is
$\sigma(u):=\lambda(\nabla\cdot u)\,\mathrm{Id}+2\mu D(u)$, (3)
$D(u)=\tfrac12(\nabla+\nabla^T)u=\big(\tfrac12(\partial_iu_j+\partial_ju_i)\big)_{ij}$. (4)
Here $\mathrm{Id}$ is the identity matrix of order $d$, and $D$ denotes the symmetrized differential operator. The normal derivative (boundary traction) on the boundary of a region $E$ is defined as
$\frac{\partial u}{\partial\nu_{\lambda,\mu}}\Big|_{\partial E}:=\sigma(u)N=\lambda(\operatorname{div}u)N+2\mu D(u)N$ on $\partial E$, (5)
where $N$ is the unit outward normal vector on the boundary. We use the standard Sobolev space $H^1(\Omega)$ and its trace space $H^{1/2}(\partial E)$ on the boundary, with dual space $H^{-1/2}(\partial E)$. Consider the space of rigid body motions in $\mathbb{R}^d$:
$\mathcal{R}:=\{r=(r_1,\dots,r_d)^T:D(r)=0$ in $\mathbb{R}^d\}$. (6)
The dimension of $\mathcal{R}$ is $\frac{d(d+1)}{2}$, spanned by the basis vectors $e_1,\dots,e_d$ and $x_je_i-x_ie_j$ for $1\le i<j\le d$, where $e_i$ denotes the standard basis vector. These basis vectors are denoted $r_j$, $j=1,\dots,\frac{d(d+1)}{2}$. Define the space orthogonal to rigid body motions:
$H^{-1/2}_R(\partial D_\epsilon):=\{\phi\in H^{-1/2}(\partial D_\epsilon):(\phi,r)_{(H^{-1/2}(\partial D_{\epsilon_i}),H^{1/2}(\partial D_{\epsilon_i}))}=0,\ \forall r\in\mathcal{R}$ and $1\le i\le N\}$. (7)
Similarly, define $H^{1/2}_R(\partial D_\epsilon)$ and the corresponding boundary spaces. Consider the following transmission problem:
$\mathcal{L}_{\lambda,\mu}u_\epsilon=0$ in $\Omega\setminus D_\epsilon$,
$\mathcal{L}_{\tilde\lambda,\tilde\mu}u_\epsilon=0$ in $D_\epsilon$,
$u_\epsilon|_-=u_\epsilon|_+$ and $\frac{\partial u_\epsilon}{\partial\nu_{(\tilde\lambda,\tilde\mu)}}\Big|_-=\frac{\partial u_\epsilon}{\partial\nu_{(\lambda,\mu)}}\Big|_+$ on $\partial D_\epsilon$,
$\frac{\partial u_\epsilon}{\partial\nu_{(\lambda,\mu)}}\Big|_{\partial\Omega}=g\in H^{-1/2}_R(\partial\Omega)$ and $u_\epsilon|_{\partial\Omega}\in H^{1/2}_R(\partial\Omega)$. (8)
This is equivalent to
$\mathcal{L}_{\lambda(x),\mu(x)}u_\epsilon=\nabla\cdot[\lambda(x)(\nabla\cdot u_\epsilon)\,\mathrm{Id}+2\mu(x)D(u_\epsilon)]=0$ in $\Omega$,
$\frac{\partial u_\epsilon}{\partial\nu_{\lambda,\mu}}\Big|_{\partial\Omega}=g\in H^{-1/2}_R(\partial\Omega)$, $u_\epsilon|_{\partial\Omega}\in H^{1/2}_R(\partial\Omega)$, (9)
where $\lambda(x)=\lambda I_{\Omega\setminus D_\epsilon}+\tilde\lambda I_{D_\epsilon}$ and $\mu(x)=\mu I_{\Omega\setminus D_\epsilon}+\tilde\mu I_{D_\epsilon}$. Suppose the solution of equation (9) is $u_\epsilon$; its existence and uniqueness are guaranteed by the ellipticity condition (1) and the Lax–Milgram theorem. Finally, the homogenization problem to be solved is the following. When $\tilde\lambda\to\infty$ and $\tilde\mu$ is fixed, one obtains the coupled system in the incompressible-inclusion limit:
$\mathcal{L}_{\lambda,\mu}u_\epsilon=0$ in $\Omega\setminus D_\epsilon$,
$\mathcal{L}_{\tilde\mu}(u_\epsilon,p_\epsilon)=0$ and $\operatorname{div}u_\epsilon=0$ in $D_\epsilon$,
$u_\epsilon|_-=u_\epsilon|_+$ and $\frac{\partial(u_\epsilon,p_\epsilon)}{\partial\nu_{(\infty,\tilde\mu)}}\Big|_-=\frac{\partial u_\epsilon}{\partial\nu_{(\lambda,\mu)}}\Big|_+$ on $\partial D_\epsilon$,
$\frac{\partial u_\epsilon}{\partial\nu_{(\lambda,\mu)}}\Big|_{\partial\Omega}=g\in H^{-1/2}_R(\partial\Omega)$ and $u_\epsilon|_{\partial\Omega}\in H^{1/2}_R(\partial\Omega)$, (10)
where $\mathcal{L}_{\tilde\mu}(u_\epsilon,p_\epsilon)=\tilde\mu\Delta u_\epsilon+\nabla p_\epsilon$ is the Stokes operator with viscosity coefficient $\tilde\mu$, and $p_\epsilon$ is the pressure field.
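Definitions (1), (3), and (4) are easy to sanity-check numerically. The following sketch (our own illustration; the helper names are not from the source) evaluates the symmetrized gradient, the stress tensor, and the admissibility condition for a constant displacement gradient:

```python
def sym_grad(G):
    """D(u)_ij = (d_i u_j + d_j u_i)/2, from the displacement gradient G = (G[i][j])."""
    d = len(G)
    return [[0.5 * (G[i][j] + G[j][i]) for j in range(d)] for i in range(d)]

def stress(G, lam, mu):
    """sigma(u) = lam * (div u) * Id + 2 * mu * D(u), evaluated entrywise."""
    d = len(G)
    div_u = sum(G[i][i] for i in range(d))  # trace of the gradient
    D = sym_grad(G)
    return [[lam * div_u * (1.0 if i == j else 0.0) + 2.0 * mu * D[i][j]
             for j in range(d)] for i in range(d)]

def is_admissible(lam, mu, d):
    """Admissible Lame pair: mu > 0 and d*lam + 2*mu > 0 (ellipticity condition (1))."""
    return mu > 0 and d * lam + 2 * mu > 0
```

For example, the dilation $u(x)=x$ (gradient = identity, $d=2$, $\lambda=\mu=1$) gives $\sigma=4\,\mathrm{Id}$, while an infinitesimal rotation (skew gradient) gives $D(u)=0$ and hence zero stress, consistent with rigid motions carrying no traction.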
Its normal derivative is defined as $\frac{\partial(u,p)}{\partial\nu_{(\infty,\mu)}}\Big|_-:=pN+2\mu D(u)N$. Now let $\epsilon\to0$; we are interested in the limit equation and the limit solution $u_{\lim}$. Please analyze and explore, as $\epsilon\to0$, what equation $u_\epsilon$ satisfies in the limit, and the estimate, in terms of $\epsilon$, between the original solution $u_\epsilon$ and the limit solution $u_{\lim}$.
correctness: True
Proof. Theorem 5 (Homogenized limit and $\varepsilon$–rate of convergence). Let $\Omega\subset\mathbb{R}^d$ ($d\ge2$) be a bounded $C^{1,1}$ domain, and for each $\varepsilon>0$ let $D_\varepsilon\subset\Omega$ be the $\varepsilon$–periodic array of incompressible inclusions as in Assumptions 1–2 above. Denote $\Omega_\varepsilon=\Omega\setminus D_\varepsilon$ and let $(u_\varepsilon,p_\varepsilon)$ solve the transmission problem (10). Then:
1. (Homogenized limit.) As $\varepsilon\to0$, $u_\varepsilon\rightharpoonup u_0$ weakly in $H^1(\Omega;\mathbb{R}^d)$ and $u_\varepsilon\to u_0$ strongly in $L^2(\Omega;\mathbb{R}^d)$, where $u_0\in H^1(\Omega;\mathbb{R}^d)$ is the unique solution of the homogenized Lamé system
$-\operatorname{div}[C^{\hom}D(u_0)]=0$ in $\Omega$, $\quad C^{\hom}D(u_0)\cdot n=g$ on $\partial\Omega$,
with the effective fourth-order tensor $C^{\hom}$ given by the periodic cell-transmission problem of Lemma 3.
2. (Convergence rate in $H^1$–norm.) Moreover, if in addition $u_0\in H^2(\Omega;\mathbb{R}^d)$ with $\|u_0\|_{H^2(\Omega)}\le C\|g\|_{H^{-1/2}_R(\partial\Omega)}$, then there exist $\alpha\in(0,1)$ and $C>0$ independent of $\varepsilon$ and $g$ such that for all sufficiently small $\varepsilon>0$,
$\|u_\varepsilon-U_\varepsilon\|_{H^1(\Omega_\varepsilon)}\le C\varepsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}$,
where $U_\varepsilon(x):=u_0(x)+\varepsilon\chi^{D_xu_0(x)}(x/\varepsilon)$ is the two-scale approximation built from the cell correctors $\chi^E$ of Lemma 3 (extended by zero inside the inclusions).
Proof. Step 1: Uniform a priori bounds. From the transmission system and the coercivity of the elastic tensor $C$ on $\Omega_\varepsilon$, plus the Korn inequality of Lemma
1, one obtains the uniform $H^1$–bound $\|u_\varepsilon\|_{H^1(\Omega_\varepsilon)}\le C\|g\|_{H^{-1/2}_R(\partial\Omega)}$. By standard extension by zero and the two-scale compactness of Lemma 2, up to a subsequence,
$u_\varepsilon\rightharpoonup u_0$ in $H^1(\Omega)$, $\quad\nabla u_\varepsilon\xrightarrow{\text{two-scale}}\nabla_xu_0+\nabla_yu_1$ in $\Omega\times Y_f$,
for some $u_0\in H^1(\Omega;\mathbb{R}^d)$ and corrector $u_1(x,y)$. A matching argument in the inclusions shows that $u_0$ satisfies the homogenized elasticity system with flux $C^{\hom}D(u_0)$ and traction $g$ on $\partial\Omega$. Uniqueness of that limit determines the full sequence: $u_\varepsilon\to u_0$ in $L^2(\Omega)$.
Step 2: Construction of the two-scale approximation. Let $\chi^E(y)=(\chi^E_e,\chi^E_i)(y)$ be the periodic cell correctors of Lemma 3 for each macroscopic strain $E$. Define the two-scale ansatz $U_\varepsilon(x)=u_0(x)+\varepsilon\chi^{D_xu_0(x)}(x/\varepsilon)$ in $\Omega_\varepsilon$, extended by zero on $D_\varepsilon$. A standard calculation (see Lemma 4) shows that $U_\varepsilon$ nearly solves the $\varepsilon$–problem up to an $O(\varepsilon^\alpha)$ remainder in the $H^{-1}$–norm.
Step 3: Energy error estimate. Set $w_\varepsilon=u_\varepsilon-U_\varepsilon$. Subtracting the weak form of the $\varepsilon$–problem for $u_\varepsilon$ from the one nearly satisfied by $U_\varepsilon$ and testing with $w_\varepsilon$ itself yields, via coercivity and the uniform Korn bound,
$\|w_\varepsilon\|^2_{H^1(\Omega_\varepsilon)}\le C\varepsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}\|w_\varepsilon\|_{H^1(\Omega_\varepsilon)}$.
Dividing through by $\|w_\varepsilon\|_{H^1(\Omega_\varepsilon)}$ gives $\|u_\varepsilon-U_\varepsilon\|_{H^1(\Omega_\varepsilon)}\le C\varepsilon^\alpha\|g\|_{H^{-1/2}_R(\partial\Omega)}$.
Step 4: Passage to the limit. Since $U_\varepsilon\to u_0$ strongly in $H^1(\Omega_\varepsilon)$ (by periodicity and $u_0\in H^2$), the triangle inequality immediately gives $u_\varepsilon\to u_0$ with the same rate. This completes the proof of both the homogenized limit and the $O(\varepsilon^\alpha)$–rate.
[Correct] The derivation ideas and approach here are reasonable and feasible, and the conclusions are in line with expectations.
[Vague] The derivation process is seriously lacking in detail.
[Error] Some of the conclusions used are incorrect, which leaves the proof imprecise and incomplete.
comment: None
Unsupervised Post-Training for Multi-Modal LLM Reasoning via GRPO
Lai Wei1,3, Yuting Li1, Chen Wang3, Yue Wang3, Linghe Kong1, Weiran Huang1,2∗, Lichao Sun4
1School of Computer Science, Shanghai Jiao Tong University
2Shanghai Innovation Institute
3Zhongguancun Academy
4Lehigh University
Abstract
Improving Multi-modal Large Language Models (MLLMs) in the post-training stage typically relies on supervised fine-tuning (SFT) or reinforcement learning (RL). However, these supervised methods require expensive and manually annotated multi-modal data, an ultimately unsustainable resource. While recent efforts have explored unsupervised post-training, their methods are complex and difficult to iterate. In this work, we are the first to investigate the use of GRPO, a stable and scalable online RL algorithm, for enabling continual self-improvement without any external supervision. We propose MM-UPT, a simple yet effective framework for unsupervised post-training of MLLMs. MM-UPT builds upon GRPO, replacing traditional reward signals with a self-rewarding mechanism based on majority voting over multiple sampled responses. Our experiments demonstrate that MM-UPT significantly improves the reasoning ability of Qwen2.5-VL-7B (e.g., 66.3% → 72.9% on MathVista, 62.9% → 68.7% on We-Math), using standard datasets without ground-truth labels. MM-UPT also outperforms prior unsupervised baselines and even approaches the results of supervised GRPO. Furthermore, we show that incorporating synthetic questions, generated solely by the MLLM itself, can boost performance as well, highlighting a promising approach for scalable self-improvement. Overall, MM-UPT offers a new paradigm for continual, autonomous enhancement of MLLMs in the absence of external supervision. Our code is available at https://github.com/waltonfuture/MM-UPT .
1 Introduction
Multi-modal Large Language Models (MLLMs) have achieved remarkable performance on a variety of vision-language tasks, ranging from image captioning to visual reasoning [19, 21, 42, 47, 69]. By combining the language understanding capabilities of large language models (LLMs) with visual perception, MLLMs can process and reason over both textual and visual information. The dominant paradigm for improving MLLMs in the post-training stage typically involves supervised fine-tuning (SFT) and reinforcement learning (RL). However, both SFT and RL rely on large volumes of high-quality and annotated multi-modal data, such as image captions, visual reasoning traces, ground truth answers, and human preference signals. As real-world tasks grow in complexity and quantity, a critical challenge emerges: curating and annotating high-quality data at scale becomes increasingly impractical. Thus, it is essential to explore new methods for improving MLLMs, such as using synthetic and unlabeled data.
∗Correspondence to Weiran Huang (weiran.huang@outlook.com).
arXiv:2505.22453v1 [cs.CL] 28 May 2025
Figure 1: Overview of the MM-UPT framework. Given an unlabeled multi-modal input, the MLLM samples multiple responses, and uses majority voting to determine the pseudo-label. The MLLM is then updated via GRPO, enabling self-improvement without external supervision.
Previous works have studied the use of MLLMs themselves to generate synthetic instruction data for self-improvement through offline training techniques like SFT and DPO [5, 36–38, 55, 64]. These approaches typically involve complex pipelines with multiple stages, such as data generation, verification, and filtering, which are hard to iterate online. Fortunately, recent
studies demonstrate notable success using online reinforcement learning (e.g., GRPO [34]) with verifiable rewards [6, 10, 28, 66] to enhance the reasoning capabilities of LLMs and MLLMs. A concurrent work, TTRL [71], further extends this line by applying GRPO to test-time scaling of LLMs. Online RL is therefore a promising route for models to continuously improve and thereby acquire novel reasoning abilities that exceed the capacity of the corresponding base models. Motivated by these insights, we aim to investigate a fundamental and open question: Can we enable MLLMs to continually and iteratively self-improve their reasoning abilities from fully unlabeled training data without any external supervision?
To explore this question, we propose MM-UPT (Multi-Modal Unsupervised Post-Training), an easy-to-implement framework for unsupervised post-training of MLLMs. As shown in Figure 1, MM-UPT builds on GRPO [34], a stable and scalable online RL method that uses group-normalized rewards instead of explicit value functions. Unlike GRPO, which relies on ground-truth labels to calculate rewards, our framework derives implicit reward signals via majority voting over multiple sampled responses. In particular, majority voting aggregates multiple responses and selects the most frequent one; it has been widely used and shown effective at improving model performance [40, 45, 71]. We therefore adopt the majority-voted answer as a pseudo-label for reward calculation in our unsupervised training framework, which encourages the model to prefer stable, high-consensus answers without any human-annotated labels or external reward models. In our experiments, we focus on the domain of multi-modal reasoning, which is widely studied and inherently challenging.
We explore two key scenarios for constructing unlabeled data, assuming that labels are not available: (1) using human-created questions without ground-truth labels, and (2) employing synthetic questions generated by AI models, which inherently lack ground-truth labels. The first scenario is simulated by masking the answers in standard training datasets. For the second scenario, we employ two MLLM-driven strategies: (a) generating new questions from in-context examples (including the original image, question, and answer), and (b) generating questions based solely on the image. We evaluate MM-UPT across a range of reasoning benchmarks and observe significant performance improvements over the base models (e.g., 66.3% → 72.9% on MathVista, 62.9% → 68.7% on We-Math using Qwen2.5-VL-7B [1]). Our method also outperforms previous baselines and is even competitive with supervised GRPO, underscoring the effectiveness of MM-UPT as a self-improving training strategy in a fully unsupervised manner. Additionally, we find that models trained on unlabeled synthetic data achieve performance competitive with those trained on the original unlabeled dataset, revealing strong potential for scalable self-improvement.
Our main contributions are summarized as follows:
• We propose MM-UPT, a novel framework for unsupervised post-training of MLLMs that enables continuous self-improvement without requiring any external supervision.
• Extensive experiments on multi-modal reasoning tasks demonstrate the effectiveness of majority voting as a pseudo-reward estimate for unsupervised training.
• We investigate the use of synthetic data generated by the MLLM itself and find that training the MLLM on such data leads to notable performance gains. This reveals a promising path toward efficient and stable self-improvement in unsupervised post-training.
2 Related Works
Self
Improvement. High-quality data obtained from human annotations has been shown to significantly boost the performance of LLMs across a wide range of tasks [10, 16, 29]. However, such high-quality annotated data may be exhausted in the future, presenting a substantial obstacle to the continual learning of advanced models. As a result, recent research has shifted toward self-improvement: leveraging data generated by the LLM itself without any external supervision [7, 14, 26, 38, 49, 56, 61, 71]. Several follow-up works also explore unsupervised self-improvement in the multi-modal domain [5, 9, 36, 55, 62, 64]. Genixer [64] first introduces a comprehensive self-improvement pipeline, including data generation and filtering, for SFT. STIC [5] and SENA [37] construct preference data pairs for DPO [33] in a fully self-supervised manner. In contrast to these approaches, which are complex and hard to iterate, the key distinction is that our method leverages online reinforcement learning with GRPO [10] at the post-training stage of MLLMs, which is more scalable and supports continuous, iterative self-improvement without reliance on any external supervision. In addition, none of these previous methods focus on multi-modal reasoning tasks, which are considered challenging for current models.
Multi-modal Reasoning. Recently, the reasoning abilities of MLLMs have become a central focus of research [13, 24, 70]. In contrast to traditional LLM-based reasoning [10, 25, 54], which primarily relies on text, multi-modal approaches must both process and interpret visual inputs, significantly increasing the complexity of tasks such as geometric problem-solving and chart interpretation [2, 27, 63]. Several works in this field have sought to collect or synthesize large-scale multi-modal reasoning data [4, 30, 35, 58].
Notably, the recent emergence of o1-like reasoning models [17] represents an initial step toward activating the slow-thinking capabilities of MLLMs, as demonstrated by several SFT-based methods, such as LLaVA-CoT [50], LLaVA-Reasoner [59], MAmmoTH-VL [11], and Mulberry [52]. Moreover, some concurrent works have further explored reinforcement learning approaches, particularly GRPO [34], in the post-training stage of MLLMs to enhance performance on multi-modal reasoning tasks [6, 28, 31, 44, 66]. While these supervised post-training methods have demonstrated promising results, our work explores a different direction by focusing on fully unsupervised post-training of MLLMs to self-improve their reasoning abilities.
3 The Framework of Multi-Modal Unsupervised Post-Training
Unlike traditional post-training techniques that require labeled data or external reward models, we propose MM-UPT (Multi-Modal Unsupervised Post-Training), a simple yet effective framework designed to operate purely on unlabeled training data. That is, the model must learn to self-improve without access to any external supervision such as ground-truth labels or additional reward models.
3.1 Problem Formulation
First, we formulate the problem of unsupervised post-training for MLLMs as follows. Given a well-trained multi-modal LLM $\pi_\theta$ and a collection of unlabeled multi-modal data $Q=\{(I_i,q_i)\}_{i=1}^N$, where $I_i$ represents an image and $q_i$ denotes a corresponding question, our goal is to improve the model's performance without access to any ground-truth answers or external supervision signals. This setting differs significantly from conventional supervised fine-tuning (SFT), reinforcement learning with verifiable rewards (RLVR), and reinforcement learning with human feedback (RLHF), which typically rely on labeled data $(I_i,q_i,y_i)$ or
https://arxiv.org/abs/2505.22453v1
human preference data $(I_i, q_i, y_i^+, y_i^-)$, where $y_i$ denotes the answer to $q_i$ and $(y_i^+, y_i^-)$ denotes a preference pair for $q_i$. In contrast, in this setting we are only allowed to operate in a fully unsupervised manner, leveraging only the model's own responses to generate training signals. This presents significant challenges, as the model must learn to assess and improve its own outputs without any external guidance.

Algorithm 1 MM-UPT: Multi-Modal Unsupervised Post-Training
1: Input: current policy $\pi_\theta$, old policy $\pi_{\theta_{old}}$, unlabeled training dataset $Q$, group size $G$, reference model $\pi_{ref}$, clip parameter $\epsilon$, KL penalty coefficient $\beta$, answer extractor $E(\cdot)$.
2: for each sample $(I, q) \sim Q$ do
3:   Sample a group of responses: $\{o_i\}_{i=1}^G \sim \pi_{\theta_{old}}(o \mid I, q)$  // Sample multiple responses
4:   Extract answers: $\hat{Y} = E(O) = \{\hat{y}_i\}_{i=1}^G$
5:   Determine the majority vote: $y^* \leftarrow \arg\max_{y \in \hat{Y}} \sum_{i=1}^G \mathbb{I}[y = \hat{y}_i]$  // Select the most frequent answer
6:   Compute pseudo-rewards: $r_i \leftarrow \mathbb{I}[\hat{y}_i = y^*]$  // Reward based on majority agreement
7:   Compute advantage estimates: $\hat{A}_i \leftarrow \frac{r_i - \mathrm{mean}(\{r_1, r_2, \ldots, r_G\})}{\mathrm{std}(\{r_1, r_2, \ldots, r_G\})}$
8:   Compute the GRPO objective:
9:   $J(\theta) \leftarrow \frac{1}{G} \sum_{i=1}^G \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \left\{ \min\left[ \gamma_{i,t}(\theta) \hat{A}_i,\ \mathrm{clip}(\gamma_{i,t}(\theta), 1-\epsilon, 1+\epsilon) \hat{A}_i \right] - \beta D_{KL}[\pi_\theta \| \pi_{ref}] \right\}$
10:  where $\gamma_{i,t}(\theta) = \frac{\pi_\theta(o_{i,t} \mid I, q, o_{i,<t})}{\pi_{\theta_{old}}(o_{i,t} \mid I, q, o_{i,<t})}$
11:  Update policy parameters: $\theta \leftarrow \theta + \nabla_\theta J(\theta)$  // Gradient ascent, since $J(\theta)$ is maximized
12:  Update old policy: $\theta_{old} \leftarrow \theta$
13: end for
14: return $\pi_\theta$

3.2 Method

To achieve this, MM-UPT introduces a self-rewarding mechanism that uses majority voting to produce pseudo-labels [45] on top of online reinforcement learning. In particular, MM-UPT is built upon the GRPO algorithm [34], which is widely used in the post-training stage of multi-modal LLMs. GRPO improves computational efficiency by eliminating the need for a separate value model; instead, it directly uses group-normalized rewards to estimate advantages.
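The label-free reward step at the heart of Algorithm 1 can be sketched in a few lines of plain Python. This is an illustrative sketch, not the paper's implementation: the list of answer strings is assumed to have already been produced by the rule-based extractor $E(\cdot)$ from the sampled responses.

```python
from collections import Counter
from statistics import mean, pstdev

def pseudo_rewards(answers):
    """Majority-vote pseudo-rewards: r_i = 1 if the extracted answer equals
    the most frequent answer y* in the group, else 0 (Algorithm 1, lines 5-6)."""
    y_star, _ = Counter(answers).most_common(1)[0]
    return [1.0 if a == y_star else 0.0 for a in answers]

def group_advantages(rewards, eps=1e-6):
    """Group-normalized advantage estimates (Algorithm 1, line 7); eps guards
    against a zero standard deviation when all rewards agree."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# G = 5 answers extracted from sampled responses; "42" wins the vote.
answers = ["42", "42", "17", "42", "9"]
rewards = pseudo_rewards(answers)   # [1.0, 1.0, 0.0, 1.0, 0.0]
advs = group_advantages(rewards)    # positive for majority answers, negative otherwise
```

Note that when every sampled response agrees, all rewards are 1, the advantages collapse to zero, and the sample contributes no learning signal.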
Specifically, for a question $q$ and its associated image $I$ from the training dataset $Q$, GRPO samples a group of responses $O = \{o_i\}_{i=1}^G$ from the old policy $\pi_{\theta_{old}}$ and then optimizes the policy model by maximizing the following objective:

$$J(\theta) = \mathbb{E}_{(q,I) \sim Q,\ \{o_i\}_{i=1}^G \sim \pi_{\theta_{old}}(O \mid q, I)} \left[ \frac{1}{G} \sum_{i=1}^G \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \left\{ \min\left[ \gamma_{i,t}(\theta) \hat{A}_{i,t},\ \mathrm{clip}(\gamma_{i,t}(\theta), 1-\epsilon, 1+\epsilon) \hat{A}_{i,t} \right] - \beta D_{KL}\left[ \pi_\theta \,\|\, \pi_{ref} \right] \right\} \right],$$

where $\gamma_{i,t}(\theta) = \frac{\pi_\theta(o_{i,t} \mid q, I, o_{i,<t})}{\pi_{\theta_{old}}(o_{i,t} \mid q, I, o_{i,<t})}$, $\pi_{ref}$ represents the reference model, and the term $D_{KL}$ introduces a KL divergence constraint that limits how far the model can deviate from this reference. The advantage estimate $\hat{A}_i$ measures how much better the response $o_i$ is than the average response, and is computed using a group of rewards $\{r_1, r_2, \ldots, r_G\}$ for the responses in $O$:

$$\hat{A}_i = \frac{r_i - \mathrm{mean}(\{r_1, r_2, \ldots, r_G\})}{\mathrm{std}(\{r_1, r_2, \ldots, r_G\})}.$$

In the standard GRPO formulation above [10], the reward is computed in a supervised manner based on labels for each response in $O = \{o_i\}_{i=1}^G$. Shifting to our unsupervised setting, where no ground-truth labels are available, one feasible approach is to construct pseudo-labels to compute the reward for GRPO. Motivated by [14, 45, 71], we use majority voting over the group of sampled responses $O$ to serve as pseudo-labels. Majority voting selects the most frequent answer among the sampled responses and has proven to be a simple yet effective technique [45, 71], making it suitable for deriving good pseudo-reward signals. Specifically, we first extract answers from the responses $O = \{o_i\}_{i=1}^G$ using a rule-based answer extractor [12] $E(\cdot)$, resulting in $\hat{Y} = E(O) = \{\hat{y}_i\}_{i=1}^G$. Then, the majority-voted answer $y^*$ is obtained by:

$$y^* = \arg\max_{y \in \hat{Y}} \sum_{i=1}^G \mathbb{I}[y = \hat{y}_i], \quad (1)$$

where $\mathbb{I}[\cdot]$ is the indicator function. The reward
$r_i$ is then determined based on $y^*$:

$$r_i = \begin{cases} 1, & \text{if } \hat{y}_i = y^*, \\ 0, & \text{otherwise}. \end{cases} \quad (2)$$

Table 1: Main results on four multi-modal mathematical reasoning benchmarks. We report accuracy (%) for each method on MathVision, MathVerse, MathVista, and We-Math. All methods are conducted on the Qwen2.5-VL-7B backbone. MM-UPT outperforms other baseline methods and is even competitive with supervised methods.

Model and Methods | Unsupervised? | Training Data | MathVision | MathVerse | MathVista | We-Math | Avg
Qwen2.5-VL-7B | - | - | 24.87 | 43.83 | 66.30 | 62.87 | 49.47
+ GRPO [34] | ✗ | Geometry3K | 28.32 | 46.40 | 69.30 | 68.85 | 53.22
+ GRPO [34] | ✗ | GeoQA | 26.15 | 46.28 | 67.50 | 66.65 | 51.65
+ GRPO [34] | ✗ | MMR1 | 29.01 | 45.03 | 71.40 | 67.24 | 53.17
+ SFT [39] | ✗ | Geometry3K | 25.92 | 43.73 | 67.90 | 64.94 | 50.63
+ SFT [39] | ✗ | GeoQA | 25.72 | 44.70 | 67.40 | 65.10 | 50.73
+ SFT [39] | ✗ | MMR1 | 26.45 | 43.53 | 63.30 | 64.20 | 49.37
+ SRLM [49] | ✓ | Geometry3K | 26.94 | 44.54 | 66.90 | 66.32 | 51.18
+ SRLM [49] | ✓ | GeoQA | 25.16 | 44.62 | 66.30 | 65.00 | 50.27
+ SRLM [49] | ✓ | MMR1 | 25.33 | 45.08 | 67.00 | 64.66 | 50.52
+ LMSI [14] | ✓ | Geometry3K | 25.10 | 43.96 | 65.50 | 64.43 | 49.75
+ LMSI [14] | ✓ | GeoQA | 25.49 | 43.50 | 66.60 | 63.51 | 49.78
+ LMSI [14] | ✓ | MMR1 | 24.83 | 43.76 | 64.90 | 66.38 | 49.97
+ Genixer [64] | ✓ | Geometry3K | 26.02 | 43.15 | 65.50 | 62.18 | 49.22
+ Genixer [64] | ✓ | GeoQA | 25.30 | 44.11 | 66.80 | 64.25 | 50.12
+ Genixer [64] | ✓ | MMR1 | 23.68 | 43.30 | 65.50 | 64.66 | 49.29
+ STIC [5] | ✓ | Geometry3K | 25.39 | 42.92 | 65.20 | 62.99 | 49.13
+ STIC [5] | ✓ | GeoQA | 23.49 | 42.87 | 64.30 | 63.62 | 48.57
+ STIC [5] | ✓ | MMR1 | 23.78 | 42.72 | 66.10 | 63.74 | 49.09
+ MM-UPT | ✓ | Geometry3K | 27.33 | 42.46 | 68.50 | 66.61 | 51.23
+ MM-UPT | ✓ | GeoQA | 27.07 | 43.68 | 68.90 | 68.22 | 51.97
+ MM-UPT | ✓ | MMR1 | 26.15 | 44.87 | 72.90 | 68.74 | 53.17

In this way, we compute pseudo-rewards via majority voting and apply standard GRPO to update the MLLM. This majority-based reward encourages the model to converge toward consistent, high-consensus responses, thereby enabling it to further exploit its existing knowledge from unlabeled data. The overview of our framework is shown in Figure 1 and Algorithm 1.
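To make the optimization concrete, here is a framework-free sketch of the clipped per-token GRPO surrogate from Section 3.2. This is an illustrative rendering under stated assumptions, not the EasyR1 implementation: the token log-probabilities under the new and old policies, and the KL term, are assumed to be supplied by the training framework.

```python
import math

def clipped_surrogate(logp_new, logp_old, adv, eps=0.2):
    """One token's contribution: min(γÂ, clip(γ, 1-ε, 1+ε)Â), where the
    importance ratio is γ = exp(log π_θ - log π_θold)."""
    gamma = math.exp(logp_new - logp_old)
    clipped = min(max(gamma, 1.0 - eps), 1.0 + eps)
    return min(gamma * adv, clipped * adv)

def grpo_objective(logps_new, logps_old, advs, eps=0.2, beta=0.01, kl=0.0):
    """J(θ): average the token-level surrogate within each response, average
    over the group of G responses, then subtract the β-weighted KL penalty."""
    per_response = [
        sum(clipped_surrogate(ln, lo, a, eps) for ln, lo in zip(lns, los)) / len(lns)
        for lns, los, a in zip(logps_new, logps_old, advs)
    ]
    return sum(per_response) / len(per_response) - beta * kl
```

When the new and old policies coincide ($\gamma = 1$), each token simply contributes its advantage and the clip is inactive; the clip only caps how much a single update can exploit a large ratio.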
4 Experiments

We conduct extensive experiments to evaluate the effectiveness of MM-UPT across various multi-modal LLMs, datasets, and benchmarks. Our experiments are designed to explore two key scenarios: (1) using human-created questions without ground-truth labels (Section 4.2), and (2) employing synthetic questions generated by the model itself, which inherently lack ground-truth labels (Section 4.3). Before presenting the experimental results, we first outline the baseline methods, evaluation benchmarks, and implementation details.

4.1 Experimental Setup

Baseline Methods. Several prior works have explored self-improvement in both LLMs and MLLMs. Since we focus on unsupervised self-improvement, we do not compare with methods that rely on external models (e.g., GPT-4o [16]) for supervision [15, 55, 60, 67, 68]. Instead, we compare with several fully unsupervised methods: LMSI [14], SRLM [49], Genixer [64], and STIC [5]. In particular, LMSI corresponds to supervised fine-tuning on self-generated content selected by majority voting. SRLM uses the model itself as an LLM-as-a-Judge [65] to provide its own rewards during DPO [33] training. Genixer prompts the
MLLM to first self-generate an answer and then self-check it. STIC applies DPO, where original images and good prompts are used to generate preferred answers, and corrupted images and bad prompts produce rejected answers. Additionally, we compare with GRPO [34] and rejection sampling-based SFT [39], two strong supervised methods. The details of these baseline methods are given in Appendix A.1.

Table 2: Performance comparison of MM-UPT using different synthetic data generation strategies. Both "In-Context Synthesizing" and "Direct Synthesizing" yield significant improvements over the base model and perform competitively with "Original Questions" on average, demonstrating the effectiveness of synthetic data for unsupervised self-improvement.

Model and Methods | Dataset | MathVision | MathVerse | MathVista | We-Math | Avg
Qwen2.5-VL-7B | – | 24.87 | 43.83 | 66.30 | 62.87 | 49.47
w/ Original Questions | Geo3K | 27.33 | 42.46 | 68.50 | 66.61 | 51.23 (3.6%↑)
w/ In-Context Synthesizing | Geo3K | 26.71 | 41.24 | 68.30 | 67.76 | 51.00 (3.1%↑)
w/ Direct Synthesizing | Geo3K | 26.88 | 43.53 | 69.90 | 68.97 | 52.32 (5.8%↑)
w/ Original Questions | GeoQA | 27.07 | 43.68 | 68.90 | 68.22 | 51.97 (5.1%↑)
w/ In-Context Synthesizing | GeoQA | 26.09 | 42.87 | 70.60 | 69.25 | 52.20 (5.5%↑)
w/ Direct Synthesizing | GeoQA | 26.25 | 44.64 | 71.50 | 68.28 | 52.67 (6.5%↑)
w/ Original Questions | MMR1 | 26.15 | 44.87 | 72.90 | 68.74 | 53.17 (7.5%↑)
w/ In-Context Synthesizing | MMR1 | 26.15 | 45.10 | 71.90 | 68.62 | 52.94 (7.0%↑)
w/ Direct Synthesizing | MMR1 | 26.15 | 44.11 | 70.40 | 67.99 | 52.16 (5.4%↑)

Benchmarks. We evaluate our method on four popular multi-modal mathematical reasoning benchmarks: MathVision [41], MathVista [24], MathVerse [57], and We-Math [32]. These benchmarks offer comprehensive evaluations with diverse problem types, including geometry, charts, and tables, featuring multi-subject and meticulously categorized visual math challenges across various knowledge concepts and granularity levels. We provide more details in Appendix A.2.
Implementation Details. We adopt the EasyR1 [53] framework for multi-modal unsupervised post-training, which is based on GRPO. Specifically, we set the number of training episodes to 15 and use the AdamW optimizer [22] with a learning rate of $1 \times 10^{-6}$, weight decay of $1 \times 10^{-2}$, and gradient clipping at a maximum norm of 1.0. The KL divergence coefficient $\beta$ in GRPO is set to 0.01 to stabilize training. The vision tower of the multi-modal model is also tuned, without freezing. Other hyperparameters follow the default settings of the EasyR1 framework.

4.2 Unsupervised Training on Standard Datasets

We first employ standard training datasets with masked labels to simulate the first scenario (i.e., using human-created questions without ground-truth answers). We conduct MM-UPT on Geometry3K [23], GeoQA [3], and MMR1 [18] using the Qwen2.5-VL-7B [1] model. These datasets cover a diverse set of visual math problems, including geometric diagrams, charts, and structured question formats (multiple-choice and fill-in-the-blank), serving as a strong foundation for models to self-improve their multi-modal mathematical reasoning abilities. More details of these datasets are given in Appendix A.3. Table 1 presents the main results on four challenging multi-modal mathematical reasoning benchmarks. We observe that MM-UPT achieves consistent improvements on average over the base Qwen2.5-VL-7B model across all datasets, also outperforming other baseline methods such as SRLM,
LMSI, Genixer, and STIC. Notably, MM-UPT improves the average score from 49.47 (base model) to 53.17 (with the MMR1 dataset), demonstrating its effectiveness in leveraging unlabeled data for self-improvement. In comparison, previous baselines provide only marginal gains or even degrade performance on certain benchmarks, highlighting the limitations of existing methods when applied to already strong models on multi-modal reasoning tasks. Furthermore, we find that MM-UPT is even competitive with supervised post-training methods, such as rejection sampling-based SFT [39] and GRPO [34]. These results underscore the potential of MM-UPT to further exploit the knowledge embedded in multi-modal models for self-improvement.

4.3 Unsupervised Training on Synthetic Datasets

To further explore the potential of MM-UPT, we investigate the use of unlabeled synthetic data to improve MLLMs. This aligns with the ultimate goal of MM-UPT: enabling continual self-improvement even after human-created data is exhausted. We use two strategies for generating synthetic training samples.

In-Context Synthesizing. Inspired by Self-Instruct [46], we adopt a data generation pipeline based on in-context examples. Each original example includes an image, a question, and its corresponding answer. To synthesize new data, we provide the base MLLM with the full original triplet (image, question, and answer) and ask it to generate a new question that is distinct from the original. During unsupervised post-training, the MLLM attempts to answer this new question, and we define the pseudo-label as the majority vote among the model's sampled responses.

Direct Synthesizing. In addition, we adopt a more straightforward approach to generating synthetic data. Instead of using the full triplet, we prompt the base MLLM with only the image, asking it to create a new question without any context from the original question or answer.
This produces a different type of synthetic sample, where the image remains the same but the question is generated entirely from visual input. As with the in-context approach, the pseudo-label for training is determined by aggregating several model responses through majority voting.

Results. In our experiments, we use the two methods above to generate synthetic data, leveraging Geometry3K [23], GeoQA [3], and MMR1 [18] as seed datasets and Qwen2.5-VL-7B as the base MLLM for data synthesis. MM-UPT is then applied to the same base model (i.e., Qwen2.5-VL-7B) using each of these synthetic datasets separately. Table 2 presents experimental results using the different synthetic data generation strategies. Both in-context and direct synthesizing lead to significant improvements over the base model, achieving performance comparable to training on original human-written questions. This shows that synthetic questions can effectively enhance the model's reasoning ability under MM-UPT. Notably, direct synthesizing even surpasses human-written questions on average (when applied to Geometry3K and GeoQA), demonstrating the model's strong ability to generate high-quality textual questions based solely on images. This highlights the potential for scalable and fully autonomous self-improvement in the multi-modal domain via visual-centric data synthesis.

Further Investigation. Moreover, we manually examine some of the synthetic data. We observe that in-context synthesizing often produces questions similar to the original ones by substituting conditions or expressions, resembling
data rephrasing. In contrast, direct synthesizing generates more diverse and novel questions. While some of the directly synthesized questions still contain hallucinations, many are of high quality and beneficial for unsupervised post-training. This underscores the potential of the direct synthesizing approach as a simple yet effective method for data generation, without the need for textual in-context examples. Below, we present two illustrative examples that showcase the effectiveness and quality of synthetic questions generated through both approaches.

Demo: Examples of synthetic data using different strategies.

Original Question: $BC \parallel DF$. $AB = x + 5$, $BD = 12$, $AC = 3x + 1$, and $CF = 15$. Find $x$.
In-Context Synthetic Question: Given that $BC \parallel DF$, $AB = 2x - 3$, $BD = 18$, $AC = x + 7$, and $CF = 24$. Find the value of $x$.
Directly Synthetic Question: In the given triangle $\triangle ADF$, point $B$ lies on $AD$ and point $C$ lies on $AF$. If $BC \parallel DF$, what is the ratio of the area of $\triangle ABC$ to the area of $\triangle ADF$?

Original Question: If $\angle RST$ is a right angle, $SU \perp RT$, $UV \perp ST$, and $\angle RTS = 47°$, find $\angle RSU$.
In-Context Synthetic Question: If $\angle RST$ is a right angle, $SU \perp RT$, $UV \perp ST$, and $\angle RTS = 47°$, find $\angle VST$.
Directly Synthetic Question: In the given triangle $\triangle RST$, point $U$ lies on $RT$ such that $SU$ is perpendicular to $RT$. Point $V$ lies on $ST$ such that $UV$ is perpendicular to $ST$. If $RU = 12$ units, $UT = 16$ units, and $SV = 9$ units, find the length of $VT$.

4.4 Ablation Study

To evaluate the generality and effectiveness of MM-UPT, we conduct an ablation study across a range of backbone models beyond the primary Qwen2.5-VL-7B [1]. Specifically, we apply MM-UPT to several state-of-the-art models of varying scales, including Qwen2.5-VL-3B [1], MM-Eureka-7B [28], and ThinkLite-VL-7B [44]. All models are post-trained with MM-UPT on the Geometry3K dataset [23], without access to any labels. As summarized in Table 3, MM-UPT consistently improves the average performance of all tested models, despite the absence of supervision during post-training.
Notably, ThinkLite-VL-7B with MM-UPT achieves the highest average score (54.07) and shows substantial gains on the MathVista [24] benchmark, reaching a score of 74.70.

Table 3: Ablation study using different models besides Qwen2.5-VL-7B. We conduct this experiment on the Geometry3K [23] dataset without labels.

Models | MathVision | MathVerse | MathVista | We-Math | Avg
Qwen2.5-VL-7B | 24.87 | 43.83 | 66.30 | 62.87 | 49.47
Qwen2.5-VL-7B + MM-UPT | 27.33 | 42.46 | 68.50 | 66.61 | 51.23 (3.6%↑)
MM-Eureka-7B | 28.06 | 50.46 | 69.40 | 64.48 | 53.10
MM-Eureka-7B + MM-UPT | 28.95 | 50.63 | 69.10 | 66.44 | 53.78 (1.3%↑)
ThinkLite-VL-7B | 26.94 | 46.58 | 69.00 | 67.99 | 52.63
ThinkLite-VL-7B + MM-UPT | 26.91 | 47.26 | 74.70 | 67.41 | 54.07 (2.8%↑)
Qwen2.5-VL-3B | 19.47 | 33.58 | 56.30 | 50.63 | 39.00
Qwen2.5-VL-3B + MM-UPT | 22.17 | 32.39 | 57.10 | 55.22 | 41.72 (7.4%↑)

In addition, Qwen2.5-VL-3B, the smallest model in our study, also benefits from MM-UPT (+7.4% on average), demonstrating the robustness and adaptability of MM-UPT for performance enhancement. These results collectively show that MM-UPT can be readily applied to various multi-modal models to enable consistent self-improvement.

5 Deeper Analysis

Going beyond standard benchmarking, we conduct a deeper analysis to investigate MM-UPT's training dynamics (Section 5.1) and performance boundaries (Sections 5.2 and 5.3). This helps us better understand its behavior and potential applications.

5.1 Training Dynamics

To
better understand the behavior of MM-UPT during training, we monitor several diagnostic metrics, including the majority voting reward and entropy, both of which are label-free and provide insight in the absence of ground-truth supervision. In particular, the majority voting reward is calculated following Equation 2. Entropy can be used as an unsupervised objective that measures the uncertainty of the model's generations [43, 48, 56]. For a group of responses $O = \{o_i\}_{i=1}^G$ sampled for question $q$ and image $I$, we cluster the responses according to their meaning: if two responses share the same meaning (i.e., the same extracted answer), they are merged into the same cluster in the semantic space. This results in $K$ ($K \le G$) clusters $C = \{c_j\}_{j=1}^K$. The empirical distribution over clusters is defined as $p(c_j \mid q, I) = \frac{|c_j|}{G}$, where $|c_j|$ denotes the number of responses belonging to $c_j$. The semantic entropy (denoted $H$) over the distribution of the model's response meanings can then be estimated as:

$$H = -\sum_{c_j \in C} p(c_j \mid q, I) \log p(c_j \mid q, I).$$

Figure 2 presents the MM-UPT training curves of the key metrics on Qwen2.5-VL-7B using the MMR1 dataset. We observe that the majority voting reward consistently increases over time, accompanied by a steady decrease in the entropy. This indicates that the model is converging toward more consistent predictions, reflecting improved confidence and stability in its responses. Additionally, we track the change in average benchmark accuracy and effective rank [48] throughout training. The accuracy exhibits an upward trend, demonstrating that our MM-UPT framework, based on an online reinforcement learning algorithm, effectively enables the model to self-improve continuously and iteratively. The effective rank [48] further measures the amount of knowledge the model comprehends in the datasets. During training, the internal knowledge of the model is exploited, leading to a consistent increase in the effective rank on the benchmark.
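When responses are clustered by their extracted answers, the semantic entropy above reduces to a few lines of code. This is a minimal sketch; mapping each response to its answer string is assumed to be done beforehand by the answer extractor.

```python
import math
from collections import Counter

def semantic_entropy(answers):
    """H = -Σ_j p(c_j) log p(c_j), where each cluster c_j groups the
    responses with identical extracted answers and p(c_j) = |c_j| / G."""
    G = len(answers)
    return -sum((n / G) * math.log(n / G) for n in Counter(answers).values())

# Full agreement gives zero entropy; G distinct answers give log(G).
assert semantic_entropy(["7", "7", "7", "7"]) == 0.0
assert abs(semantic_entropy(["a", "b", "c", "d"]) - math.log(4)) < 1e-12
```

A falling value of this quantity over training steps is exactly the "steady decrease in entropy" observed in Figure 2: the sampled answer distribution concentrates on fewer clusters.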
Figure 2: Training dynamics of MM-UPT using Qwen2.5-VL-7B on the MMR1 dataset. We plot the majority voting reward, semantic entropy, average benchmark accuracy, and effective rank over the course of unsupervised post-training.

Table 4: Performance of MM-UPT on the difficult ThinkLite-11K dataset. Results show that MM-UPT leads to a decrease in performance when applied to a dataset where the model has limited prior knowledge, highlighting the limitations of majority voting in such scenarios.

Models | Training Data | MathVision | MathVerse | MathVista | We-Math | Avg
Qwen2.5-VL-7B | – | 24.87 | 43.83 | 66.30 | 62.87 | 49.47
Qwen2.5-VL-7B + MM-UPT | ThinkLite-11K | 21.12 | 37.10 | 59.20 | 59.02 | 44.11

5.2 Why Does MM-UPT Work?

Majority voting [45] is a fundamental ensemble technique that enhances prediction reliability by aggregating multiple independent responses. In our framework, it offers a simple yet powerful pseudo-reward signal that helps the model self-improve, particularly when the model is moderately reliable on the unlabeled datasets. We consider a simplified explanation using a classical toy example. Suppose that each response hits the correct answer of a binary question with probability $p > 0.5$. We then sample the model's response $n$ times independently. The final answer is determined by a majority vote, that is, the answer that
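The toy argument can be checked numerically: for a binary question answered correctly with probability $p > 0.5$, the chance that the majority of $n$ independent samples is correct is a binomial tail probability, which grows toward 1 as $n$ increases. The helper below is our own illustration, not part of the paper.

```python
from math import comb

def majority_correct(p, n):
    """P[the majority of n i.i.d. binary responses is correct] for odd n:
    the binomial tail P[X >= n//2 + 1] with X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 0.7, a single sample is right 70% of the time, while the
# majority of 9 samples is right about 90% of the time.
for n in (1, 3, 5, 9):
    print(n, round(majority_correct(0.7, n), 3))
```

The same computation also hints at the failure mode shown in Table 4: when $p < 0.5$ (the model has limited prior knowledge of the data), the majority vote amplifies errors instead of correcting them.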