Table 3: Imitation learning accuracy on the RoboMimic [15] environment. Our method (in green) compared against baselines (in red) and ablations (in blue). See text for details.

diffusion policy that uses 10 DDIM [17] steps, Row 3: a conventional flow-matching policy [5], and Row 4: Streaming Diffusion Policy [14], a recent method that runs diffusion policy in a streaming manner (see Sec. 7 for details). We also compare against a streaming flow policy that does not construct stabilizing flows during training, i.e., uses k = 0 (in blue). This ablation is designed to measure the contribution of stabilization to task performance. Following Chi et al. [1], we report the average score for the 5 best checkpoints, the best score across all checkpoints, and the average latency per action for each method. We also conduct real-world experiments on a Franka Emika Panda robot arm with a RealSense D435f depth camera; see our project website for details.

In Table 5, we report results on the Push-T environment: a simulated 2D planar pushing task where the robot state is the 2-D position of the cylindrical pusher in global coordinates, and actions are 2-D setpoints tracked by a PD controller. Push-T contains 200 training demonstrations and 50 evaluation episodes. We perform experiments in two settings: when simulator state is used as observations, and when images are used as observations. "Action imitation" is the standard practice of imitating the action sequences provided in the benchmark training set. We also perform experiments with "state imitation" (see Sec. 5), where we directly imitate the measured 2D positions of the robot. Here, we use the known ground-truth robot position at the beginning of each action chunk as the starting point for velocity field integration.

In Table 3, we report results on the RoboMimic environment, specifically the "lift" and "can" tasks, with state inputs. Both tasks involve predicting sequences of 6-DOF end-effector poses with respect to a global frame, which are tracked by a PD controller. Each task contains 300 training demonstrations and 50 evaluation episodes. The tasks involve picking objects and placing them at specific locations, including picking a square nut and placing it on a rod.

The neural network for streaming flow policy, v_θ : A × [0,1] × H → TA, is structurally similar to that of diffusion/flow policies (e.g., ε_θ : A^T × [0,1] × H → A^T), with the only change being the input and output spaces (action space A vs. trajectory space A^T). Therefore, we are able to re-use existing diffusion/flow policy architectures [1] by changing the input and output dimensions of the network and replacing the 1-D temporal convolution/attention layers over action sequences [1, 18] with a fully connected layer. Furthermore, due to the reduced dimensionality of the flow sampling space, we found that streaming flow policy is faster to train and has a smaller GPU memory footprint than diffusion/flow policies.

Conclusions: Streaming flow policy performs comparably to diffusion policy and other baselines on most tasks, while being significantly faster per action.
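To make the architectural change concrete, the following is a minimal PyTorch-style sketch of such a velocity network. The layer widths, Mish activations, and the assumed pre-computed observation embedding are illustrative choices, not the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class StreamingFlowVelocityNet(nn.Module):
    """Sketch of v_theta(a, t | h): maps a single action (not a trajectory),
    a flow time t in [0, 1], and an observation/history embedding h to a
    velocity in the tangent space T_A (same dimension as the action space).
    Hidden sizes and activation are hypothetical stand-ins."""

    def __init__(self, action_dim: int, obs_embed_dim: int, hidden: int = 256):
        super().__init__()
        # Fully connected layers replace the 1-D temporal conv/attention
        # that diffusion/flow policies apply over action sequences.
        self.net = nn.Sequential(
            nn.Linear(action_dim + 1 + obs_embed_dim, hidden),
            nn.Mish(),
            nn.Linear(hidden, hidden),
            nn.Mish(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, a: torch.Tensor, t: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # a: (B, action_dim), t: (B, 1), h: (B, obs_embed_dim)
        return self.net(torch.cat([a, t, h], dim=-1))
```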
Furthermore, the reported latency does not even take into account the fact that streaming flow policy can parallelize action generation with robot execution. In practice, this can avoid delays and jerky robot movements. Diffusion policy can be sped up by running fewer diffusion steps via DDIM [17], and flow-matching policy is also faster than diffusion policy. However, their speed comes at the cost of a sometimes significant reduction in accuracy. In App. C, we analyze the performance of streaming flow policy as a function of the action chunk horizon T_chunk.

7 Related work

Learning dynamical systems: Our work is closely related to prior work on learning dynamical systems [19–23]. A key difference is that most prior works assume a single, deterministic future trajectory given an input state. In contrast, we focus on learning multi-modal distributions over trajectories, which is known to be essential for behavior cloning in robotics [1]. For example, Sochopoulos et al. [21] learn a neural ODE [11] by minimizing the distance between the predicted and demonstrated trajectories. This is susceptible to undesirable "averaging" of distinct behaviors that achieve the same task. Our approach learns a neural ODE endowed with an initial noise distribution that induces a distribution over future trajectories; this is also known as a continuous normalizing flow [11, 12]. We use the recently proposed flow matching framework [3] to fit the predicted trajectory distribution to the training data. An important consequence is that our method aggregates "noised" demonstration trajectories as additional training samples, whereas prior works train only on states directly present in the demonstrations. Furthermore, most prior works assume a time-invariant velocity field [19–21]. Our velocity field depends on time and allows learning non-Markovian behaviors for tasks like spreading sauce on pizza dough, where the end-effector must rotate a fixed number of times. Finally, most works on learning dynamical systems focus on point-stability around goal states [19–21]; we do not assume goal states, and instead construct flows that stabilize around demonstration trajectories.

Flow matching: Flow matching [3] is a recent technique for learning complex, multi-modal distributions that has been used to model images [3, 24, 25], videos [26, 27], molecular structures [28–30], and robot action sequences [2, 5, 6, 31, 32]. However, the flow sampling process starts from Gaussian noise, and the distribution of interest is only modeled at the final timestep t = 1. Our insight is to treat the entire flow trajectory as a sample from the target distribution over action sequences.

Streaming Diffusion Policy: The work most closely related to ours is Streaming Diffusion Policy [14], an adaptation of discrete-time diffusion policies [1]. Instead of maintaining all actions in the sequence at the same noise level, Streaming Diffusion Policy maintains a rolling buffer of actions at increasing noise levels. Every diffusion step reduces the noise level of all actions by one, fully de-noising the oldest action in the buffer, which can then be streamed to the robot. However, this method requires maintaining an action buffer of the length of the prediction horizon, even if the action horizon is much shorter.
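The following is a schematic sketch of this rolling-buffer scheme as we paraphrase it from [14]; `denoise_one_level` is a hypothetical stand-in for the learned per-level denoiser, and the buffer semantics are illustrative rather than the released implementation.

```python
from collections import deque
import numpy as np

def stream_actions_rolling_buffer(denoise_one_level, horizon, action_dim, n_actions):
    """Schematic rolling buffer in the style of Streaming Diffusion Policy [14].
    Slot i holds an action at noise level i + 1; each pass reduces every slot's
    noise level by one, so the head becomes fully denoised and is streamed to
    the robot, while fresh noise enters at the tail."""
    buffer = deque(np.random.randn(horizon, action_dim))  # up-front initialization cost
    for _ in range(n_actions):
        for level in range(len(buffer)):
            buffer[level] = denoise_one_level(buffer[level], level + 1)
        yield buffer.popleft()                         # fully denoised: execute now
        buffer.append(np.random.randn(action_dim))     # enters at the highest noise level
```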
Furthermore, there is an up-front cost to initialize the buffer with increasing noise levels. The rolling-buffer approach has also been applied to video prediction [33] and character motion generation [34]. Our method is more economical, since it computes only as many actions as are streamed to the robot, without requiring a buffer. We evaluate our method against Streaming Diffusion Policy in Sec. 6.

8 Conclusion

In this work, we have presented a novel approach to imitation learning that addresses the computational limitations of existing diffusion and flow-matching policies. Our key contribution is a simplified approach that treats action trajectories as flow trajectories. This enables incremental integration of a learned velocity field, allowing actions to be streamed to the robot during the flow sampling process. The streaming capability makes our method particularly well-suited for receding horizon policy execution. Despite the streaming nature of our approach, the flow matching framework guarantees the ability to model multi-modal action trajectories. By constructing flows that stabilize around demonstration trajectories, we reduce distribution shift and improve imitation learning performance. Our experimental results demonstrate that streaming flow policy performs comparably to prior imitation learning approaches on benchmark tasks, while enabling faster policy execution and tighter sensorimotor loops, making it more practical for reactive, real-world robot control.

9 Limitations

In this section, we discuss some limitations of our approach.

9.1 SFP does not match the joint distribution, only per-timestep marginal distributions

Our flow matching framework ensures that the learned distribution over trajectories conditioned on the history matches the training distribution in terms of the marginal distributions of actions at each timestep t ∈ [0, 1]. However, we do not guarantee that the joint distribution of actions across a trajectory matches the training distribution. This is in contrast to diffusion policy, which is able to match the joint distribution since the diffusion model operates in trajectory space A^T. Figs. 3 and 4 illustrate a toy example where streaming flow policy matches marginal distributions but not the joint distribution. The x-axis represents 1-D robot actions, and the y-axis represents flow time (t ∈ [0, 1]). Fig. 3a shows two trajectories in blue and red, with shapes "S" and "Ƨ" respectively. The trajectories intersect at t = 0.5. The learned flow field is shown in Fig. 3c, and the induced marginal distribution over actions is shown in Fig. 3b. The marginal distribution of actions matches the training distribution at each t ∈ [0, 1]. Trajectories sampled from the flow field are shown in Fig. 3d. The trajectory distribution contains two modes of equal probability: trajectories that always lie either in a < 0 (shown in blue), or in a > 0 (shown in red). The shapes formed by sampled trajectories — "Ɛ" and "3" respectively — do not match the shapes of trajectories in the training data. A similar phenomenon is illustrated in Fig. 4 using the latent-space variant of streaming flow policy (see App. B) trained on the same dataset of intersecting trajectories. While the marginal distribution of actions again matches the training distribution, the sampled trajectories contain four modes, with shapes "S", "Ƨ", "Ɛ" and "3". Note that the per-timestep marginal distributions over actions still match the training data.
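A small numerical sketch of this phenomenon follows; the demonstration shapes, tube width, and gain are illustrative assumptions, not the paper's trained model. It evaluates the closed-form optimal velocity field of Eq. 6 under narrow Gaussian tubes and shows that sampled trajectories never cross a = 0, even though the demonstrations do.

```python
import numpy as np

def xi(t, s):      # "S"-shaped demonstration (s = +1) and its mirror (s = -1)
    return s * (t - 0.5 + 0.1 * np.sin(2 * np.pi * t))

def xi_dot(t, s):  # its time derivative
    return s * (1.0 + 0.2 * np.pi * np.cos(2 * np.pi * t))

sigma, k, signs = 0.05, 20.0, np.array([1.0, -1.0])

def v_star(a, t):
    # Eq. 6: posterior-weighted average of the two conditional velocities.
    w = np.exp(-((a - xi(t, signs)) ** 2) / (2 * sigma**2))
    w = w / w.sum()
    return float(np.sum(w * (xi_dot(t, signs) - k * (a - xi(t, signs)))))

rng = np.random.default_rng(0)
dt, stayed, trials = 1e-3, 0, 200
for _ in range(trials):
    a0 = rng.choice(signs) * (-0.5) + sigma * rng.standard_normal()  # near xi(0)
    a = a0
    for step in range(1000):                 # Euler-integrate the flow over [0, 1]
        a += v_star(a, step * dt) * dt
    stayed += int(np.sign(a) == np.sign(a0))
print(f"{stayed}/{trials} sampled trajectories stay on one side of a = 0")
# Expected ~200/200: samples form "3"-like shapes even though the demos cross.
```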
Figure 3: A toy example illustrating how streaming flow policy matches the marginal distribution of actions in the trajectory at all time steps, but not necessarily their joint distribution. The x-axis represents a 1-D action space, and the y-axis represents both trajectory time and flow time. (a) The bi-modal training set contains two intersecting demonstration trajectories, illustrated in blue and red, with shapes "S" and "Ƨ" respectively. (b) The marginal distribution of actions at each time step learned by our streaming flow policy. The marginal distributions perfectly match the training data. (c) The learned velocity flow field v_θ(a, t | h) that yields the marginal distributions in (b). (d) Trajectories sampled from the learned velocity field. Trajectories that start from a < 0 are shown in blue, and those starting from a > 0 are shown in red. The sampled trajectories have shapes "Ɛ" and "3", with equal probability. These shapes are different from the shapes "S" and "Ƨ" in the training distribution, although their marginal distributions are identical.

9.2 Streaming flow policies exhibit compositionality

The loss of fidelity to the joint distribution is a potential weakness of our framework. Therefore, this framework may not be the right choice when learning the correct joint distribution is crucial. However, another perspective is to think of our method as providing compositionality over training demonstrations: sampled trajectories can be composed of pieces from across the training data.

Figure 4: Different variants of streaming flow policy can produce different joint distributions of actions that are consistent with the marginal distributions in the training data. This example is produced using the latent-variable version of streaming flow policy, described in App. B. (a) The marginal distribution of actions at each time step learned by the streaming flow policy matches the training data. (b) Samples from the trained policy produce four modes with shapes "S", "Ƨ", "Ɛ" and "3", whereas the training data contains only two modes with shapes "S" and "Ƨ".

For many robotics tasks, compositionality might be both valid and desirable. For example, in quasi-static tasks where the robot moves slowly, if two demonstration trajectories are valid, then compositions across these trajectories are often also valid. Under this assumption, compositionality allows the flow model to learn many valid combinations of partial trajectories from fewer demonstrations. What constraints on trajectories reflected in the training data can streaming flow policy learn? Streaming flow policy is unable to capture global constraints that can only be represented in the joint distribution. However, it can learn certain local constraints.

9.3 SFPs can learn arbitrary position constraints

Robot actions a ∈ A may be constrained to lie in a subset Q ⊆ A. For example, Q may reflect the joint limits of a robot arm. A well-trained streaming flow policy should learn this constraint as well. To see why, consider Eq. 5, which states that the learned marginal density of actions p*(a | t, h) = ∫_ξ p_ξ(a | t) p_D(ξ | h) dξ at time t is a weighted average of the marginal densities of the conditional flows p_ξ(a | t). Recall that we construct p_ξ(a | t) to be narrow Gaussian tubes around demonstration trajectories ξ. Assume that the thickness of the Gaussian tube is sufficiently small
that a ∉ Q ⟹ p_ξ(a | t) < ε, for some small ε > 0 and for all ξ, t. Then we have from Eq. 5 that p_ξ(a | t) < ε ⟹ p*(a | t, h) < ε for all t ∈ [0, 1]. Therefore, the probability of sampling an action a that violates the constraint Q is extremely low.

9.4 SFPs can learn convex velocity constraints

Theorem 2 of Lipman et al. [3] implies that the minimizer of the conditional flow matching loss, v* := argmin_v L_CFM(v, p_D), has the following form:

v*(a, t | h) = ∫_ξ v_ξ(a, t) · [ p_D(ξ | h) p_ξ(a | t) / ∫_{ξ′} p_D(ξ′ | h) p_{ξ′}(a | t) dξ′ ] dξ
             = ∫_ξ v_ξ(a, t) p_D(ξ | a, t, h) dξ                  (6)
             ≈ ∫_ξ ξ̇(t) p_D(ξ | a, t, h) dξ   (assuming k ≈ 0),

where the bracketed ratio is the Bayesian posterior p_D(ξ | a, t, h). Intuitively, the target velocity field v* at (a, t) is a weighted average of the conditional flow velocities v_ξ(a, t) over demonstrations ξ. The weight for ξ is the posterior probability of ξ, where the prior p_D(ξ | h) is the probability of ξ given h in the training distribution, and the likelihood p_ξ(a | t) is the probability that the conditional flow around ξ generates a at time t. For sufficiently small values of k, we have from Eq. 2 that v_ξ(a, t) ≈ ξ̇(t). Note that v* is then a convex combination of demonstration velocities ξ̇(t). Consider convex constraints over velocities, i.e., ξ̇(t) is constrained to lie in a convex set C for all ξ with non-zero support p_D(ξ) > 0 and for all t ∈ [0, 1]. This is the case, for example, when robot joint velocities lie in a closed interval [v_min, v_max]. Then, Eq. 6 implies that v* also lies in C.

References

[1] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. In Proceedings of Robotics: Science and Systems (RSS), Daegu, Republic of Korea, July 2023.
[2] K. Black, N. Brown, D. Driess, A. Esmail, M. Equi, C. Finn, N. Fusai, L. Groom, K. Hausman, B. Ichter, et al. π0: A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024.
[3] Y. Lipman, R. T. Chen, H. Ben-Hamu, M. Nickel, and M. Le. Flow matching for generative modeling. In International Conference on Learning Representations (ICLR), 2023.
[4] Octo Model Team, D. Ghosh, H. Walke, K. Pertsch, K. Black, O. Mees, S. Dasari, J. Hejna, C. Xu, J. Luo, T. Kreiman, Y. Tan, L. Y. Chen, P. Sanketi, Q. Vuong, T. Xiao, D. Sadigh, C. Finn, and S. Levine. Octo: An open-source generalist robot policy. In Proceedings of Robotics: Science and Systems (RSS), Delft, Netherlands, July 2024.
[5] F. Zhang and M. Gienger. Affordance-based robot manipulation with flow matching. arXiv preprint arXiv:2409.01083, 2024.
[6] S. Ye and M. C. Gombolay. Efficient trajectory forecasting and generation with conditional flow matching. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2816–2823. IEEE, 2024.
[7] A. Sridhar, D. Shah, C. Glossop, and S. Levine. NoMaD: Goal masked diffusion policies for navigation and exploration. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 63–70. IEEE, 2024.
[8] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning (ICML), pages 2256–2265. PMLR, 2015.
[9] J. Ho, A. Jain, and P. Abbeel. Denoising
diffusion probabilistic models. Advances in Neural Information Processing Systems (NeurIPS), 33:6840–6851, 2020.
[10] A. Block, A. Jadbabaie, D. Pfrommer, M. Simchowitz, and R. Tedrake. Provable guarantees for generative behavior cloning: Bridging low-level stability and high-level behavior. In Advances in Neural Information Processing Systems (NeurIPS), volume 36, pages 48534–48547, New Orleans, USA, December 2023.
[11] R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems (NeurIPS), 31, 2018.
[12] W. Grathwohl, R. T. Q. Chen, J. Bettencourt, and D. Duvenaud. FFJORD: Free-form continuous dynamics for scalable reversible generative models. In International Conference on Learning Representations (ICLR), 2019.
[13] T. Z. Zhao, V. Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation with low-cost hardware. In Proceedings of Robotics: Science and Systems (RSS), Daegu, Republic of Korea, July 2023.
[14] S. H. Høeg, Y. Du, and O. Egeland. Streaming diffusion policy: Fast policy synthesis with variable noise diffusion models. In IEEE International Conference on Robotics and Automation (ICRA), Atlanta, USA, May 2025.
[15] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese, Y. Zhu, and R. Martín-Martín. What matters in learning from offline human demonstrations for robot manipulation. In 5th Annual Conference on Robot Learning (CoRL), London & Virtual, November 2021.
[16] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Conference on Robot Learning (CoRL), pages 158–168. PMLR, 2022.
[17] J. Song, C. Meng, and S. Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations (ICLR), 2021.
[18] M. Janner, Y. Du, J. Tenenbaum, and S. Levine. Planning with diffusion for flexible behavior synthesis. In International Conference on Machine Learning (ICML), volume 162 of Proceedings of Machine Learning Research, pages 9902–9915, Baltimore, USA, July 2022.
[19] S. M. Khansari-Zadeh and A. Billard. Imitation learning of globally stable non-linear point-to-point robot motions using nonlinear programming. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2676–2683. IEEE, 2010.
[20] S. M. Khansari-Zadeh and A. Billard. Learning stable nonlinear dynamical systems with Gaussian mixture models. IEEE Transactions on Robotics (T-RO), 27(5):943–957, 2011.
[21] A. Sochopoulos, M. Gienger, and S. Vijayakumar. Learning deep dynamical systems using stable neural ODEs. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 11163–11170. IEEE, 2024.
[22] A. J. Ijspeert, J. Nakanishi, H. Hoffmann, P. Pastor, and S. Schaal. Dynamical movement primitives: Learning attractor models for motor behaviors. Neural Computation, 25(2):328–373, 2013.
[23] S. Schaal, A. Ijspeert, and A. Billard. Computational approaches to motor learning by imitation. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 358(1431):537–547, 2003.
[24] P. Esser, S. Kulal, A. Blattmann, R. Entezari, J. Müller, H. Saini, Y. Levi, D. Lorenz, A. Sauer, F. Boesel, et al. Scaling
rectified flow transformers for high-resolution image synthesis. In Proceedings of the 41st International Conference on Machine Learning (ICML), 2024.
[25] N. Ma, M. Goldstein, M. S. Albergo, N. M. Boffi, E. Vanden-Eijnden, and S. Xie. SiT: Exploring flow and diffusion-based generative models with scalable interpolant transformers. In European Conference on Computer Vision (ECCV), pages 23–40. Springer, 2024.
[26] Y. Jin, Z. Sun, N. Li, K. Xu, H. Jiang, N. Zhuang, Q. Huang, Y. Song, Y. Mu, and Z. Lin. Pyramidal flow matching for efficient video generative modeling. arXiv preprint arXiv:2410.05954, 2024.
[27] A. Polyak et al. Movie Gen: A cast of media foundation models. arXiv preprint arXiv:2410.13720, 2025.
[28] B. Jing, B. Berger, and T. Jaakkola. AlphaFold meets flow matching for generating protein ensembles. arXiv preprint arXiv:2402.04845, 2024.
[29] A. J. Bose, T. Akhound-Sadegh, G. Huguet, K. Fatras, J. Rector-Brooks, C.-H. Liu, A. C. Nica, M. Korablyov, M. Bronstein, and A. Tong. SE(3)-stochastic flow matching for protein backbone generation. arXiv preprint arXiv:2310.02391, 2023.
[30] L. Klein, A. Krämer, and F. Noé. Equivariant flow matching. Advances in Neural Information Processing Systems (NeurIPS), 36:59886–59910, 2023.
[31] N. Funk, J. Urain, J. Carvalho, V. Prasad, G. Chalvatzaki, and J. Peters. ActionFlow: Equivariant, accurate, and efficient policies with spatially symmetric flow matching. arXiv preprint arXiv:2409.04576, 2024.
[32] M. Braun, N. Jaquier, L. Rozo, and T. Asfour. Riemannian flow matching policy for robot motion learning. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5144–5151. IEEE, 2024.
[33] D. Ruhe, J. Heek, T. Salimans, and E. Hoogeboom. Rolling diffusion models. In Proceedings of the 41st International Conference on Machine Learning (ICML), volume 235 of Proceedings of Machine Learning Research, pages 42818–42835. PMLR, July 2024.
[34] Z. Zhang, R. Liu, R. Hanocka, and K. Aberman. TEDi: Temporally-entangled diffusion for long-term motion synthesis. In ACM SIGGRAPH 2024 Conference Papers, pages 1–11, 2024.

Appendix

Streaming Flow Policy: Simplifying diffusion/flow-matching policies by treating action trajectories as flow trajectories

A Proof of Theorem 1

Integrating learned velocity fields can suffer from drift, since errors accumulate during integration. By adding a stabilization term, we can correct deviations from the demonstration trajectory. The stabilizing velocity field is:

v_ξ(a, t) = −k (a − ξ(t)) + ξ̇(t),   (7)

where the first term is the stabilization term, the second term is the path velocity, and k > 0 is the stabilizing gain. This results in exponential convergence to the demonstration:

d/dt (a − ξ(t)) = −k (a − ξ(t))   (8)
⟹ [1 / (a − ξ(t))] · d/dt (a − ξ(t)) = −k   (9)
⟹ d/dt log(a − ξ(t)) = −k   (10)
⟹ log(a − ξ(t)) |₀ᵗ = −∫₀ᵗ k dt   (11)
⟹ log[ (a(t) − ξ(t)) / (a0 − ξ(0)) ] = −kt   (12)
⟹ a(t) = ξ(t) + (a0 − ξ(0)) e^{−kt}   (13)

Since a0 ∼ N(ξ(0), σ0²) (see Eq. 1), and a(t) is linear in a0, we have by linearity of Gaussian distributions that:

p_ξ(a | t) = N(a; ξ(t), σ0² e^{−2kt})   (14)  □
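The closed form of Eq. 14 is easy to check numerically. The following sketch uses an arbitrary smooth demonstration and illustrative constants: Euler-integrating the stabilizing field of Eq. 7 from a0 ∼ N(ξ(0), σ0²) should yield std(a(t)) ≈ σ0 e^{−kt} at every t.

```python
import numpy as np

xi = lambda t: np.sin(2 * np.pi * t)                 # illustrative demonstration
xi_dot = lambda t: 2 * np.pi * np.cos(2 * np.pi * t)
k, sigma0, dt, n = 5.0, 0.1, 1e-3, 100_000

rng = np.random.default_rng(0)
a = xi(0.0) + sigma0 * rng.standard_normal(n)        # initial distribution (Eq. 1)
for step in range(1000):                             # integrate t from 0 to 1
    t = step * dt
    a += (-k * (a - xi(t)) + xi_dot(t)) * dt         # stabilizing field (Eq. 7)

print(f"empirical std at t = 1: {a.std():.2e}")
print(f"Eq. 14 prediction:      {sigma0 * np.exp(-k):.2e}")  # 0.1 * e^-5
```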
B Decoupling stochasticity via latent variables

In order to learn multi-modal distributions during training, streaming flow policy as introduced in Sec. 3 requires a small amount of Gaussian noise added to the initial action. However, we wish to avoid adding noise to actions at test time. We now present a variant of streaming flow policy in an extended state space, obtained by introducing a latent variable z ∈ A. The latent variable z decouples stochasticity from the flow trajectory, allowing us to sample multiple modes of the trajectory distribution at test time while deterministically starting the sampling process from the most recently generated action.

We now define a conditional flow in the extended state space (a, z) ∈ A². We define the initial distribution by sampling a0 and z0 independently: a0 is sampled from a vanishingly narrow Gaussian distribution centered at the initial action ξ(0) of the demonstration trajectory, with an extremely small variance σ0 ≈ 0, and z0 is sampled from a standard normal distribution, as in standard diffusion models [9] and flow matching [3]:

Initial sample:  z0 ∼ N(0, I)   (15)
                 a0 ∼ N(ξ(0), σ0²)   (16)

Figure 5: Constructing a conditional flow using auxiliary stochastic latent variables instead of adding noise to actions. In this toy example, the x-axis represents a 1-D action space, and the y-axis represents both trajectory time and flow time. (a) A toy bi-modal training set contains two trajectories shown in red and blue, the same as in Fig. 1a. Given a demonstration trajectory ξ from the training set (e.g., the demonstration in blue), we design a velocity field v_ξ(a, z, t) that takes as input the time t ∈ [0, 1], the action a at time t, and an additional latent variable z. The latent variable is responsible for injecting noise into the flow sampling process, allowing the initial action a(0) to be deterministically set to the initial action ξ(0) of the demonstration. The latent variable z(0) ∼ N(0, 1) is sampled from the standard normal distribution at the beginning of the flow process, similar to conventional diffusion/flow policies. The velocity field v_ξ(a, z, t) generates trajectories in an extended sample space [0, 1] → A², where a and z are correlated and co-evolve with time. (b, c) The marginal distributions of the action a(t) and the latent variable z(t), respectively, at each time step. Overlaid in red are the a- and z-projections, respectively, of trajectories sampled from the velocity field. The action evolves in a narrow Gaussian tube around the demonstration, while the latent variable starts from N(0, 1) at t = 0 and converges to the demonstration trajectory at t = 1; see App. B for a full description of the velocity field.

Table 4: Hyperparameters used in the variant of streaming flow policy that uses stochastic latent variables.

| Symbol | Meaning | Domain |
|---|---|---|
| σ0 | Initial standard deviation | ℝ+ |
| σ1 | Final standard deviation | ℝ+ |
| k | Stabilizing gain | ℝ≥0 |
| σr | Residual standard deviation, σr := √(σ1² − σ0² e^{−2k}) | ℝ≥0 |

We assume hyperparameters σ0, σ1, and k, corresponding to the initial and final standard deviations of the action variable a in the conditional flow and the stabilizing gain, respectively. Furthermore, we constrain them such that σ1 ≥ σ0 e^{−k}, and define σr := √(σ1² − σ0² e^{−2k}). We then construct the joint flow trajectories of (a, z) starting from (a(0), z(0)) as:

Flow trajectory diffeomorphism:
a(t | ξ, a0, z0) = ξ(t) + (a0 − ξ(0)) e^{−kt} + (σr t) z0
z(t | ξ, a0, z0) = (1 − (1 − σ1) t) z0 + t ξ(t)   (17)

The flow is a diffeomorphism from A² to A² for every t ∈ [0, 1]. Note that a(0 | ξ, a0, z0) = a0 and z(0 | ξ, a0, z0) = z0, so the diffeomorphism is the identity at t = 0. The marginal distributions at t = 1 for a and z are given by a(1 | ξ) ∼ N(ξ(1), σ1²) and z(1 | ξ) ∼ N(ξ(1), σ1²).
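These t = 1 marginals can be verified directly from Eq. 17 by Monte Carlo sampling. The sketch below uses an arbitrary demonstration and illustrative hyperparameters satisfying σ1 ≥ σ0 e^{−k}.

```python
import numpy as np

xi = lambda t: np.sin(2 * np.pi * t)             # illustrative demonstration
sigma0, sigma1, k = 1e-3, 0.2, 4.0
sigma_r = np.sqrt(sigma1**2 - sigma0**2 * np.exp(-2 * k))   # Table 4

rng = np.random.default_rng(0)
n = 200_000
a0 = xi(0.0) + sigma0 * rng.standard_normal(n)   # Eq. 16
z0 = rng.standard_normal(n)                      # Eq. 15

t = 1.0                                          # evaluate the diffeomorphism (Eq. 17)
a1 = xi(t) + (a0 - xi(0.0)) * np.exp(-k * t) + sigma_r * t * z0
z1 = (1 - (1 - sigma1) * t) * z0 + t * xi(t)

print(a1.mean(), a1.std())   # ~ xi(1), sigma1
print(z1.mean(), z1.std())   # ~ xi(1), sigma1
```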
Intuitively, the variable a follows the shape of the action trajectory ξ(t), with an error that starts at a0 − ξ(0) and decreases by an exponential factor due to the stabilizing gain. However, it uses the sampled noise variable z0 ∼ N(0, I) to increase the standard deviation from σ0 around ξ(0) to σ1 around ξ(1). This is done in order to sample different modes of the trajectory distribution at test time. On the other hand, the latent variable z starts from the random sample z0 ∼ N(0, I) but continuously moves closer to the demonstration trajectory ξ(t), reducing its standard deviation from 1 to σ1. Since (a, z) at time t is a linear transformation of (a0, z0), the joint distribution of (a, z) at every timestep is a Gaussian given by:

Joint distribution of (a, z) at each timestep:

[a; z] = A [a0; z0] + b,  where A = [[e^{−kt}, σr t], [0, 1 − (1 − σ1) t]] and b = [ξ(t) − ξ(0) e^{−kt}; t ξ(t)]   (18)

p_ξ(a, z | t) = N(A μ0 + b, A Σ0 Aᵀ)   (19)
             = N( [ξ(t); t ξ(t)], [[Σ11, Σ12], [Σ12, Σ22]] ),  where   (20)
Σ11 = σ0² e^{−2kt} + σr² t²   (21)
Σ12 = σr t (1 − (1 − σ1) t)   (22)
Σ22 = (1 − (1 − σ1) t)²   (23)

Note that μ0 = [ξ(0); 0] and Σ0 = [[σ0², 0], [0, 1]]. Since the flow is a diffeomorphism, we can invert it and express (a0, z0) as a function of (a(t), z(t)):

Inverse of the flow diffeomorphism:
z0 = (z − t ξ(t)) / (1 − (1 − σ1) t)
a0 = ξ(0) + (a − ξ(t) − (σr t) z0) e^{kt}   (24)

At time t, the velocity of the trajectory starting from (a0, z0) can be obtained by differentiating the flow diffeomorphism in Eq. 17 with respect to t:

Velocity in terms of (a0, z0):
ȧ(t | ξ, a0, z0) = ξ̇(t) − k (a0 − ξ(0)) e^{−kt} + σr z0
ż(t | ξ, a0, z0) = ξ(t) + t ξ̇(t) − (1 − σ1) z0   (25)

The flow induces a velocity field at every (a, z, t). We obtain the conditional velocity field by first inverting the flow transformation as in Eq. 24, and then plugging the result into Eq. 25:

Conditional velocity field:
v^a_ξ(a, z, t) = ξ̇(t) − k (a − ξ(t)) + [σr (1 + kt) / (1 − (1 − σ1) t)] (z − t ξ(t))
v^z_ξ(a, z, t) = ξ(t) + t ξ̇(t) − [(1 − σ1) / (1 − (1 − σ1) t)] (z − t ξ(t))   (26)

Importantly, the evolution of a and z is inter-dependent, i.e., the sample z0 determines the evolution of a. Furthermore, the marginal probability distribution of a can be deduced from the joint distribution in Eq. 20 and is given by:

p_ξ(a | t) = N(a; ξ(t), σ0² e^{−2kt} + σr² t²)   (27)

In other words, a evolves in a Gaussian tube centered at the demonstration trajectory ξ(t), with a standard deviation that varies from σ0 at t = 0 to σ1 at t = 1. The fact that the marginal distribution lies close to the demonstration trajectories ensures, via Eq. 5, that the per-timestep marginal distributions over actions induced by the learned velocity field are close to the training distribution. However, this formulation allows us to select extremely small values of σ0, essentially starting deterministically from the last generated action a_prev. The stochasticity injected by sampling z0 ∼ N(0, I), together with the correlated evolution of a and z, ensures that we sample a diverse distribution of actions starting from the same action. This phenomenon is illustrated via a 1-D toy example in Figs. 5 and 6, with details in the captions.

Figure 6: The marginal velocity flow field v_θ(a, z, t | h) learned using the flow construction in Fig. 5. (a, b) The marginal distributions of the action a(t) and the latent variable z(t), respectively, at each time step under the learned velocity field. (c, d) The a- and z-projections, respectively, of trajectories sampled from the learned velocity field. By construction, a(0) deterministically starts from the most recently generated action, whereas z(0) is sampled from N(0, 1). Trajectories starting with z(0) < 0 are shown in blue, and those with z(0) > 0 are shown in red. The main takeaway is that in (c), even though all samples deterministically start from the same initial action (i.e., the most recently generated action), they evolve in a stochastic manner that covers both modes of the training distribution. This is possible because the stochastic latent variable z is correlated with a, and the initial random sample z(0) ∼ N(0, 1) informs the direction in which a evolves.
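As a consistency check on the derivation, Euler integration of the conditional velocity field in Eq. 26 should reproduce the closed-form flow of Eq. 17. The sketch below uses the same illustrative toy setup as above; all constants are assumptions for illustration.

```python
import numpy as np

xi = lambda t: np.sin(2 * np.pi * t)
xi_dot = lambda t: 2 * np.pi * np.cos(2 * np.pi * t)
sigma0, sigma1, k = 1e-3, 0.2, 4.0
sigma_r = np.sqrt(sigma1**2 - sigma0**2 * np.exp(-2 * k))

z0_sample = 0.7                          # one draw of z0 ~ N(0, 1)
a, z = xi(0.0), z0_sample                # deterministic start at a(0) = xi(0)
dt = 1e-4
for step in range(10_000):               # integrate t from 0 to 1
    t = step * dt
    z0_hat = (z - t * xi(t)) / (1 - (1 - sigma1) * t)            # Eq. 24
    va = xi_dot(t) - k * (a - xi(t)) + sigma_r * (1 + k * t) * z0_hat
    vz = xi(t) + t * xi_dot(t) - (1 - sigma1) * z0_hat           # Eq. 26
    a, z = a + va * dt, z + vz * dt

a_ref = xi(1.0) + sigma_r * z0_sample    # Eq. 17 at t = 1 with a0 = xi(0)
print(f"integrated a(1) = {a:.4f}, closed form = {a_ref:.4f}")
```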
C Action horizon

In Fig. 7, we analyze the effect of action chunk size on the performance of streaming flow policy in various benchmark environments: (1) RoboMimic: Can, (2) RoboMimic: Square, (3) Push-T with state input, and (4) Push-T with image input. The x-axis shows the chunk size on a log scale. The y-axis shows the relative decrease in performance compared to that of the best-performing chunk size. All scores are less than or equal to zero, where higher is better. In 3 of 4 environments, performance peaks at chunk size 8; in the remaining environment, it peaks at chunk size 6. Performance decreases as the chunk size deviates from the optimum. Our results match the findings of Chi et al. [1], suggesting that behavior cloning policies have a "sweet spot" in the chunk size of the action trajectories. We recommend choosing a larger chunk size (i.e., closer to open-loop execution) when the environment dynamics are deterministic and stable. Smaller chunk sizes should be used in stochastic environments with high uncertainty, where the policy may benefit from a tighter feedback loop.

Figure 7: Analysis of the effect of action chunk size on the performance of streaming flow policy in various benchmark environments. The x-axis shows the chunk size on a log scale. The y-axis shows the relative decrease in performance compared to that of the best-performing chunk size. All scores are less than or equal to zero, where higher is better. In 3 of 4 environments, performance peaks at chunk size 8, and the other environment peaks at chunk size 6. Performance decreases as the chunk size increases or decreases from the optimum.

D Push-T experiments with image inputs and action imitation

In this section, we perform experiments in the Push-T environment [1, 16] using images as observations and imitating actions instead of states (see Sec. 6 for a discussion of state imitation vs. action imitation). This setting was missing from Table 5 of the main paper. The conclusions are essentially the same as in the main paper: streaming flow policy performs nearly as well as the best-performing baseline, i.e., diffusion policy with 100 DDPM inference steps, while being significantly faster. It is also faster than the remaining baselines, while achieving a higher task success rate.

Push-T with image input, action imitation:

| Row | Method | Avg / Max score ↑ | Latency ↓ |
|---|---|---|---|
| 1 | DP [1]: 100 DDPM steps | 83.8% / 87.0% | 127.2 ms |
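As a concrete illustration of how actions are streamed chunk-by-chunk in the receding-horizon execution swept in App. C, the following is a schematic generation loop. The function and argument names are illustrative stand-ins, not the released implementation.

```python
def stream_chunk(policy_velocity, a_prev, h, actions_per_chunk=8, substeps=4):
    """Integrate the learned velocity field over flow time [0, 1], yielding
    each intermediate action as soon as it is ready, so the robot can execute
    it while the next action is being computed; no rolling buffer is needed.
    `policy_velocity(a, t, h)` is a hypothetical wrapper around v_theta."""
    a, n = a_prev, actions_per_chunk * substeps
    for i in range(n):
        a = a + policy_velocity(a, i / n, h) * (1.0 / n)   # Euler step
        if (i + 1) % substeps == 0:
            yield a                                        # stream to the robot
```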
A Provable Approach for End-to-End Safe Reinforcement Learning

Akifumi Wachi∗, Kohei Miyaguchi∗, Takumi Tanabe∗, Rei Sato∗, Youhei Akimoto†‡
∗LY Corporation  †University of Tsukuba  ‡RIKEN AIP
{akifumi.wachi, kmiyaguc, takumi.tanabe, sato.rei}@lycorp.co.jp, akimoto@cs.tsukuba.ac.jp

Abstract

A longstanding goal in safe reinforcement learning (RL) is a method to ensure the safety of a policy throughout the entire process, from learning to operation. However, existing safe RL paradigms inherently struggle to achieve this objective. We propose a method, called Provably Lifetime Safe RL (PLS), that integrates offline safe RL with safe policy deployment to address this challenge. Our proposed method learns a policy offline using return-conditioned supervised learning and then deploys the resulting policy while cautiously optimizing a limited set of parameters, known as target returns, using Gaussian processes (GPs). Theoretically, we justify the use of GPs by analyzing the mathematical relationship between target and actual returns. We then prove that PLS finds near-optimal target returns while guaranteeing safety with high probability. Empirically, we demonstrate that PLS outperforms baselines both in safety and reward performance, thereby achieving the longstanding goal of obtaining high rewards while ensuring the safety of a policy throughout its lifetime, from learning to operation.

1 Introduction

Reinforcement learning (RL) has exhibited remarkable capabilities in a wide range of real problems, including robotics [32], data center cooling [34], finance [23], and healthcare [59]. RL has attracted significant attention through its successful deployment in language models [21, 38] and diffusion models [7]. As RL becomes a core component of advanced AI systems that affect our daily lives, ensuring the safety of these systems has emerged as a critical concern. Hence, while harnessing the immense potential of RL, we must simultaneously address and mitigate safety concerns [4].

Safe RL [18, 20] is a fundamental and powerful paradigm for incorporating explicit safety considerations into RL. Given its wide range of promising real-world applications, safe RL naturally spans a broad scope and involves several critical considerations in its formulation. For example, design choices must be made regarding the desired level of safety (e.g., whether safety guarantees are required in expectation or with high probability), the phase in which safety constraints are enforced (e.g., post-convergence or even during training), and other related aspects [27, 54].

A longstanding goal in safe RL is to develop a methodology with a safety guarantee throughout the entire process, from learning to operation. However, existing safe RL paradigms inherently struggle to achieve this goal. In online safe RL, where an agent learns its policy while interacting with the environment, ensuring safety is especially challenging during the initial phases of policy learning. While safe exploration [50], sim-to-real safe RL [24], and end-to-end safe RL [11] have been actively studied, they typically rely on strong assumptions, such as (partially) known state transitions. Also, in offline safe RL, where a policy is learned from a pre-collected dataset, it remains difficult to deploy a safe policy in a real environment due to distribution mismatch between the offline data and the actual environment, even though training can proceed without incurring any immediate safety risks.
Figure 1: A conceptual illustration of PLS. After learning a return-conditioned policy using offline safe RL, PLS optimizes target returns through safe online policy evaluation via Gaussian processes. A key advantage of PLS is that safety is guaranteed, at least with high probability, in the entire process.
Our contributions. We propose Provably Lifetime Safe RL (PLS), an algorithm designed to address this longstanding goal in safe RL. PLS integrates offline policy learning with online policy evaluation and adaptation under a high-probability safety guarantee, as illustrated in Figure 1. Specifically, PLS begins by training a policy using an offline safe RL algorithm based on return-conditioned supervised learning (RCSL). Given the resulting return-conditioned policy, PLS then seeks to optimize a set of target returns by maximizing the reward return subject to a safety constraint during actual environmental interaction. Through rigorous analysis, we demonstrate that leveraging Gaussian processes (GPs) for this optimization is theoretically sound, which enables PLS to optimize target returns in a Bayesian optimization framework. We further prove that, with high probability, the resulting target returns are near-optimal while guaranteeing safety. Finally, empirical results demonstrate that 1) PLS outperforms baselines in both safety and task performance, and 2) PLS learns a policy that achieves high rewards while ensuring safety throughout the entire process, from learning to operation.

2 Related Work

Safe RL [18] is a promising approach to bridging the gap between RL and safety-critical decision-making problems. A constrained Markov decision process (CMDP, [3]) is a popular model for formulating a safe RL problem, in which an agent must maximize the expected cumulative reward while guaranteeing that the expected cumulative safety cost stays below a fixed threshold.

Online safe RL. Although safe RL in CMDP settings has been substantially investigated, most of the existing literature considers "online" settings, where the agent learns while interacting with the environment [54]. Prominent algorithms fall into this category, as represented by constrained policy optimization (CPO, [1]), Lagrangian-based actor-critic methods [6, 8], and primal-dual policy optimization [39, 57]. In online safe RL, satisfaction of safety constraints is not usually guaranteed during learning, and many unsafe actions may be executed before convergence. To mitigate this issue, researchers have investigated safe exploration [5, 50, 52], formal methods [2, 17], and end-to-end safe RL [11, 25]. These techniques, however, typically rely on strong assumptions (e.g., known state transitions), and the resulting, excessively conservative policies tend to yield unsatisfactory performance or become inapplicable to complex systems. Therefore, simultaneously achieving both reward performance and guaranteed safety within the online safe RL paradigm is inherently difficult.

Offline safe RL. Offline reinforcement learning [33, 40] trains an agent exclusively on a fixed dataset of previously collected experiences. Since the agent does not interact with the environment during training, no potentially unsafe actions are executed during learning. Extending this setup to incorporate explicit safety requirements has led to the area of offline safe RL [30, 31, 37, 42, 55]. In this context, the objective is to maximize expected
cumulative reward while satisfying pre-specified safety constraints, all from a static dataset. Because the policy is never deployed during training, offline safe RL is especially appealing for safety-critical domains. Le et al. [30] pioneered this direction with an algorithm that optimizes return under safety constraints using only offline data. Liu et al. [37] proposed a constrained decision transformer (CDT) that solves safe RL problems via sequence modeling, extending decision transformer [10] architectures from unconstrained to constrained RL settings. Despite such progress, offline safe RL still suffers from a central difficulty: learned policies often become either unsafe or overly conservative, largely due to the intrinsic challenges of off-policy evaluation (OPE) in stateful environments [15].

Versatile safe RL. Our PLS is also related to versatile safe RL, where an agent needs to handle a set of thresholds rather than a single predefined value. For example, in online safe RL settings, Yao et al. [58] propose a framework called constraint-conditioned policy optimization (CCPO) that consists of versatile value estimation, for approximating value functions under unseen threshold conditions, and conditioned variational inference, for encoding arbitrary constraint thresholds during policy optimization. Lin et al. [35] propose an algorithm to address offline safe RL problems with real-time budget constraints. Finally, Guo et al. [22] propose an algorithm called constraint-conditioned actor-critic (CCAC) that models the relations between state-action distributions and safety constraints, and then handles out-of-distribution data and adapts to varying constraint thresholds.

3 Problem Statement

We consider a sequential decision-making problem in a finite-horizon constrained Markov decision process (CMDP, [3]) defined as a tuple M := ⟨S, A, P, H, s1, r, g⟩, where S is a state space, A is an action space, and P : S × A → Δ(S) is the state transition probability, where Δ(X) denotes the probability simplex over the set X. For ease of notation, we define a transition kernel P_T : S × A → Δ(ℝ² × S) associated with ⟨P, r, g⟩. Additionally, H ∈ ℤ+ is the fixed finite length of each episode, s1 ∈ S is the initial state, and r : S × A → [0, 1] is the normalized reward function bounded in [0, 1]. While we assume that the initial state is fixed to s1, our key ideas can easily be extended to the case of an initial state distribution in Δ(S). A key difference from a standard (unconstrained) MDP lies in the (bounded) safety cost function g : S × A → [0, 1]. For succinct notation, we use s_t and a_t to denote the state and action at time t, and define ξ_t := (s_t, a_t, r_t, g_t) for all t ∈ [H], where r_t = r(s_t, a_t) and g_t = g(s_t, a_t). Episodes are defined as sequences of states, actions, rewards, and safety costs Ξ := {ξ_t}_{t=1}^H ∈ (S × A × ℝ²)^H, where s_{t+1} ∼ P(· | s_t, a_t) for all t ∈ [H]. The t-th context x_t of an episode refers to the partial history x_t := (ξ1, ξ2, ..., ξ_{t−1}, s_t) for 1 ≤ t ≤ H + 1, where we let s_{H+1} = ⊥ be a dummy state. Let X_t := (S × A × ℝ²)^{t−1} × S be the set of all t-th contexts, and let X := ∪_{t=1}^H X_t be the set of all contexts at time steps 1 ≤ t ≤ H. We consider a context-dependent policy π : X → Δ(A) to map a
context to an action distribution, subsequently identifying a joint probability distribution P_π on Ξ such that a_t ∼ π(x_t) and (r_t, g_t, s_{t+1}) ∼ P_T(s_t, a_t) for all t ∈ [H].¹ Given a trajectory τ = (ξ1, ξ2, ..., ξ_H), returns are given by R̂(τ) := Σ_{t=1}^H r(s_t, a_t) for reward and Ĝ(τ) := Σ_{t=1}^H g(s_t, a_t) for safety cost, respectively. We now define the following two metrics, respectively called the reward and safety cost returns, where the expectation is taken over trajectories τ induced by a policy π and the transition kernel P_T:

J_r(π) = E_{τ∼π,P_T}[ R̂(τ) ]  and  J_g(π) = E_{τ∼π,P_T}[ Ĝ(τ) ].

Dataset. We assume access to an offline dataset D := {Ξ^{(i)}}_{i=1}^n, where n ∈ ℤ+ is a positive integer. Let β : X → Δ(A) denote a behavior policy. The dataset D comprises n independent episodes generated by β; that is, D ∼ (P_β)^n. We also assume that, for any x_t ∈ X, the behavior action distribution β(x_t) is conditionally independent of the past rewards {r_h}_{h=1}^{t−1} and safety costs {g_h}_{h=1}^{t−1}, given the past states and actions x_t \ {r_h, g_h}_{h=1}^{t−1}.

Goal. We solve a versatile safe RL problem in the CMDP, where the safety threshold b is chosen within a set of candidate thresholds B := [0, H]. Specifically, our goal is to optimize a single policy π that maximizes J_r(π) while ensuring that J_g(π) is less than a threshold b ∈ B:

max_π J_r(π)  subject to  J_g(π) ≤ b,  ∀b ∈ B.   (1)

In contrast to standard safe RL problems, we additionally address two fundamental challenges. First, our goal is to learn, deploy, and operate a policy for solving (1) while guaranteeing safety throughout the entire safe RL process, from learning to operation, at least with high probability. Second, we aim to train a single policy that can adapt to diverse safety thresholds b ∈ B.

¹In this paper, we focus on context-dependent policies, a broader class than the state-dependent policies that dominate most prior RL work.

4 Preliminaries

4.1 Return-Conditioned Supervised Learning

Return-conditioned supervised learning (RCSL) is a methodology that learns the return-conditional distribution of actions in each state and then defines a policy by sampling from the action distribution conditioned on high returns. RCSL was first proposed in online RL settings [29, 43, 46] and was then extended to offline RL settings [10, 14]. In offline RL settings, RCSL aims to estimate the return-conditioned behavior (RCB) policy β_R(a | x) := P_β(a_t = a | x_t = x, R̂ = R); that is, the action distribution conditioned on the return R̂ = R ∈ [0, H] and the context x_t = x ∈ X. By Bayes' rule, the RCB policy β_R : X → Δ(A) can be written as the importance-weighted behavior policy

dβ_R(a | x) = [ f(R | x, a) / f(R | x) ] · dβ(a | x),   (2)

where f(R | x) := (d/dR) P_β(R̂ ≤ R | x_t = x) and f(R | x, a) := (d/dR) P_β(R̂ ≤ R | x_t = x, a_t = a) respectively denote the conditional probability density functions of the behavior return.²

4.2 Decision Transformer

Decision transformer (DT, [10]) is a representative instance of RCSL. In DT, trajectories are modeled as sequences of states, actions, and returns (i.e., reward-to-go). DT policies are typically learned using the GPT architecture [41] with a causal self-attention mask; thus, action sequences are generated in an autoregressive manner. The pre-training of DT can be seen as a regularized maximum likelihood estimate (MLE) of the neural network parameters:

θ̂ = argmin_{θ∈Θ} { −(1/nH) Σ_{i=1}^n Σ_{t=1}^H ln p_θ(a_t^{(i)} | x_t^{(i)}, R̂^{(i)}) + Φ(θ) },   (3)

where P := {p_θ(a | x, R)}_{θ∈Θ} is a parametric model of conditional
probability densities, and Φ(θ) ≥ 0 is a penalty term representing inductive biases in parameter optimization. The output of DT is then given by π_{θ̂,R}, where π_{θ,R} denotes the policy associated with p_θ(· | ·, R).

4.3 Constrained Decision Transformer

Constrained decision transformer (CDT, [37]) is a promising paradigm that extends DT to constrained reinforcement learning by conditioning the policy on both reward and safety-cost returns. Specifically, CDT parameterizes a policy to take states, actions, reward returns, and safety cost returns as input tokens, and then generates the same length of predicted actions as output. Although practical implementations often truncate the input to a fixed context length, we simplify the analysis by assuming that the entire history x_t is provided to the model.

In the inference phase, a user specifies a target reward return R and a target safety cost return G at the beginning of the episode, and the target returns for the next time step are iteratively updated as R_{t+1} = R_t − r_t and G_{t+1} = G_t − g_t, with R1 = R and G1 = G. Since the target returns play critical roles in the CDT framework, we explicitly add them to the notation of π to emphasize the dependence on the pair of target returns z := (R, G); that is, we write π_{θ̂,z}(a | x) and define Z to be the set of all feasible z. Crucially, since CDT is a variant of RCSL that extends DT to constrained RL settings, the mathematical discussion of RCSL also applies to CDT by replacing R with z, e.g., by defining f(z | x) in (2) or p_θ(· | ·, R, G) in (3).

²Strictly speaking, the right-hand side of (2) can be ill-defined for certain x ∈ X and a ∈ A if either f(R | x) or f(R | x, a) is ill-defined, or if f(R | x) = 0. For our analysis, however, it suffices to impose (2) on β_R only when the right-hand side is well-defined.

Figure 2: Relations between the target safety cost return G and the actual safety cost return J_g(π) of pretrained CDT policies (red lines) on (a) AntCircle, (b) HopperVelocity, and (c) DroneRun. Blue dotted lines represent y = x. Target reward returns are fixed to the reward returns of the best trajectories included in the offline dataset. Observe that CDT policies suffer from misalignment between actual and target returns: (a) constraint violation, (b) excessively conservative behavior, and (c) both.

Safety issues of CDT policies. Ideally, we want actual returns to align with target returns; that is, J_r(π_{θ̂,z}) ≈ R and J_g(π_{θ̂,z}) ≈ G for z = (R, G). This is why the target reward return R is typically set to the maximum return in the offline dataset, while the target safety cost return G is set to the safety threshold. Unfortunately, however, the actual returns are not necessarily aligned with the specified target returns. As evidence, Figure 2 shows the empirical relations between target returns and actual returns of CDT policies. Actual returns may differ from their corresponding target returns, and the differences vary depending on the task and the pre-trained CDT model.
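The target-return bookkeeping above is simple to implement. The following sketch shows one evaluation episode of a CDT-style policy; `policy` and the Gym-like environment interface are hypothetical stand-ins, and the episode totals are one-sample estimates of J_r and J_g.

```python
def rollout_cdt(env, policy, R: float, G: float, horizon: int):
    """One episode with return-to-go conditioning: R_1 = R, G_1 = G,
    then R_{t+1} = R_t - r_t and G_{t+1} = G_t - g_t at every step."""
    obs = env.reset()
    context, R_t, G_t = [], R, G
    total_r = total_g = 0.0
    for _ in range(horizon):
        action = policy(context, obs, R_t, G_t)   # condition on targets
        obs, r, g = env.step(action)
        context.append((obs, action, r, g))
        R_t, G_t = R_t - r, G_t - g               # target-return updates
        total_r, total_g = total_r + r, total_g + g
    return total_r, total_g                       # samples of J_r, J_g
```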
5 Theoretical Relations Between Target and Actual Returns

While Figure 2 shows discrepancies between the target and actual returns, there appear to be relations that can be captured from data. Our goal here is to theoretically understand when and how closely the CDT policy π_{θ̂,z} achieves the target returns z. Given the architecture and learning complexity of CDTs, such theoretical analyses are nearly impossible without any assumptions; hence, we first list the necessary assumptions.

Assumption 1 (Near-deterministic transition). Let q := (r, g) denote a pair of reward and safety cost. Also, let p_q(q′ | s, a) := (d/dq′) P_T{r ≤ r′, g ≤ g′ | s, a} be the corresponding density function. There exist deterministic maps q̂(·,·) and ŝ′(·,·), and small constants ε_q, ε_s, δ ≥ 0, such that p_q(q | s, a) ≤ ε_q for all ∥q − q̂(s, a)∥_∞ > δ, and P{s′ ≠ ŝ′(s, a) | s, a} ≤ ε_s for all s ∈ S and a ∈ A.

Assumption 1 is more general than that used in Brandfonbrener et al. [9], because 1) ours covers multi-objective settings, and 2) we consider a δ-neighborhood rather than exact equality (i.e., δ = 0). The second extension is especially beneficial, since it lets us analyze theoretical properties of CDT policies optimized with continuous rewards and safety costs, whereas Brandfonbrener et al. [9] effectively limit the scope of application to problems with discrete rewards. This is a significant extension, because safe RL problems typically require the agent to deal with safety constraints that have continuous safety cost functions and thresholds.

We then make assumptions about the conditional probability density function of the behavior return, i.e., f defined in (2). With a slight extension from R to z, we assume the following three conditions on f(z | x), with z fixed to a value of interest.

Assumption 2 (Initial coverage). η_z := f(z | s1) > 0.
Assumption 3 (Boundedness). C_z := sup_{x∈X} f(z | x) < ∞.
Assumption 4 (Continuity). c_z(δ) := sup_{z′: ∥z′−z∥_∞ ≤ 2δ, x∈X} |f(z′ | x) − f(z | x)| < ∞ is small.

Finally, we assume the expressiveness and regularity of the regularized model (P, Φ) in (3), to control the behavior of the MLE θ̂. The following assumptions are fairly standard and are borrowed from Van der Vaart [51]; for ease of understanding, we state them informally below. See Appendix C.3 for the formal presentations.

Assumption 5 (Soft realizability, informal). There exists θ* ∈ Θ such that β_R and π_{θ*,R} are close to each other in KL divergence and Φ(θ*) is small. See Assumption 14 for a formal version.
Assumption 6 (Regularity, informal). P and Φ are 'regular' enough for θ̂ to be asymptotically normal. See Assumption 15 for a formal version.

Finally, we present a theorem that characterizes the relation between target and actual returns.

Theorem 1 (Relation between target and actual returns). For any policy π, define J(π) := (J_r(π), J_g(π)). Let π_{θ̂,z} denote the policy obtained by the algorithm, characterized by a set of target returns z = (R, G). Recall that n is the number of trajectories contained in the offline dataset. Then, under Assumptions 1–6, we have

∥ J(π_{θ̂,z}) − z − (H²/√n) F(z) ∥_∞ ≤ ε(z) + o_P(1/√n),   (4)

where ε(z) is a small bias function and F : [0, H]² → ℝ² is a sample path of a Gaussian process GP(0, k), whose precise definitions are given in Theorems 4 and 7, respectively. Here, o_P(·) is the probabilistic small-o notation; i.e., b_n = o_P(a_n) implies lim_{n→∞} P{|b_n/a_n| > ε} = 0 for all ε > 0. See Appendix D for the formal statement and complete proof.
Intuitively, the difference between the target and actual returns is decomposed into an unbiased Gaussian process term H² F(z)/√n, a small bias term ε(z), and an asymptotically negligible term o_P(1/√n).

Remark 1 (Smoothness). Examining the explicit form of the covariance function k(·,·) reveals that F(·) is smooth (under suitable conditions). Specifically, the smoothness of F(·) is known to closely match that of k (Corollary 1 in [13]). For more details, see Remark 9.

6 Provably Lifetime Safe Reinforcement Learning

We finally present Provably Lifetime Safe Reinforcement Learning (PLS), a simple yet powerful approach that advances safe RL toward the longstanding goal of end-to-end safety. As illustrated in Figure 1, PLS begins with offline policy learning from a pre-collected dataset. Since RL agents are most prone to violating safety constraints during the early phases of learning, this offline learning step is particularly beneficial for ensuring lifetime safety. A key idea behind PLS is the use of a constrained RCSL method (e.g., CDT) for this offline policy learning step. This approach yields a return-conditioned policy that enables control over both reward and safety performance through a few significant parameters: in the case of a single safety constraint, all we have to do is optimize a two-dimensional target return vector. This method therefore offers several advantages, including computational efficiency and enhanced controllability of policy behavior. Hereinafter, we suppose there is a pre-trained policy obtained by constrained RCSL. For simplicity, we denote this return-conditioned policy by π_z, characterized by target reward and safety cost returns z = (R, G), while omitting the neural network parameters θ̂.

6.1 Characterizing Reward and Safety Cost Returns via Gaussian Processes

Guided by Theorem 1, we employ GPs to model the mapping from a target return vector z = (R, G) to the actual returns J(π_z) := (J_r(π_z), J_g(π_z)). We formulate this as a supervised learning problem with the dataset {(z_j, J(π_{z_j}))}_{j=1}^N, where z1, z2, ..., z_N ∈ Z is a sequence of target returns. For tractability, we discretize the search space, yielding a finite candidate set Z with cardinality |Z|. While collecting such data, we sequentially choose the next target returns z ∈ Z that maximize the actual reward return J_r(π_z) subject to the safety constraint (i.e., J_g(π_z) ≤ b). The measured returns are assumed to be perturbed by i.i.d. Gaussian noise for the sampled inputs Z_N := [z1, ..., z_N]ᵀ ⊆ Z. Thus, for ⋄ ∈ {r, g} (the symbol ⋄ is used as a wildcard), we model the noise-perturbed observations by y_{⋄,j} = J_⋄(π_{z_j}) + w_{⋄,j} with w_{⋄,j} ∼ N(0, ν_⋄²), for all j ∈ [N].

A GP is a stochastic process that is fully specified by a mean function and a kernel. We model the reward and safety cost returns with separate GPs:

J_r(π_z) ∼ GP(μ_r(z), k_r(z, z̃))  and  J_g(π_z) ∼ GP(μ_g(z), k_g(z, z̃)),

where μ_⋄(z) is a mean function and k_⋄(z, z̃) is a covariance function, for ⋄ ∈ {r, g}. In principle, J_r(π_z) and J_g(π_z) may be correlated (i.e., the off-diagonal elements of k in Theorem 1 may be non-zero), but we ignore these cross-correlations and learn each GP independently for simplicity. Then, given the previous inputs Z_N = [z1, ..., z_N]ᵀ and observations y_{⋄,N} := {y_{⋄,1}, ..., y_{⋄,N}}, we can analytically compute a GP posterior characterized by the mean and variance

μ_{⋄,N}(z) = k_{⋄,N}(z)ᵀ (K_{⋄,N} + ν_⋄² I_N)^{−1} y_{⋄,N},
σ²_{⋄,N}(z) = k_⋄(z, z) − k_{⋄,N}(z)ᵀ (K_{⋄,N} + ν_⋄² I_N)^{−1} k_{⋄,N}(z),

where k_{⋄,N}(z) = [k_⋄(z1, z), ..., k_⋄(z_N, z)]ᵀ, K_{⋄,N} is the positive definite kernel matrix [k_⋄(z, z̃)]_{z,z̃∈Z_N}, and I_N ∈ ℝ^{N×N} is the identity matrix.
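These posterior formulas are standard and can be computed in a few lines. The sketch below models one return function with an RBF kernel as an illustrative choice (the paper does not fix a kernel here), and the observed target-return pairs and measured costs are hypothetical numbers.

```python
import numpy as np

def rbf(z1, z2, lengthscale=10.0):
    """Squared-exponential kernel between two sets of 2-D target returns."""
    d = np.linalg.norm(z1[:, None, :] - z2[None, :, :], axis=-1)
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(Z_obs, y_obs, Z_query, noise_var=0.01):
    """Posterior mean and variance at Z_query, as in Sec. 6.1."""
    K = rbf(Z_obs, Z_obs) + noise_var * np.eye(len(Z_obs))
    k_star = rbf(Z_obs, Z_query)                       # shape (N, M)
    mu = k_star.T @ np.linalg.solve(K, y_obs)          # posterior mean
    v = np.linalg.solve(K, k_star)
    var = rbf(Z_query, Z_query).diagonal() - np.sum(k_star * v, axis=0)
    return mu, var

# Example: model J_g over target returns z = (R, G) from three observations.
Z_obs = np.array([[300.0, 10.0], [300.0, 20.0], [350.0, 20.0]])
y_obs = np.array([8.0, 17.0, 24.0])        # measured safety cost returns
mu, var = gp_posterior(Z_obs, y_obs, np.array([[325.0, 15.0]]))
print(mu, np.sqrt(np.maximum(var, 0.0)))
```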
|
https://arxiv.org/abs/2505.21852v1
|
=k⋄(z,z)−k⋄,N(z)⊤(K⋄,N+ν2 ⋄IN)−1k⋄,N(z), where k⋄,N(z) = [ k⋄(z1,z), . . . , k ⋄(zN,z)]⊤andK⋄,Nis the positive definite kernel matrix [k⋄(z,˜z)]z,˜z∈ZN, andIN∈RN×Nis the identify matrix. Finally, we assume that Jg(πz)is L-Lipschitz continuous with respect to some distance metric d(·,·)inZ. This assumption is rather mild and is automatically satisfied by many commonly-used kernels [45, 48]. 6.2 Safe Exploration and Optimization of Target Returns Our current goal is to find the optimal pair of target returns z= (R, G)that maximizes Jr(πz) while guaranteeing the satisfaction of the safety constraint (i.e., Jg(πz)≤b) according to GP-based inferences. For this purpose, we optimistically sample the next target returns zwhile pessimistically ensuring the satisfaction of the safety constraint, as conducted in Sui et al. [49]. A key advantage of using GPs is that we can estimate the uncertainty of the actual returns JrandJg. To guarantee, high probability, both constraint satisfaction and reward maximiza- tion, for each function ⋄ ∈ { r, g}, we construct a confidence interval defined as Ω⋄,N(z):= [µ⋄,N−1(z)±α⋄,N·σ⋄,N−1(z) ], where α⋄,N∈R+is a positive scalar that balances exploration and exploitation. These coefficients αrandαgare crucial in the performance of PLS, and principled choices for these coefficients have been extensively studied in the Bayesian optimization literature (e.g., [12, 45]). Thus, following Srinivas et al. [45], we define αr,j=αg,j=q 2 log |Z|j2π2/(6∆) , (5) where ∆∈[0,1]is the allowed failure probability. Note that πin (5) is the circle ratio, not a policy. To expand the set of feasible target returns zwhile satisfying the safety constraint, we use alternative confidence intervals ΛN(z):= ΛN−1(z)∩Ωg,N(z)withΛ0(z) = [0 , b]so that ΛNare sequentially contained in ΛN−1for all N. We thus define an upper bound uN(z):= max Λ N(z)and a lower bound of ℓN(z):= min Λ N(z), respectively. Note that uNis monotonically non-increasing and ℓN is monotonically non-decreasing, with respect to N. Safe exploration. Using the GP upper confidence bound, we construct the set of safetarget returns byYN=S z∈YN−1 z′∈ Z | uN(z) +L·d(z,z′)≤b . At each iteration, PLS computes a set of zthat are likely to increase the number of candidates for safe target returns. The agent thus picks z with the highest uncertainty while satisfying the safety constraint with high probability; that is, zN= argmax z∈EN uN(z)−ℓN(z) with EN={z∈ YN:eN(z)>0}, (6) where eN(z):= z′∈ Z \ Y N|ℓN(z)−L·d(z,z′)≤b . Intuitively, eN(·)optimistically quantifies the potential enlargement of the current safe set after obtaining a new sample z. Reward maximization. Safe exploration is terminated under the condition maxz∈EN uN(z)− ℓN(z) ≤ζ, where ζ∈R+is a tolerance parameter. After fully exploring the set of safe target returns, we turn to maximizing Jr(·)under the safety constraint. Concretely, we choose the next target returns optimistically within the pessimistically constructed set of safe target returns by zN= argmax z∈YN µr,N(z) +αr,N·σr,N(z) . (7) 6.3 Theoretical Guarantees on Safety and Near-optimality We provide theoretical results on the overall properties of PLS. We will make an assumption and then present two theorems on safety and near-optimality. The assumption below is fairly mild in practice, because we can easily ensure that the return-conditioned policy meets the safety constraint by conservatively choosing small target returns,
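For concreteness, the following NumPy sketch implements the posterior update and the two selection rules above over a small discretized candidate set. All helper names are ours, and this is an illustrative simplification rather than the paper's implementation: the expander set $\mathcal E_N$ is approximated by the current safe set, and the monotone interval intersection $\Lambda_N$ is omitted for brevity.

```python
import numpy as np

def rbf(A, B, ls=5.0, var=1.0):
    """RBF kernel matrix between row-stacked target-return vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_posterior(Z_cand, Z_obs, y, noise_var, ls, var):
    """Posterior mean/std on candidates, matching the formulas above."""
    K = rbf(Z_obs, Z_obs, ls, var) + noise_var * np.eye(len(Z_obs))
    k_star = rbf(Z_obs, Z_cand, ls, var)          # (N, |Z|)
    sol = np.linalg.solve(K, k_star)
    mu = sol.T @ y
    v = var - (k_star * sol).sum(axis=0)          # k(z, z) = var for the RBF kernel
    return mu, np.sqrt(np.maximum(v, 1e-12))

def alpha(j, n_candidates, delta):
    """Eq. (5): alpha_{r,j} = alpha_{g,j}."""
    return np.sqrt(2.0 * np.log(n_candidates * j**2 * np.pi**2 / (6.0 * delta)))

def select_next(Z, dist, safe, mu_r, s_r, mu_g, s_g, a, L, b, explore=True):
    """One selection step; `dist` is the |Z| x |Z| metric d(z, z')."""
    u_g, l_g = mu_g + a * s_g, mu_g - a * s_g
    # Expand the safe set Y_N via the Lipschitz argument on the upper bound.
    safe = safe | (u_g[safe][:, None] + L * dist[safe] <= b).any(axis=0)
    score = (u_g - l_g) if explore else (mu_r + a * s_r)   # Eq. (6) / Eq. (7)
    return int(np.argmax(np.where(safe, score, -np.inf))), safe
```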
6.3 Theoretical Guarantees on Safety and Near-optimality

We provide theoretical results on the overall properties of PLS. We will make an assumption and then present two theorems, on safety and near-optimality, respectively. The assumption below is fairly mild in practice, because we can easily ensure that the return-conditioned policy meets the safety constraint by conservatively choosing small target returns $R$ and $G$. See Appendix I for the full proofs.

Assumption 7 (Initial safe set). There exists a singleton seed set $\mathcal Z_0$ that is known to satisfy the safety constraint; that is, $J_g(\pi_z) \le b$ holds for all $z \in \mathcal Z_0$.

Theorem 2 (Safety guarantee). At every iteration $j$, suppose that $\alpha_{g,j}$ is set as in (5) and the target returns $z_j$ are chosen within $\mathcal Y_j$. Then $J_g(\pi_{z_j}) \le b$ holds (i.e., the safety constraint is satisfied) for all $j \ge 0$, with probability at least $1 - \Delta$.

Intuitively, because PLS samples the next target returns $z$ so that the GP upper bound $u(z)$ is smaller than the threshold $b$, the true value $J_g(\pi_z)$ is guaranteed to be smaller than $b$ with high probability under proper assumptions. Moreover, since PLS learns the return-conditioned policy offline, Theorem 2 leads to an end-to-end safety guarantee, ensuring with high probability that the constraint is satisfied from learning to operation.

Theorem 3 (Near-optimality). Set $\alpha_{r,j}$ as in (5) for all $j \ge 0$. Let $z^\star$ denote the optimal feasible target returns. For any $\mathcal E \ge 0$, define $N^\sharp$ as the smallest positive integer $N$ satisfying
$$4 \sqrt{ C_\nu\, \xi_{r,N}\, N^{-1} \log\!\left( |\mathcal Z| \pi^2 N^2 / (6\Delta) \right) } \le \mathcal E, \quad \text{where } C_\nu := 1 / \log(1 + \nu_r^{-2}).$$
Then, PLS finds a near-optimal $z$ such that $J_r(\pi_z) \ge J_r(\pi_{z^\star}) - \mathcal E$, with probability at least $1 - \Delta$, after collecting $N^\sharp$ GP observations for reward maximization.

Theorem 3 characterizes the online sample complexity of PLS. Following the analysis of Sui et al. [48], we can show that the safe exploration phase expands the estimated safe set until it contains the optimal target return vector $z^\star$ after at most $N^\dagger \in \mathbb Z_+$ GP iterations. Consequently, Theorem 3 implies that PLS finds a near-optimal target return vector $z$ using at most $\varpi(N^\dagger + N^\sharp)$ trajectories, where $\varpi \in \mathbb Z_+$ is the number of trajectories used for sample approximations of $J_r$ and $J_g$ at each GP update. Because PLS optimizes only the two-dimensional target return vector (i.e., $R$ and $G$), it requires far fewer online interactions than conventional online safe RL algorithms, which is an essential advantage in safety-critical settings where every interaction is costly or risky.

7 Experiments

We conduct empirical experiments evaluating PLS on multiple continuous robot locomotion tasks designed for safe RL. We adopt the Bullet-Safety-Gym [19] and Safety-Gymnasium [26] benchmarks and implement PLS and the baseline algorithms using the OSRL and DSRL libraries [36]. Experimental details are deferred to Appendix J.

Metrics. Our evaluation metrics are the reward return and safety cost return, respectively normalized as
$$\hat R_{\text{normalized}}(\pi) := \frac{\hat R(\pi) - R^\dagger_{\min,b}}{R^\dagger_{\max,b} - R^\dagger_{\min,b}} \quad \text{and} \quad \hat G_{\text{normalized}}(\pi) := \frac{\hat G(\pi)}{b}.$$
Recall that $\hat R(\pi)$ and $\hat G(\pi)$ are defined as the evaluated cumulative reward and safety cost obtained by a policy $\pi$. In the above definitions, $R^\dagger_{\max,b}$ and $R^\dagger_{\min,b}$ are the maximum and minimum cumulative rewards of the trajectories in the offline dataset $\mathcal D$. Note that we call a policy safe if $\hat G_{\text{normalized}}(\pi) \le 1$.
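As a small illustration of these metrics (the helper name is ours), assuming an array of episode returns from the offline dataset:

```python
import numpy as np

def normalize_returns(R_hat, G_hat, dataset_returns, b):
    """Normalized reward/cost metrics; a policy counts as safe if G_norm <= 1."""
    R_min, R_max = dataset_returns.min(), dataset_returns.max()
    R_norm = (R_hat - R_min) / (R_max - R_min)
    G_norm = G_hat / b
    return R_norm, G_norm
```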
Baselines. We compare PLS against the following six baseline algorithms: BCQ-Lag, BEAR-Lag, CPQ, COptiDICE, CDT, and CCAC. BCQ-Lag and BEAR-Lag are Lagrangian-based methods that apply the PID-Lagrangian approach [47] to BCQ [16] and BEAR [28], respectively. CPQ [56] is an offline safe RL algorithm that regards out-of-distribution actions as unsafe and learns the reward critic using only safe state-action pairs. COptiDICE [31], a member of the DIstribution Correction Estimation (DICE) family, is specifically designed for offline safe RL and directly estimates the stationary distribution correction of the optimal policy in terms of reward returns under safety constraints. CDT [37] is a DT-based algorithm that learns a policy conditioned on the target returns, as discussed in the preliminaries in Section 2. Finally, CCAC [22] is a recently proposed offline safe RL algorithm that models the relationship between state-action distributions and safety constraints, and then leverages this relationship to regularize critic and policy learning. We use offline safe RL algorithms as baselines because standard online approaches often violate safety constraints during training and optimize objectives that diverge from ours. Although some safe exploration algorithms share similar goals, they rely on strong assumptions, such as known and deterministic transition dynamics [50] or access to an emergency reset policy [44, 53], that do not hold in our experimental setting.

Table 1: Experimental results with the safety cost threshold $b = 20$. The mean and standard deviation over 5 runs for each algorithm are shown. Reward and cost are normalized. Bold: safe agents whose normalized cost is smaller than 1. Red: unsafe agents. Blue: safe agent with the highest reward.

Task | Metric | BCQ-Lag | BEAR-Lag | CPQ | COptiDICE | CDT | CCAC | PLS
Ant-Run | Reward ↑ | 0.79±0.05 | 0.07±0.02 | 0.01±0.01 | 0.63±0.01 | 0.72±0.05 | 0.02±0.00 | 0.78±0.06
Ant-Run | Safety cost ↓ | 5.52±0.67 | 0.12±0.13 | 0.00±0.00 | 0.79±0.42 | 0.90±0.12 | 0.00±0.00 | 0.77±0.10
Ant-Circle | Reward ↑ | 0.59±0.18 | 0.58±0.24 | 0.00±0.00 | 0.16±0.13 | 0.47±0.00 | 0.62±0.13 | 0.41±0.01
Ant-Circle | Safety cost ↓ | 2.28±1.50 | 3.37±1.71 | 0.00±0.00 | 2.98±3.55 | 2.23±0.00 | 1.24±0.55 | 0.77±0.05
Car-Circle | Reward ↑ | 0.65±0.19 | 0.76±0.12 | 0.70±0.03 | 0.48±0.04 | 0.73±0.01 | 0.72±0.03 | 0.72±0.01
Car-Circle | Safety cost ↓ | 2.17±1.10 | 2.74±0.89 | 0.01±0.07 | 1.85±1.48 | 0.98±0.12 | 0.87±0.29 | 0.88±0.09
Drone-Run | Reward ↑ | 0.65±0.11 | -0.03±0.02 | 0.19±0.01 | 0.69±0.03 | 0.57±0.00 | 0.82±0.05 | 0.59±0.00
Drone-Run | Safety cost ↓ | 3.91±2.02 | 0.00±0.00 | 0.00±0.00 | 3.48±0.19 | 0.34±0.29 | 7.62±0.37 | 0.50±0.44
Drone-Circle | Reward ↑ | 0.69±0.05 | 0.82±0.06 | -0.26±0.01 | 0.22±0.10 | 0.60±0.00 | 0.37±0.14 | 0.59±0.00
Drone-Circle | Safety cost ↓ | 1.92±0.64 | 3.58±0.74 | 0.14±0.39 | 0.68±0.46 | 1.12±0.06 | 0.74±0.24 | 0.90±0.08
Ant-Velocity | Reward ↑ | 1.00±0.01 | -1.01±0.00 | -1.01±0.00 | 1.00±0.01 | 0.97±0.00 | 0.68±0.34 | 0.98±0.00
Ant-Velocity | Safety cost ↓ | 3.22±0.60 | 0.00±0.00 | 0.00±0.00 | 6.60±1.07 | 0.36±0.22 | 0.60±0.21 | 0.82±0.19
Walker2d-Velocity | Reward ↑ | 0.78±0.00 | 0.89±0.04 | -0.02±0.03 | 0.13±0.01 | 0.80±0.00 | 0.81±0.07 | 0.79±0.00
Walker2d-Velocity | Safety cost ↓ | 0.44±0.32 | 7.60±2.89 | 0.00±0.00 | 1.75±0.31 | 0.01±0.04 | 6.37±0.95 | 0.00±0.00
HalfCheetah-Velocity | Reward ↑ | 1.03±0.03 | 0.98±0.03 | 0.22±0.33 | 0.63±0.01 | 0.96±0.03 | 0.84±0.01 | 0.99±0.00
HalfCheetah-Velocity | Safety cost ↓ | 27.00±8.76 | 12.35±8.63 | 0.28±0.23 | 0.00±0.00 | 0.03±0.13 | 1.36±0.19 | 0.15±0.19
Hopper-Velocity | Reward ↑ | 0.85±0.22 | 0.36±0.11 | 0.20±0.00 | 0.14±0.10 | 0.68±0.06 | 0.17±0.09 | 0.83±0.01
Hopper-Velocity | Safety cost ↓ | 8.48±2.75 | 10.39±3.79 | 3.06±0.07 | 0.34±0.42 | 0.12±0.26 | 1.79±1.52 | 0.42±0.10

Implementation of PLS. We use CDT [37] for offline policy learning as the constrained RCSL algorithm. The neural network configurations and hyperparameters for PLS are the same as for the CDT baseline. The key difference lies in how the target returns are determined. In the baseline CDT, as a typical choice, we set the target reward return to the maximum reward return in the dataset and the target safety cost return to the threshold. In contrast, PLS employs GPs with radial basis function kernels to optimize the target returns for maximizing the reward under the safety constraint.
Main results. Table 1 summarizes our experimental results under a safety cost threshold of $b = 20$. Additional results, including Table 6 for $b = 40$, are provided in Appendix J. Notably, PLS is the only method that satisfies the safety constraint in every task. In contrast, every baseline algorithm violates the safety constraint in at least one task, which implies that a policy violating constraints could persist in unsafe behavior in an actual environment. Moreover, PLS achieves the highest reward return in most tasks, which demonstrates its superior overall performance in terms of reward and safety. In summary, while baseline methods suffer from either safety constraint violations or poor reward returns, PLS consistently delivers a balanced performance.

Computational cost. Although GPs are known to be computationally expensive, PLS only needs to optimize target returns in two dimensions, $z = (R, G)$. Because the amount of training data for the GPs is fairly small until convergence (see also Figure 3 in Appendix J), their computational overhead is not problematic. Consequently, the main source of computational cost in PLS stems from offline policy learning. Since PLS can adapt to multiple thresholds using a single policy by appropriately choosing target returns, it typically incurs lower overall computational cost than baseline algorithms (e.g., CPQ, COptiDICE) that require training a separate policy for each threshold.

Safe exploration. As shown in Figure 3 in Appendix J, PLS successfully ensures safety not only after convergence but also while exploring target returns, which is consistent with Theorem 2. In some cases, however, maintaining safety beyond the initial deployment can still pose a challenge in practice. Because our guarantee is probabilistic and constructing accurate GP models is not always feasible, a small number of unsafe deployments may occur.

8 Conclusion

We propose PLS as a solution to a longstanding goal in safe RL: achieving end-to-end safety from learning to operation. PLS consists of two key components: (1) offline policy learning via RCSL and (2) safe deployment that carefully optimizes the target returns on which the pre-trained policy is conditioned. The relationship between target and actual returns is modeled using GPs, an approach justified by our theoretical analyses. We also provide theoretical guarantees on safety and near-optimality, and we empirically demonstrate the effectiveness of PLS on safe RL benchmark tasks.

Limitations. Although PLS guarantees near-optimal target returns, as established in Theorem 3, this does not directly translate into achieving a near-optimal policy. Developing a method that ensures both a near-optimal policy and end-to-end safety remains an open and ambitious research direction.

References

[1] J. Achiam, D. Held, A. Tamar, and P. Abbeel. Constrained policy optimization. In International Conference on Machine Learning (ICML), pages 22–31, 2017.
[2] M. Alshiekh, R. Bloem, R. Ehlers, B. Könighofer, S. Niekum, and U. Topcu. Safe reinforcement learning via shielding. In AAAI Conference on Artificial Intelligence (AAAI), 2018.
[3] E. Altman. Constrained Markov Decision Processes, volume 7. CRC Press, 1999.
[4] D. Amodei, C. Olah, J. Steinhardt,
P. Christiano, J. Schulman, and D. Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
[5] F. Berkenkamp, M. Turchetta, A. Schoellig, and A. Krause. Safe model-based reinforcement learning with stability guarantees. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
[6] S. Bhatnagar and K. Lakshmanan. An online actor–critic algorithm with function approximation for constrained Markov decision processes. Journal of Optimization Theory and Applications, 153(3):688–708, 2012.
[7] K. Black, M. Janner, Y. Du, I. Kostrikov, and S. Levine. Training diffusion models with reinforcement learning. In International Conference on Learning Representations (ICLR), 2024.
[8] V. S. Borkar. An actor-critic algorithm for constrained Markov decision processes. Systems & Control Letters, 54(3):207–213, 2005.
[9] D. Brandfonbrener, A. Bietti, J. Buckman, R. Laroche, and J. Bruna. When does return-conditioned supervised learning work for offline reinforcement learning? Advances in Neural Information Processing Systems (NeurIPS), 35:1542–1553, 2022.
[10] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. Advances in Neural Information Processing Systems (NeurIPS), 34:15084–15097, 2021.
[11] R. Cheng, G. Orosz, R. M. Murray, and J. W. Burdick. End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks. In AAAI Conference on Artificial Intelligence (AAAI), volume 33, pages 3387–3395, 2019.
[12] S. R. Chowdhury and A. Gopalan. On kernelized multi-armed bandits. In International Conference on Machine Learning (ICML), pages 844–853, 2017.
[13] N. Da Costa, M. Pförtner, L. Da Costa, and P. Hennig. Sample path regularity of Gaussian processes from the covariance kernel. arXiv preprint arXiv:2312.14886, 2023.
[14] S. Emmons, B. Eysenbach, I. Kostrikov, and S. Levine. RvS: What is essential for offline RL via supervised learning? In International Conference on Learning Representations (ICLR), 2021.
[15] J. Fu, M. Norouzi, O. Nachum, G. Tucker, A. Novikov, M. Yang, M. R. Zhang, Y. Chen, A. Kumar, C. Paduraru, et al. Benchmarks for deep off-policy evaluation. In International Conference on Learning Representations (ICLR), 2021.
[16] S. Fujimoto, D. Meger, and D. Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning (ICML), pages 2052–2062, 2019.
[17] N. Fulton and A. Platzer. Safe reinforcement learning via formal methods: Toward safe control through proof and learning. In AAAI Conference on Artificial Intelligence (AAAI), 2018.
[18] J. García and F. Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research (JMLR), 16(1):1437–1480, 2015.
[19] S. Gronauer. Bullet-Safety-Gym: A framework for constrained reinforcement learning. Technical report, mediaTUM, 2022.
[20] S. Gu, L. Yang, Y. Du, G. Chen, F. Walter, J. Wang, and A. Knoll. A review of safe reinforcement learning: Methods, theory and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
[21] D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[22] Z. Guo, W. Zhou, S.
Wang, and W. Li. Constraint-conditioned actor-critic for offline safe reinforcement learning. In International Conference on Learning Representations (ICLR), 2025.
[23] B. Hambly, R. Xu, and H. Yang. Recent advances in reinforcement learning in finance. Mathematical Finance, 33(3):437–503, 2023.
[24] K.-C. Hsu, A. Z. Ren, D. P. Nguyen, A. Majumdar, and J. F. Fisac. Sim-to-lab-to-real: Safe reinforcement learning with shielding and generalization guarantees. Artificial Intelligence, 314:103811, 2023.
[25] N. Hunt, N. Fulton, S. Magliacane, T. N. Hoang, S. Das, and A. Solar-Lezama. Verifiably safe exploration for end-to-end reinforcement learning. In Proceedings of the 24th International Conference on Hybrid Systems: Computation and Control, pages 1–11, 2021.
[26] J. Ji, B. Zhang, J. Zhou, X. Pan, W. Huang, R. Sun, Y. Geng, Y. Zhong, J. Dai, and Y. Yang. Safety Gymnasium: A unified safe reinforcement learning benchmark. In Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023.
[27] H. Krasowski, J. Thumm, M. Müller, L. Schäfer, X. Wang, and M. Althoff. Provably safe reinforcement learning: Conceptual analysis, survey, and benchmarking. arXiv preprint arXiv:2205.06750, 2022.
[28] A. Kumar, J. Fu, M. Soh, G. Tucker, and S. Levine. Stabilizing off-policy Q-learning via bootstrapping error reduction. Advances in Neural Information Processing Systems (NeurIPS), 32, 2019.
[29] A. Kumar, X. B. Peng, and S. Levine. Reward-conditioned policies. arXiv preprint arXiv:1912.13465, 2019.
[30] H. Le, C. Voloshin, and Y. Yue. Batch policy learning under constraints. In International Conference on Machine Learning (ICML), pages 3703–3712, 2019.
[31] J. Lee, C. Paduraru, D. J. Mankowitz, N. Heess, D. Precup, K.-E. Kim, and A. Guez. COptiDICE: Offline constrained reinforcement learning via stationary distribution correction estimation. In International Conference on Learning Representations (ICLR), 2021.
[32] S. Levine, C. Finn, T. Darrell, et al. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research (JMLR), 17(1):1334–1373, 2016.
[33] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
[34] Y. Li, Y. Wen, D. Tao, and K. Guan. Transforming cooling optimization for green data center via deep reinforcement learning. IEEE Transactions on Cybernetics, 50(5):2002–2013, 2019.
[35] Q. Lin, B. Tang, Z. Wu, C. Yu, S. Mao, Q. Xie, X. Wang, and D. Wang. Safe offline reinforcement learning with real-time budget constraints. In International Conference on Machine Learning (ICML), pages 21127–21152. PMLR, 2023.
[36] Z. Liu, Z. Guo, H. Lin, Y. Yao, J. Zhu, Z. Cen, H. Hu, W. Yu, T. Zhang, J. Tan, et al. Datasets and benchmarks for offline safe reinforcement learning. arXiv preprint arXiv:2306.09303, 2023.
[37] Z. Liu, Z. Guo, Y. Yao, Z. Cen, W. Yu, T. Zhang, and D. Zhao. Constrained decision transformer for offline safe reinforcement learning. In International Conference on Machine Learning (ICML), 2023.
[38] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, et al. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems (NeurIPS),
2022.
[39] S. Paternain, M. Calvo-Fullana, L. F. Chamon, and A. Ribeiro. Safe policies for reinforcement learning via primal-dual methods. arXiv preprint arXiv:1911.09101, 2019.
[40] R. F. Prudencio, M. R. Maximo, and E. L. Colombini. A survey on offline reinforcement learning: Taxonomy, review, and open problems. IEEE Transactions on Neural Networks and Learning Systems, 2023.
[41] A. Radford. Improving language understanding by generative pre-training. OpenAI, 2018.
[42] H. Satija, P. S. Thomas, J. Pineau, and R. Laroche. Multi-objective SPIBB: Seldonian offline policy improvement with safety constraints in finite MDPs. In Advances in Neural Information Processing Systems (NeurIPS), volume 34, 2021.
[43] J. Schmidhuber. Reinforcement learning upside down: Don't predict rewards – just map them to actions. arXiv preprint arXiv:1912.02875, 2019.
[44] A. Sootla, A. Cowen-Rivers, J. Wang, and H. Bou Ammar. Enhancing safe exploration using safety state augmentation. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[45] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In International Conference on Machine Learning (ICML), 2010.
[46] R. K. Srivastava, P. Shyam, F. Mutz, W. Jaśkowski, and J. Schmidhuber. Training agents using upside-down reinforcement learning. arXiv preprint arXiv:1912.02877, 2019.
[47] A. Stooke, J. Achiam, and P. Abbeel. Responsive safety in reinforcement learning by PID Lagrangian methods. In International Conference on Machine Learning (ICML), 2020.
[48] Y. Sui, A. Gotovos, J. W. Burdick, and A. Krause. Safe exploration for optimization with Gaussian processes. In International Conference on Machine Learning (ICML), 2015.
[49] Y. Sui, V. Zhuang, J. W. Burdick, and Y. Yue. Stagewise safe Bayesian optimization with Gaussian processes. In International Conference on Machine Learning (ICML), 2018.
[50] M. Turchetta, F. Berkenkamp, and A. Krause. Safe exploration in finite Markov decision processes with Gaussian processes. In Advances in Neural Information Processing Systems (NeurIPS), 2016.
[51] A. W. Van der Vaart. Asymptotic Statistics, volume 3. Cambridge University Press, 2000.
[52] A. Wachi and Y. Sui. Safe reinforcement learning in constrained Markov decision processes. In International Conference on Machine Learning (ICML), 2020.
[53] A. Wachi, W. Hashimoto, X. Shen, and K. Hashimoto. Safe exploration in reinforcement learning: A generalized formulation and algorithms. In Advances in Neural Information Processing Systems (NeurIPS), 2023.
[54] A. Wachi, X. Shen, and Y. Sui. A survey of constraint formulations in safe reinforcement learning. In International Joint Conference on Artificial Intelligence (IJCAI), pages 8262–8271, 2024.
[55] R. Wu, Y. Zhang, Z. Yang, and Z. Wang. Offline constrained multi-objective reinforcement learning via pessimistic dual value iteration. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
[56] H. Xu, X. Zhan, and X. Zhu. Constraints penalized Q-learning for safe offline reinforcement learning. In AAAI Conference on Artificial Intelligence (AAAI), 2022.
[57] L. Yang and M. Wang. Reinforcement learning in feature space: Matrix bandit, kernels, and regret bound. In International Conference on Machine Learning (ICML), 2020.
[58] Y. Yao, Z. Liu, Z. Cen, J. Zhu, W.
Yu, T. Zhang, and D. Zhao. Constraint-conditioned policy optimization for versatile safe reinforcement learning. Advances in Neural Information Processing Systems (NeurIPS), 36:12555–12568, 2023.
[59] C. Yu, J. Liu, S. Nemati, and G. Yin. Reinforcement learning in healthcare: A survey. ACM Computing Surveys (CSUR), 55(1):1–36, 2021.

Appendix

A Broader Impacts

We believe that our proposed approach, PLS, plays a significant role in enhancing the benefits associated with reinforcement learning while concurrently working to minimize potential negative side effects. However, it must be acknowledged that any reinforcement learning algorithm, regardless of its design or intended purpose, is intrinsically susceptible to abuse, and we must remain cognizant of the fact that the fundamental concept underlying PLS can be manipulated or misused in ways that might ultimately render reinforcement learning systems less safe.

B Pseudo Code of PLS

For completeness, we present the pseudo code of PLS.

Algorithm 1 Provably Lifetime Safe Reinforcement Learning (PLS)
1: Input: Pre-collected dataset $\mathcal D$, safety threshold $b$, safe singleton set $\mathcal Z_0$, Lipschitz constant $L$
2:
3: // Offline policy learning (safe with probability 1)
4: Train a return-conditioned policy $\pi_z$ from $\mathcal D$ via constrained RCSL
5:
6: // Safe exploration (safe with high probability)
7: Initialize $\mathcal Y_0$ with $\mathcal Z_0$
8: for $N = 1, \ldots, N^\dagger$ do
9:     $\mathcal Y_N \leftarrow \bigcup_{z \in \mathcal Y_{N-1}} \{ z' \in \mathcal Z \mid u_{g,N}(z) + L \cdot d(z, z') \le b \}$
10:    $e_N(z) \leftarrow \left| \{ z' \in \mathcal Z \setminus \mathcal Y_N \mid \ell_{g,N}(z) - L \cdot d(z, z') \le b \} \right|$
11:    $\mathcal E_N \leftarrow \{ z \in \mathcal Y_N : e_N(z) > 0 \}$
12:    $z_N \leftarrow \arg\max_{z \in \mathcal E_N} \left( u_{\diamond,N}(z) - \ell_{\diamond,N}(z) \right)$
13:    Update GPs using the reward and safety cost observations $J_r(\pi_{z_N})$ and $J_g(\pi_{z_N})$
14: end for
15:
16: // Reward maximization (safe with high probability)
17: for $N = N^\dagger + 1, \ldots, N^\dagger + N^\sharp$ do
18:    $\mathcal Y_N \leftarrow \bigcup_{z \in \mathcal Y_{N-1}} \{ z' \in \mathcal Z \mid u_{g,N}(z) + L \cdot d(z, z') \le b \}$
19:    $z_N \leftarrow \arg\max_{z \in \mathcal Y_N} u_{r,N}(z)$
20:    Update GPs using the reward and safety cost observations $J_r(\pi_{z_N})$ and $J_g(\pi_{z_N})$
21: end for
22:
23: // Operation (safe with high probability)
24: while true do
25:    Continue to use $z_N$ as target returns for long-term operation
26: end while
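The following Python sketch mirrors the overall loop of Algorithm 1, reusing the `alpha` and `select_next` helpers from the sketch in Section 6.2; `fit_gps`, `policy_for`, and `evaluate` are hypothetical helpers standing in for GP refitting, the pre-trained return-conditioned policy, and rollout-based return estimation. This is an illustrative simplification (the expander set is approximated by the safe set), not the paper's released code.

```python
import numpy as np

def run_pls(Z, dist, safe0, b, L, N_dagger, N_sharp, delta, policy_for, evaluate):
    """Sketch of Algorithm 1 after offline RCSL training.
    Z: (|Z|, 2) candidate target returns; dist: (|Z|, |Z|) metric d(z, z');
    safe0: boolean mask for the seed set Z_0."""
    obs, safe, z_idx = [], safe0.copy(), None
    for N in range(1, N_dagger + N_sharp + 1):
        mu_r, s_r, mu_g, s_g = fit_gps(obs, Z)      # refit both GPs (hypothetical)
        a = alpha(N, len(Z), delta)                 # confidence scaling, Eq. (5)
        z_idx, safe = select_next(Z, dist, safe, mu_r, s_r, mu_g, s_g,
                                  a, L, b, explore=(N <= N_dagger))
        obs.append((Z[z_idx], *evaluate(policy_for(Z[z_idx]))))
    return Z[z_idx]  # target returns kept fixed during long-term operation
```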
C Preliminaries of Theoretical Analyses

As a more general formulation of the problem, we define a multi-objective MDP characterized by $m$ reward functions, where $m$ is an arbitrary positive integer. The theoretical analyses in the main paper are the special case $m = 2$ of those we present in the following.

C.1 Multi-objective Reinforcement Learning

Episodes are sequences of states, actions, and rewards $\Xi := \{(s_t, a_t, \mathbf r_t)\}_{t=1}^H \in (\mathcal S \times \mathcal A \times \mathbb R^m)^H$, where $H \ge 0$ is a time horizon and $m \ge 1$ is the number of reward dimensions. The $t$-th context $x_t$ of an episode refers to the partial history
$$x_t := (s_1, a_1, \mathbf r_1, \ldots, s_{t-1}, a_{t-1}, \mathbf r_{t-1}, s_t) \qquad (8)$$
for $1 \le t \le H + 1$, where we let $s_{H+1} = \bot$ be a dummy state. Let $\mathcal X_t := (\mathcal S \times \mathcal A \times \mathbb R^m)^{t-1} \times \mathcal S$ be the set of all $t$-th contexts and $\mathcal X := \bigcup_{t=1}^H \mathcal X_t$ be the set of all contexts at steps $1 \le t \le H$. With a fixed initial state $s_1$ and a transition kernel $P_T : \mathcal S \times \mathcal A \to \Delta(\mathbb R^m \times \mathcal S)$, we consider the Markov decision process (MDP) $M = (\mathcal S, \mathcal A, H, s_1, P_T)$.³ Under $M$, every (context-dependent) policy $\pi : \mathcal X \to \Delta(\mathcal A)$ identifies a probability distribution $P^\pi$ on $\Xi$ such that $a_t \sim \pi(x_t)$ and $(\mathbf r_t, s_{t+1}) \sim P_T(s_t, a_t)$ for all $t \ge 1$.

Assumption 8 (Bounded reward). For any policy $\pi$, we have $P^\pi$-almost surely $0 \le r_{t,j} \le 1$ for $1 \le t \le H$ and $1 \le j \le m$.

Assumption 9 (Near-deterministic transition). There exist deterministic maps $\hat{\mathbf r}(\cdot,\cdot)$, $\hat s'(\cdot,\cdot)$ and small constants $\epsilon_r, \epsilon_s, \delta \ge 0$ such that, if $(\mathbf r, s') \sim P_T(s, a)$,
1. the reward density $p_r(\mathbf r' \mid s, a) := \frac{d}{d\mathbf r'} P_T\{\mathbf r \le \mathbf r' \mid s, a\}$⁴ is well-defined and bounded by $\epsilon_r$ outside the $\delta$-neighborhood of $\hat{\mathbf r}(s, a)$, i.e., $\sup_{\mathbf r : \|\mathbf r - \hat{\mathbf r}(s, a)\|_\infty > \delta} p_r(\mathbf r \mid s, a) \le \epsilon_r$, and
2. the successor state $s'$ coincides with $\hat s'(s, a)$ with probability at least $1 - \epsilon_s$,
for all $s \in \mathcal S$ and $a \in \mathcal A$.

Let $\beta : \mathcal X \to \Delta(\mathcal A)$ be a behavior policy and $\mathcal D := \{\Xi^{(i)}\}_{i=1}^n \sim (P^\beta)^n$ be a collection of $n$ i.i.d. copies of episodes generated by $\beta$.

Assumption 10 (Reward-independent behavior). The behavior action distribution $\beta(x_t)$, $x_t \in \mathcal X$, is conditionally independent of the past rewards $\{\mathbf r_h\}_{h=1}^{t-1}$ given the past states and actions $x_t \setminus \{\mathbf r_h\}_{h=1}^{t-1}$.

Let $J(\pi)$ denote the multi-dimensional policy value of $\pi$,
$$J(\pi) = (J_1(\pi), \ldots, J_m(\pi)) := E^\pi[\hat{\mathbf R}] \in \mathbb R^m, \qquad (9)$$
where $\hat{\mathbf R} := \sum_{t=1}^H \mathbf r_t$ denotes the return of an episode and the superscript $\pi$ of $E^\pi$ signifies the dependency on $P^\pi$. The aforementioned setting leads to constrained RL problems where a policy aims to maximize one dimension of the policy value, $J_1(\pi)$, as much as possible while controlling the other dimensions to satisfy constraints $J_k(\pi) \le b_k$ with certain thresholds $b_k \in \mathbb R$, for $2 \le k \le m$. More specifically, $r_1$ and $r_2$ respectively correspond to $r$ and $g$ in the main paper.

C.2 Return-conditioned supervised learning

Return-conditioned supervised learning (RCSL) is a methodology of offline reinforcement learning that aims at estimating the return-conditioned behavior (RCB) policy $\beta_R(a \mid x) := P^\beta(a_t = a \mid x_t = x, \hat{\mathbf R} = R)$, the action distribution conditioned on the return $\hat{\mathbf R} = R \in [0, H]^m$ as well as the context $x_t = x \in \mathcal X$. By Bayes' rule, the RCB policy $\beta_R : \mathcal X \to \Delta(\mathcal A)$ is written as the importance-weighted behavior policy
$$d\beta_R(a \mid x) = \frac{f(R \mid x, a)}{f(R \mid x)}\, d\beta(a \mid x), \qquad (10)$$
where $f(R \mid x) := \frac{d}{dR} P^\beta(\hat{\mathbf R} \le R \mid x_t = x)$ and $f(R \mid x, a) := \frac{d}{dR} P^\beta(\hat{\mathbf R} \le R \mid x_t = x, a_t = a)$ respectively denote the conditional probability density functions of the behavior return.⁵

³ Our analysis can be easily extended to a stochastic initial state $s_1$.
⁴ We abuse the notation $\mathbf r \le \mathbf r'$ for $\mathbf r, \mathbf r' \in \mathbb R^m$ to denote the component-wise inequality, i.e., $r_j \le r'_j$ for all $1 \le j \le m$.
⁵ Strictly speaking, the RHS of (10) may be ill-defined for some $x \in \mathcal X$ and $a \in \mathcal A$ if either $f(R \mid x)$ or $f(R \mid x, a)$ is ill-defined, or if $f(R \mid x) = 0$. However, it is sufficient for our analysis to impose (10) on $\beta_R$ only when the RHS is well-defined.

Return-based importance weighting (10) favors the actions that led to the target return $R$ over those that did not. Hence, intuitively, it is expected that $\beta_R$ achieves
$$J(\beta_R) \approx R. \qquad (11)$$
This is indeed the case under suitable assumptions. Thus we can solve multi-objective reinforcement learning with RCSL by setting $R$ to a desired value. We assume the following conditions on $f(R \mid x)$, with $R$ fixed to a value of interest.

Assumption 11 (Initial coverage). $\eta_R := f(R \mid s_1) > 0$.

Assumption 12 (Boundedness). $C_R := \sup_{x \in \mathcal X} f(R \mid x) < \infty$.

Assumption 13 (Continuity). $c_R(\delta) := \sup_{R' : \|R' - R\|_\infty \le 2\delta,\ x \in \mathcal X} |f(R' \mid x) - f(R \mid x)| < \infty$ is small.

C.3 Decision transformers

The decision transformer (DT) is an implementation of RCSL. More specifically, it can be seen as a regularized maximum likelihood estimation (MLE) method
$$\hat\theta = \arg\min_{\theta \in \Theta} \left\{ -\frac{1}{nH} \sum_{i=1}^n \sum_{t=1}^H \ln p_\theta\!\left(a^{(i)}_t \,\middle|\, x^{(i)}_t, \hat{\mathbf R}^{(i)}\right) + \Phi(\theta) \right\}, \qquad (12)$$
where $\mathcal P := \{p_\theta(a \mid x, R)\}_{\theta \in \Theta}$ is a parametric model of conditional probability densities, typically constructed with the transformer architecture, and $\Phi(\theta) \ge 0$ is a penalty term representing inductive biases, both explicit and implicit, in the procedure of parameter optimization. Here, $a^{(i)}_t$, $x^{(i)}_t$ and $\hat{\mathbf R}^{(i)}$ are the $t$-th action, the $t$-th context, and the return of the $i$-th episode $\Xi^{(i)} \in \mathcal D$, respectively. The output of the decision transformer is then given by $\pi_{\hat\theta,R}$, where $\pi_{\theta,R}$ denotes the policy associated with $p_\theta(\cdot \mid \cdot, R)$.
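As a concrete reading of (12), here is a minimal PyTorch-style training objective. The model interface (returning an action distribution), the batch layout, and the simple L2 penalty standing in for $\Phi(\theta)$ are illustrative assumptions, not the paper's code.

```python
import torch

def dt_loss(model, batch, phi_coef=1e-4):
    """Regularized MLE objective (12): average negative log-likelihood of
    actions given (context, episode return), plus a penalty term.
    `model(contexts, returns)` is assumed to return a torch distribution."""
    contexts, returns, actions = batch           # x_t^{(i)}, R^{(i)}, a_t^{(i)}
    dist = model(contexts, returns)              # p_theta(. | x, R)
    nll = -dist.log_prob(actions).mean()         # approximates (1/nH) sum -ln p
    phi = phi_coef * sum(p.pow(2).sum() for p in model.parameters())
    return nll + phi
```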
Note that while the original DT is for a single-dimensional reward function, we present (12) in a form extended to multi-dimensional settings.

We now introduce some notation and conditions on the probabilistic model $\mathcal P$ and the penalty $\Phi$. Let us define the regularized risk of $\theta$ relative to $\beta_R$ by
$$\mathcal R_\Phi(\theta) := \underbrace{E^\beta_{t \sim \mathrm{Unif}[H]}\!\left[ D_{\mathrm{KL}}\!\left( \beta_{\hat{\mathbf R}}(x_t) \,\middle\|\, \pi_{\theta,\hat{\mathbf R}}(x_t) \right) \right]}_{\text{dissimilarity of } \beta_R \text{ and } \pi_{\theta,R} \text{ in expectation}} + \Phi(\theta), \qquad (13)$$
where $D_{\mathrm{KL}}(\cdot \| \cdot)$ denotes the Kullback–Leibler divergence.

Assumption 14 (Soft realizability). $\epsilon_{\mathcal P,\Phi} := \min_{\theta \in \Theta} \mathcal R_\Phi(\theta) < \infty$ is small.

Remark 2. Assumption 14 is a relaxation of a standard realizability condition. That is, we have $\epsilon_{\mathcal P,\Phi} = 0$ if $\beta_R$ is realizable in $\mathcal P$ without penalty, i.e., there exists $\theta_0 \in \Theta$ such that $\pi_{\theta_0,R} = \beta_R$ and $\Phi(\theta_0) = 0$.

Assumption 15 (Regularity). The following conditions are met.
i) $\Theta$ is a compact subset of $\mathbb R^d$, $d \ge 1$.
ii) $\mathcal R_\Phi(\theta)$ admits a unique minimizer $\theta^*$ in the interior set $\Theta^\circ$.
iii) $\mathcal R_\Phi(\theta)$ is twice differentiable at $\theta^*$ with Hessian $I_{\theta^*} := \nabla^2_\theta \mathcal R_\Phi(\theta^*) \succ 0$.
iv) The one-sample stochastic gradient $\psi_\theta(a \mid x, R) := \nabla_\theta \{ -\ln p_\theta(a \mid x, R) + \Phi(\theta) \}$ is locally bounded in expectation, i.e.,
$$E^\beta_{t \sim \mathrm{Unif}[H]}\!\left[ \sup_{\theta \in \Theta_b} \left\| \psi_\theta(a_t \mid x_t, \hat{\mathbf R}) \right\|_2^2 \right] < \infty \qquad (14)$$
for every sufficiently small ball $\Theta_b$ in $\Theta$.
v) $\hat\theta \in \Theta^\circ$ almost surely.

Remark 3. At first glance, ii) the unique existence of $\theta^*$ and iii) the positive definiteness of the Hessian seem restrictive for over-parametrized models, including transformers. However, we note that these conditions may be enforced by adding a tiny, strongly convex penalty to $\Phi(\theta)$.

Remark 4. Similarly, v) $\hat\theta \in \Theta^\circ$ can also be enforced by adding a barrier function such as $\Phi(\theta) = K \phi^2_{\mathrm{hinge}}(\mathrm{dist}(\theta, \mathbb R^d \setminus \Theta)/h)$, where $h > 0$ and $K < \infty$ are respectively suitably small and large constants, $\mathrm{dist}(\theta, E) := \inf_{\theta' \in E} \|\theta - \theta'\|_2$, and $\phi_{\mathrm{hinge}}(t) := \max\{0, 1 - t\}$.

D Error Analysis

Our goal here is to understand when and how closely the output of the decision transformer, $\pi_{\hat\theta,R}$, achieves the target return $R$. The following theorem summarizes our theoretical results, answering the above question.

Theorem 4. Under the assumptions of Theorems 5 to 7, we have
$$\left\| J(\pi_{\hat\theta,R}) - R - \frac{H^2}{\sqrt n} F(R) \right\|_\infty \le \varepsilon(R) + o_P\!\left(\frac{1}{\sqrt n}\right), \qquad (15)$$
where $F : [0, H]^m \to \mathbb R^m$ is a sample path of a Gaussian process with mean zero and
$$\varepsilon(R) := \frac{2 \bar C_R (H^2 \epsilon + \delta) + H^2 c_R(\delta)}{\eta_R} + H^2 \sqrt{\frac{\epsilon_{\mathcal P,\Phi}}{2}}$$
is a small bias function, where $\bar C_R = \max\{C_R, 1\}$ and $\epsilon = \epsilon_r + \epsilon_s$. Here, $o_P(\cdot)$ is the probabilistic small-o notation, i.e., $b_n = o_P(a_n)$ signifies $\lim_{n\to\infty} P\{|b_n / a_n| > \epsilon\} = 0$ for all $\epsilon > 0$.

Remark 5. Theorem 1 in the main paper is the special case $m = 2$ of Theorem 4, presented in a slightly informal manner.

To derive Theorem 4, we consider the bias–variance decomposition
$$J(\pi_{\hat\theta,R}) - R = \underbrace{J(\beta_R) - R}_{\text{bias of RCSL}} + \underbrace{J(\pi_{\theta^*,R}) - J(\beta_R)}_{\text{bias of MLE}} + \underbrace{J(\pi_{\hat\theta,R}) - J(\pi_{\theta^*,R})}_{\text{variance of MLE}} \qquad (16)$$
and evaluate each term on the RHS with Theorems 5 to 7, respectively, in Appendices D.1 and D.2.

D.1 Bias of RCSL

The following theorem gives an upper bound on the first bias term, showing that it is negligible under suitable conditions, such as the near-determinism of the transition and the regularity of the return density. The proof is deferred to Appendix E.

Theorem 5. Suppose Assumptions 8 to 13 hold. Then,
$$\| J(\beta_R) - R \|_\infty \le \frac{2 \bar C_R \left( H^2 \epsilon + \delta \right) + H^2 c_R(\delta)}{\eta_R}, \qquad (17)$$
where $\epsilon := \epsilon_r + \epsilon_s$ and $\bar C_R := \max\{C_R, 1\}$.

A few remarks follow in order. First, we compare our result to the previous one.

Remark 6. Theorem 5 can be considered a complementary extension of the previous result [9]. In particular, our result is applicable when the return density
$f(R \mid s_1)$ is bounded away from $0$ and $\infty$, while Theorem 1 of [9] is not. Conversely, Theorem 1 of [9] is applicable when there is a nonzero probability of exactly $\hat{\mathbf R} = R$, while our result is not, since then $f(R \mid s_1) = \infty$.

Remark 7. Our result also extends Theorem 1 of Brandfonbrener et al. [9] in allowing the transition kernel $P_T$ to include small additive noises in the reward, i.e., $\delta > 0$.

Below is a generalization of (17) that is useful to understand what constitutes the upper bound.

Remark 8. Taking a closer look at the proof of Theorem 5, we can conclude
$$\| J(\beta_R) - R \|_\infty \le \frac{2 \bar C_R \left( H^2 \epsilon + \delta_H \right) + \sum_{t=1}^{H-1} H c_R(\delta_t)}{\eta_R}, \qquad (18)$$
where $\delta_t$ is the additive noise tolerance specific to the $t$-th transition. In other words, the contribution of these additive errors to the bias of RCSL depends largely on whether they occur in the terminal step ($t = H$) or not.

If we have Assumption 9 with $\delta = 0$, Assumption 13 is automatically satisfied with $c_R(0) = 0$ and Assumption 10 is unnecessary, resulting in the following rather simplified corollary.

Corollary 1. Suppose Assumptions 8, 9, 11 and 12 hold with $\delta = 0$. Then,
$$\| J(\beta_R) - R \|_\infty \le \frac{2 \bar C_R H^2 \epsilon}{\eta_R}. \qquad (19)$$

Besides, Assumption 12 can be replaced with a stronger variant of Assumption 13.

Corollary 2. Suppose Assumptions 8 to 11 hold. Also assume the Hölder continuity of $f(\cdot \mid x)$:
$$|f(R' \mid x) - f(R \mid x)| \le K \|R' - R\|_\infty^\omega, \qquad R, R' \in [0, H],\ x \in \mathcal X. \qquad (20)$$
Then,
$$\| J(\beta_R) - R \|_\infty \le \frac{2(K + 1)}{\eta_R} \left( H^2 (\epsilon + \delta^\omega) + \delta \right). \qquad (21)$$

Proof. It directly follows from $C_R \le K + 1$ and $c_R(\delta) \le K (2\delta)^\omega \le 2K\delta^\omega$. See Lemma 3 for the argument bounding $C_R$.

D.2 Bias and variance of MLE

The following theorem shows that the bias of MLE in (16) is negligible if a mild realizability condition is met. The proof is deferred to Appendix F.

Theorem 6. Suppose Assumption 14 holds. Then,
$$\| J(\pi_{\theta^*,R}) - J(\beta_R) \|_\infty \le H^2 \sqrt{\frac{\epsilon_{\mathcal P,\Phi}}{2}}. \qquad (22)$$

Moreover, the following theorem characterizes the asymptotic distribution of the variance of MLE in (16). The proofs are deferred to Appendix G. Let us introduce the gradient covariance matrix
$$V_\theta := E^\beta_{t \sim \mathrm{Unif}[H]}\!\left[ \psi_\theta(a_t \mid x_t, \hat{\mathbf R})\, \psi_\theta(a_t \mid x_t, \hat{\mathbf R})^\top \right] \in \mathbb R^{d \times d} \qquad (23)$$
and the normalized policy Jacobian
$$U_\theta(R) := \frac{1}{H}\, E^{\pi_{\theta,R}}_{t \sim \mathrm{Unif}[H]}\!\left[ Q^{\pi_{\theta,R}}(x_t, a_t)\, \nabla_\theta \ln p_\theta(a_t \mid x_t, R)^\top \right] \in \mathbb R^{m \times d}, \qquad (24)$$
where $Q^\pi(x, a) := E^\pi[\hat{\mathbf R} \mid x_t = x, a_t = a] \in \mathbb R^m$ is the $m$-dimensional action value function.

Theorem 7. Suppose Assumption 15 holds. Then, we have
$$\frac{\sqrt n}{H^2} \left[ J_j(\pi_{\hat\theta,R}) - J_j(\pi_{\theta^*,R}) \right]_{j \in [m],\, R \in [0,H]^m} \rightsquigarrow \mathrm{GP}(0, k) \qquad (25)$$
in the limit of $n \to \infty$, where $k : [0, H]^m \times [0, H]^m \to \mathbb R^{m \times m}$ is the covariance function given by
$$k(R, R') := U_{\theta^*}(R)\, I_{\theta^*}^{-1} V_{\theta^*} I_{\theta^*}^{-1}\, U_{\theta^*}(R')^\top. \qquad (26)$$

Remark 9. The differentiability of sample paths of the limit process $F(\cdot) \sim \mathrm{GP}(0, k)$ is known to be (roughly) the same as the differentiability of the covariance function $k(\cdot,\cdot)$ (Corollary 1 in [13]), which, according to (26), is governed by that of $U_{\theta^*}(\cdot)$. In other words, $F(\cdot)$ is smooth if $U_{\theta^*}(\cdot)$ is smooth. With a straightforward calculation, one can further see that $U_{\theta^*}(\cdot)$ is smooth if, under some mild regularity conditions, the probabilistic model $\mathcal P$ is smooth in the sense that the associated policy $\pi_{\theta^*,R}$ and the gradient $\nabla_\theta \ln p_\theta(a_t \mid x_t, R)|_{\theta = \theta^*}$ are smooth as functions of the target return $R$.
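Given estimates of $U_{\theta^*}$, $I_{\theta^*}$ and $V_{\theta^*}$ (in practice, sample averages of the gradients in (23)-(24)), evaluating the covariance function (26) is a direct matrix computation. A minimal sketch, with the function name ours:

```python
import numpy as np

def covariance_k(U_R, U_Rp, I_star, V_star):
    """Evaluate k(R, R') = U(R) I^{-1} V I^{-1} U(R')^T from Eq. (26).
    U_R, U_Rp: (m, d) policy Jacobians; I_star, V_star: (d, d) matrices."""
    I_inv = np.linalg.inv(I_star)
    return U_R @ I_inv @ V_star @ I_inv @ U_Rp.T  # (m, m) covariance block
```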
E Proof of Theorem 5

Consider the weighted error function given by
$$\phi(x_t) := f(R \mid x_t)\, \| V(x_t) - \hat V(x_t) \|_\infty, \qquad (27)$$
where $V(x_t) := E^{\beta_R}\!\left[\sum_{h=t}^H \mathbf r_h \,\middle|\, x_t\right]$ is the value function of $\beta_R$ and $\hat V(x_t) := R - \sum_{h=1}^{t-1} \mathbf r_h$ is the target value function. It suffices for the proof of Theorem 5 to establish a suitable bound on $\phi(x_1)$ since, by Assumption 11,
$$\| J(\beta_R) - R \|_\infty = \frac{\phi(x_1)}{f(R \mid x_1)} = \frac{\phi(x_1)}{\eta_R}. \qquad (28)$$
To this end, we will make use of $\hat P_T : \mathcal S \times \mathcal A \to \Delta(\mathbb R^m \times \mathcal S)$, the near-deterministic component of $P_T$, such that
$$d\hat P_T(\mathbf r, s' \mid s, a) = \frac{\mathbb I\{(\mathbf r, s') \in \hat T(s, a)\}}{P_T(\hat T(s, a) \mid s, a)}\, dP_T(\mathbf r, s' \mid s, a), \qquad (29)$$
where $\hat T(s, a) = B_\infty(\hat{\mathbf r}(s, a), \delta) \times \{\hat s'(s, a)\} \subset \mathbb R^m \times \mathcal S$ is the image of the near-deterministic transition and $B_\infty(\mathbf r, \delta) := \{\mathbf r' \in \mathbb R^m : \|\mathbf r' - \mathbf r\|_\infty \le \delta\}$ is the $\ell_\infty$-ball centered at $\mathbf r$ with radius $\delta$. Let also $\hat P, \hat E, \hat P^\pi, \hat E^\pi$ be probability distributions and expectation operators identical to $P, E, P^\pi, E^\pi$, respectively, except that the transition kernel $P_T$ is replaced with $\hat P_T$ under the hood.

Now, for $1 \le t \le H - 1$, we can bound $\phi(x_t)$ in terms of $\phi(x_{t+1})$.

Lemma 1. Suppose Assumptions 8 to 10, 12 and 13 hold. Then, for all $x_t \in \mathcal X_t$ with $1 \le t \le H - 1$, we have
$$\phi(x_t) \le \hat E^\beta[\phi(x_{t+1}) \mid x_t] + H c_R(\delta) + 2\epsilon H C_R. \qquad (30)$$

Proof. Let $\hat f(R \mid x_t, a_t) := \hat E[f(R \mid x_{t+1}) \mid x_t, a_t]$. Note that $f(R' \mid x_t, a_t) = E[f(R' \mid x_{t+1}) \mid x_t, a_t]$ is well-defined for all $x_t \in \mathcal X_t$ and $a_t \in \mathcal A$ by Assumptions 12 and 13. The claim then follows from the chain
$$\begin{aligned}
\phi(x_t) &= f(R \mid x_t)\, \| V(x_t) - \hat V(x_t) \|_\infty \\
&\overset{(a)}{\le} f(R \mid x_t) \int \| V(x_{t+1}) - \hat V(x_{t+1}) \|_\infty\, dP^{\beta_R}(a_t, \mathbf r_t, s_{t+1} \mid x_t) \\
&\overset{(b)}{\le} f(R \mid x_t) \int \| V(x_{t+1}) - \hat V(x_{t+1}) \|_\infty\, d\hat P^{\beta_R}(a_t, \mathbf r_t, s_{t+1} \mid x_t) + \epsilon H C_R \\
&\overset{(c)}{=} \int \| V(x_{t+1}) - \hat V(x_{t+1}) \|_\infty\, f(R \mid x_t, a_t)\, d\hat P^\beta(a_t, \mathbf r_t, s_{t+1} \mid x_t) + \epsilon H C_R \\
&\overset{(d)}{\le} \int \| V(x_{t+1}) - \hat V(x_{t+1}) \|_\infty\, \hat f(R \mid x_t, a_t)\, d\hat P^\beta(a_t, \mathbf r_t, s_{t+1} \mid x_t) + 2\epsilon H C_R \\
&\overset{(e)}{\le} \int \| V(x_{t+1}) - \hat V(x_{t+1}) \|_\infty\, f(R \mid x_{t+1})\, d\hat P^\beta(a_t, \mathbf r_t, s_{t+1} \mid x_t) + H c_R(\delta) + 2\epsilon H C_R \\
&= \int \phi(x_{t+1})\, d\hat P^\beta(a_t, \mathbf r_t, s_{t+1} \mid x_t) + H c_R(\delta) + 2\epsilon H C_R,
\end{aligned}$$
where (a) is shown by Jensen's inequality with $V(x_t) - \hat V(x_t) = E^{\beta_R}[V(x_{t+1}) - \hat V(x_{t+1}) \mid x_t]$; (b) by Assumption 8 implying $\|V(x) - \hat V(x)\|_\infty \le H$, Assumption 12 and Lemma 4; (c) by (10); (d) by Assumption 12 and evaluating $\hat f(R \mid x_t, a_t) - f(R \mid x_t, a_t) = \int f(R \mid x_{t+1})\, d\{\hat P_T - P_T\}(\mathbf r_t, s_{t+1} \mid s_t, a_t)$ with Lemma 4; and (e) by Lemma 5.

Finally, the proof of Theorem 5 is concluded by dealing with the boundary term $\phi(x_H)$.

Lemma 2. Suppose Assumptions 8 to 10 and 13 hold. For all $x_H \in \mathcal X_H$, we have
$$\phi(x_H) \le 2\epsilon H \bar C_R + 2\delta C_R. \qquad (31)$$

Proof. Similarly to the proof of Lemma 1, we have
$$\phi(x_H) \le \int \| V(x_{H+1}) - \hat V(x_{H+1}) \|_\infty\, f(R \mid x_H)\, d\hat P^{\beta_R}(a_H, \mathbf r_H \mid x_H) + \epsilon H C_R.$$
We evaluate the RHS above by separating the domain of integration into two parts: i) where $a_H \in \mathcal A_{\mathrm{dtm}} := \{a \in \mathcal A : \|\hat{\mathbf r}(s_H, a_H) - \hat V(x_H)\|_\infty \le \delta\}$ and ii) where $a_H \notin \mathcal A_{\mathrm{dtm}}$. For case i), we have
$$\| V(x_{H+1}) - \hat V(x_{H+1}) \|_\infty \le \| \mathbf r_H - \hat{\mathbf r}(s_H, a_H) \|_\infty + \| \hat{\mathbf r}(s_H, a_H) - \hat V(x_H) \|_\infty \le 2\delta,$$
and therefore, by Assumption 12, the integral restricted to $\mathcal A_{\mathrm{dtm}}$ is bounded by $2\delta C_R$. For case ii), note that $f(R \mid x_H, a_H) = p_r(\hat V(x_H) \mid s_H, a_H)$ is well-defined by Assumption 9 with $\|\hat V(x_H) - \hat{\mathbf r}(s_H, a_H)\|_\infty > \delta$. Thus, we have
$$\begin{aligned}
&\int_{a_H \notin \mathcal A_{\mathrm{dtm}}} \| V(x_{H+1}) - \hat V(x_{H+1}) \|_\infty\, f(R \mid x_H)\, d\hat P^{\beta_R}(a_H, \mathbf r_H \mid x_H) \\
&\overset{(a)}{=} \int_{a_H \notin \mathcal A_{\mathrm{dtm}}} \| V(x_{H+1}) - \hat V(x_{H+1}) \|_\infty\, f(R \mid x_H, a_H)\, d\hat P^\beta(a_H, \mathbf r_H \mid x_H) \\
&= \int_{a_H \notin \mathcal A_{\mathrm{dtm}}} \| V(x_{H+1}) - \hat V(x_{H+1}) \|_\infty\, p_r(\hat V(x_H) \mid s_H, a_H)\, d\hat P^\beta(a_H, \mathbf r_H \mid x_H) \\
&\overset{(b)}{\le} H \epsilon_r \le H \epsilon,
\end{aligned}$$
where (a) follows from (10) and (b) from Assumption 9. Combining both cases, we arrive at the desired result.

F Proof of Theorem 6

For simplicity, let $\pi^*_R := \pi_{\theta^*,R}$. By the performance difference lemma (Lemma 6), we have
$$J(\pi^*_R) - J(\beta_R) = \sum_{t=1}^H E^{\beta_R}\!\left[ Q^{\pi^*_R}(x_t, \pi^*_R(x_t)) - Q^{\pi^*_R}(x_t, \beta_R(x_t)) \right], \qquad (32)$$
where the RHS is further bounded by
$$\begin{aligned}
&\overset{(a)}{\le} H \sum_{t=1}^H E^{\beta_R}\!\left[ \| \pi^*_R(x_t) - \beta_R(x_t) \|_{\mathrm{TV}} \right] \qquad (33) \\
&= H^2\, E^{\beta_R}_{t \sim \mathrm{Unif}[H]}\!\left[ \| \pi^*_R(x_t) - \beta_R(x_t) \|_{\mathrm{TV}} \right] \qquad (34) \\
&\overset{(b)}{\le} H^2\, E^{\beta_R}_{t \sim \mathrm{Unif}[H]}\!\left[ \sqrt{\tfrac{1}{2} D_{\mathrm{KL}}(\beta_R(x_t) \| \pi^*_R(x_t))} \right] \qquad (35) \\
&\overset{(c)}{\le} H^2 \sqrt{ \tfrac{1}{2}\, E^{\beta_R}_{t \sim \mathrm{Unif}[H]}\!\left[ D_{\mathrm{KL}}(\beta_R(x_t) \| \pi^*_R(x_t)) \right] } \qquad (36) \\
&\overset{(d)}{=} H^2 \sqrt{\tfrac{1}{2} \epsilon_{\mathcal P,\Phi}}. \qquad (37)
\end{aligned}$$
Here, (a) is owing to the boundedness of the Q-function, $0 \le Q^\pi(x, a) \le H$; (b) to Pinsker's inequality; (c) to Jensen's inequality; and (d) to Assumption 14.

G Proof of Theorem 7

Note that $\hat\theta$ is the M-estimator
[51] associated with the criterion function
$$M_\theta(a \mid x, R) := \ln \frac{p_\theta(a \mid x, R)}{p_{\theta^*}(a \mid x, R)} - \Phi(\theta) + \Phi(\theta^*). \qquad (38)$$
Also note that $M_\theta$ is locally bounded in the sense that, for every $\ell_2$-ball $U$ in $\Theta$ with a sufficiently small radius $\rho > 0$,
$$\begin{aligned}
E^\beta_{t \sim \mathrm{Unif}[H]}\!\left[ \sup_{\theta \in U} M_\theta(a_t \mid x_t, \hat{\mathbf R}) \right]
&\le E^\beta_{t \sim \mathrm{Unif}[H]}\!\left[ M_{\theta_0}(a_t \mid x_t, \hat{\mathbf R}) + \rho \sup_{\theta \in U} \| \psi_\theta(a_t \mid x_t, \hat{\mathbf R}) \|_2 \right] \\
&\le \rho \sqrt{ E^\beta_{t \sim \mathrm{Unif}[H]}\!\left[ \sup_{\theta \in U} \| \nabla_\theta M_\theta(a_t \mid x_t, \hat{\mathbf R}) \|_2^2 \right] } < \infty, \qquad (39)\text{--}(41)
\end{aligned}$$
where $\theta_0$ is the center of $U$. Here, the first inequality follows from $M_\theta(\cdot \mid \cdot) = M_{\theta_0}(\cdot \mid \cdot) + \int_0^1 (\theta - \theta_0)^\top \psi_{(1-t)\theta_0 + t\theta}(\cdot \mid \cdot)\, dt$, while the second follows from $E^\beta_{t \sim \mathrm{Unif}[H]}[M_\theta(a_t \mid x_t, \hat{\mathbf R})] \le 0$ and Jensen's inequality. This, with Assumption 15 i)–ii), allows us to use Theorem 5.14 in [51] and obtain the consistency of MLE: $\hat\theta \overset{P}{\to} \theta^*$. Furthermore, with Assumption 15 iii)–v), it is possible to use Theorem 5.23 in [51] and obtain the asymptotic normality
$$\sqrt n\, (\hat\theta - \theta^*) \rightsquigarrow N(0, I_{\theta^*}^{-1} V_{\theta^*} I_{\theta^*}^{-1}). \qquad (42)$$
Finally, we apply the functional delta method (Theorem 20.8 in [51]) to $\hat\theta$ and the mapping $\theta \mapsto \{J_j(\pi_{\theta,R})\}_{j,R}$. The desired result follows from calculating the derivative
$$\nabla_\theta J_j(\pi_{\theta,R}) = \sum_{t=1}^H E^{\pi_{\theta,R}}\!\left[ Q_j^{\pi_{\theta,R}}(x_t, a_t)\, \nabla_\theta \ln p_\theta(a_t \mid x_t, R) \right] = H^2\, U_{\theta,j}(R), \qquad (43)$$
according to the policy gradient theorem (Corollary 3).

H Lemmas

Lemma 3. Suppose (20) holds. Then, Assumption 12 holds with $C_R \le K + 1$.

Proof. Let $N := B_\infty(R, 1) \cap [0, H]^m$ and note that $\rho := \sup_{R' \in N} \|R' - R\|_\infty \ge 1$. Then, by the assumption, we have
$$1 \ge \int_N f(R' \mid x)\, dR' \ge \rho\, \{ f(R \mid x) - K \} \ge f(R \mid x) - K. \qquad (44)$$
Rearranging the terms, we get the desired result.

Lemma 4. Let $\epsilon := \epsilon_r + \epsilon_s$. Then, under Assumption 9, we have
$$\| \hat P_T(s, a) - P_T(s, a) \|_{\mathrm{TV}} \le \epsilon \qquad (45)$$
for all $s \in \mathcal S$ and $a \in \mathcal A$.

Proof. It is shown by
$$\begin{aligned}
\| \hat P_T(s, a) - P_T(s, a) \|_{\mathrm{TV}} &= \sup_E \left| \int_E d\{\hat P_T - P_T\}(\mathbf r, s' \mid s, a) \right| \\
&\overset{(a)}{=} 1 - P_T\{ (\mathbf r, s') \in \hat T(s, a) \mid s, a \} \\
&\overset{(b)}{\le} P_T\{ \|\mathbf r - \hat{\mathbf r}(s, a)\|_\infty > \delta \mid s, a \} + P_T\{ s' \ne \hat s'(s, a) \mid s, a \} \\
&\overset{(c)}{\le} \epsilon,
\end{aligned}$$
where (a) follows from taking $E = \hat T(s, a)$, (b) from the union bound, and (c) from Assumption 9.

Lemma 5. Suppose Assumptions 10 and 13 hold. Then, for all $x_{t+1} \in \mathcal X$ such that $(\mathbf r_t, s_{t+1}) \in \hat T(s_t, a_t)$, we have
$$\hat f(R \mid x_t, a_t) - f(R \mid x_{t+1}) \le c_R(\delta). \qquad (46)$$

Proof. Recall that $\hat f(R \mid x_t, a_t) := \int f(R \mid x'_{t+1})\, d\hat P_T(\mathbf r'_t, s'_{t+1} \mid x_t, a_t)$, where $x'_{t+1} = (x_t, a_t, \mathbf r'_t, s'_{t+1})$. Now, the claim is shown by
$$\begin{aligned}
\hat f(R \mid x_t, a_t) - f(R \mid x_{t+1}) &= \int \left\{ f(R \mid x'_{t+1}) - f(R \mid x_{t+1}) \right\} d\hat P_T(\mathbf r'_t, s'_{t+1} \mid x_t, a_t) \\
&\overset{(a)}{=} \int \left\{ f(R - \mathbf r'_t + \mathbf r_t \mid x_{t+1}) - f(R \mid x_{t+1}) \right\} d\hat P_T(\mathbf r'_t, s'_{t+1} \mid x_t, a_t) \\
&\overset{(b)}{\le} \sup_{\|\mathbf r'_t - \mathbf r_t\|_\infty \le 2\delta} \left\{ f(R - \mathbf r'_t + \mathbf r_t \mid x_{t+1}) - f(R \mid x_{t+1}) \right\} \\
&\overset{(c)}{\le} c_R(\delta),
\end{aligned}$$
where (a) follows from Assumption 10 and $s'_{t+1} = \hat s'(s_t, a_t) = s_{t+1}$ almost surely, (b) from $\|\mathbf r_t - \hat{\mathbf r}(s_t, a_t)\|_\infty \le \delta$ and $\|\mathbf r'_t - \hat{\mathbf r}(s_t, a_t)\|_\infty \le \delta$ almost surely, and (c) from Assumption 13.

Lemma 6 (Performance difference). We have
$$J(\pi) - J(\pi') = \sum_{t=1}^H E^{\pi'}\!\left[ Q^\pi(x_t, \pi(x_t)) - Q^\pi(x_t, \pi'(x_t)) \right], \qquad (47)$$
where $Q^\pi(x, a) := E^\pi[\sum_{h=t}^H \mathbf r_h \mid x_t = x, a_t = a]$ is the action value function of $\pi$.

Proof. We write $Q^\pi(x, \pi'(x)) := E_{a \sim \pi'(x)}[Q^\pi(x, a)]$. Observe
$$J(\pi') = \sum_{t=1}^H E^{\pi'}[\mathbf r_t] \qquad (48)$$
and
$$J(\pi) = Q^\pi(x_1, \pi(x_1)) = E^{\pi'}[Q^\pi(x_1, \pi(x_1))] = \sum_{t=1}^H E^{\pi'}\!\left[ Q^\pi(x_t, \pi(x_t)) - Q^\pi(x_{t+1}, \pi(x_{t+1})) \right], \qquad (49)\text{--}(50)$$
where the last equality is due to $Q^\pi(x_{H+1}, \cdot) = 0$. Taking the difference, we see
$$J(\pi) - J(\pi') = \sum_{t=1}^H E^{\pi'}\!\left[ Q^\pi(x_t, \pi(x_t)) - \mathbf r_t - Q^\pi(x_{t+1}, \pi(x_{t+1})) \right] = \sum_{t=1}^H E^{\pi'}\!\left[ Q^\pi(x_t, \pi(x_t)) - Q^\pi(x_t, \pi'(x_t)) \right], \qquad (51)\text{--}(52)$$
where the last equality follows from $Q^\pi(x_t, a_t) = E^\pi[\mathbf r_t + Q^\pi(x_{t+1}, \pi(x_{t+1})) \mid x_t, a_t]$.

Corollary 3 (Policy gradient). Suppose Assumption 8 holds. Let $\pi_\theta : \mathcal X \to \Delta(\mathcal A)$ be a policy associated with a parametrized density $p_\theta(a \mid x)$, $\theta \in \Theta \subset \mathbb R^d$, whose score function $\dot\ell_\theta(a \mid x) := \nabla_\theta \ln p_\theta(a \mid x)$ is bounded in the sense $E^{\pi_\theta}[\sup_{\theta' \in U} \|\dot\ell_{\theta'}(a \mid x)\|_2] < \infty$ for some neighborhood $U$ of $\theta$. Then, we have
$$\nabla_\theta J(\pi_\theta) = \sum_{t=1}^H E^{\pi_\theta}\!\left[ Q^{\pi_\theta}(x_t, a_t)\, \dot\ell_\theta(a_t \mid x_t) \right]. \qquad (53)$$

Proof. Let $\omega > 0$ and fix $\lambda \in \mathbb R^d$ arbitrarily. Set $\pi = \pi_{\theta + \omega\lambda}$ and $\pi' = \pi_\theta$, and let $\nu$ be the base measure on $\mathcal A$ relative to which $p_\theta(a \mid s)$ is defined. Now, divide both sides of (47) by $\omega$ and take the limit $\omega \to 0$ to obtain
$$\begin{aligned}
\lambda^\top \nabla_\theta J(\pi_\theta) &= \sum_{t=1}^H \lim_{\omega \to 0} E^{\pi_\theta}\!\left[ \int Q^{\pi_\theta}(x_t, a)\, \frac{p_{\theta+\omega\lambda}(a \mid x_t) - p_\theta(a \mid x_t)}{\omega}\, d\nu(a) \right] \qquad (54) \\
&= \sum_{t=1}^H E^{\pi_\theta}\!\left[ \int Q^{\pi_\theta}(x_t, a)\, p_\theta(a \mid x_t)\, \lambda^\top \dot\ell_\theta(a \mid x_t)\, d\nu(a) \right], \qquad (55)
\end{aligned}$$
where the last equality is owing to the interchange of the expectation and the limit, enabled by the dominated convergence theorem. The desired result follows since $\lambda$ is arbitrary.

I Proofs of Theorems 2 and 3

Lemma 7. Pick $\Delta \in (0, 1)$ and set $\alpha_{\diamond,j} = \sqrt{2 \log(|\mathcal Z| j^2 \pi^2 / (6\Delta))}$ for $\diamond \in \{r, g\}$. Then,
$$| J_\diamond(\pi_z) - \mu_{\diamond,j}(z) | \le \alpha_{\diamond,j} \cdot \sigma_{\diamond,j}(z) \quad \forall z \in \mathcal Z,\ \forall j \ge 1 \qquad (56)$$
holds with probability at least $1 - \Delta$.

Proof. See Lemma 5.1 and its proof in Srinivas et al. [45].

Lemma 8. Pick $\Delta \in (0, 1)$ and set $\alpha_{\diamond,j} = \sqrt{2 \log(|\mathcal Z| j^2 \pi^2 / (6\Delta))}$ for $\diamond \in \{r, g\}$. Then, the following inequality holds:
$$\sum_{j=1}^N \left( J_\diamond(\pi_{z^\star}) - J_\diamond(\pi_{z_j}) \right)^2 \le \frac{8}{\log(1 + \nu_\diamond^{-2})} \cdot \alpha^2_{\diamond,N}\, \xi_{\diamond,N} \qquad (57)$$
with probability at least $1 - \Delta$, where $N$ is the number of iterations in the reward maximization phase.

Proof. This lemma directly follows from Lemma 5.4 in Srinivas et al. [45].

I.1 Proof of Theorem 2

Proof. PLS chooses the next target returns $z'$ such that
$$u_{g,j}(z) + L \cdot d(z, z') \le b. \qquad (58)$$
By Lemma 7 and the Lipschitz continuity, we have
$$u_{g,j}(z) + L \cdot d(z, z') \ge J_g(\pi_z) + L \cdot d(z, z') \ge J_g(\pi_{z'}), \qquad (59)\text{--}(60)$$
which yields the desired theorem.

I.2 Proof of Theorem 3

Proof. We first define a one-step reachability operator with a certain margin $\zeta \in \mathbb R_+$ as
$$\hat{\mathcal Z}_\zeta(Y) := Y \cup \left\{ z \in \mathcal Z \mid \exists z' \in Y,\ J_g(z') + \zeta + L\, d(z', z) \le b \right\}. \qquad (61)$$
Then, we can obtain the following reachable set after $N$ iterations:
$$\hat{\mathcal Z}^N_\zeta(\mathcal Z_0) := \underbrace{\hat{\mathcal Z}_\zeta(\hat{\mathcal Z}_\zeta(\ldots(\hat{\mathcal Z}_\zeta}_{N \text{ times}}(\mathcal Z_0))\ldots)). \qquad (62)$$
The optimal target return $z^\star$ in this paper can now be defined as
$$z^\star := \arg\max_{z \in \hat{\mathcal Z}^\infty_\zeta(\mathcal Z_0)} J_r(\pi_z). \qquad (63)$$
Based on Theorem 1 in Sui et al. [49], it is guaranteed that 1) the safe exploration phase in PLS fully expands the predicted safe set (with some margin $\zeta$) and 2) a $\zeta$-optimal target return vector $z^\star$ exists within the safe set, after at most $N^\dagger$ GP samples. Note that $N^\dagger$ is defined as the smallest positive integer satisfying
$$\frac{N^\dagger}{\alpha^2_{g,N^\dagger}\, \xi_{g,N^\dagger}} \ge \frac{C^\dagger \left( |\hat{\mathcal Z}^\infty_0(\mathcal Z_0)| + 1 \right)}{\zeta^2}, \qquad (64)$$
where $C^\dagger \in \mathbb R_+$ is a positive constant.

The following proof mostly follows that of Theorem 2 in Sui et al. [49], but there are differences in how the confidence intervals are constructed. Specifically, for compatibility with Theorem 1, we cannot assume that the functions are endowed with a reproducing kernel Hilbert space (RKHS), which leads to a different bound in terms of optimality. The reward maximization phase in PLS chooses the next sample using the upper confidence bound on the reward within the fully expanded safe region. Thus, by the Cauchy–Schwarz inequality, we have
$$\left( \sum_{j=1}^N \left( J_r(\pi_{z^\star}) - J_r(\pi_{z_j}) \right) \right)^2 \le N \cdot \sum_{j=1}^N \left( J_r(\pi_{z^\star}) - J_r(\pi_{z_j}) \right)^2. \qquad (65)$$
By combining the above inequality with Lemma 8, we have
$$\left( \sum_{j=1}^N \left( J_r(\pi_{z^\star}) - J_r(\pi_{z_j}) \right) \right)^2 \le N \cdot \frac{8\, \alpha^2_{r,N}\, \xi_{r,N}}{\log(1 + \nu_r^{-2})} = \frac{16 N \xi_{r,N}}{\log(1 + \nu_r^{-2})} \log\!\left( \frac{|\mathcal Z| \pi^2 N^2}{6\Delta} \right). \qquad (66)\text{--}(67)$$
Given $N^\sharp$, the smallest positive integer $N$ such that
$$4 \sqrt{ \frac{\xi_{r,N}}{N \log(1 + \nu_r^{-2})} \log\!\left( \frac{|\mathcal Z| \pi^2 N^2}{6\Delta} \right) } \le \mathcal E, \qquad (68)$$
we then have
$$\frac{1}{N^\sharp} \sum_{j=1}^{N^\sharp} \left( J_r(\pi_{z^\star}) - J_r(\pi_{z_j}) \right) \le \mathcal E. \qquad (69)$$
The LHS of (69) represents the average
regret. Thus, there exists $\hat z \in \mathcal Z$ among the samples such that $J_r(\pi_{\hat z}) \ge J_r(\pi_{z^\star}) - \mathcal E$.

J Experiment Details and Additional Results

J.1 Computational Resources

Our experiments were conducted on a workstation with Intel(R) Xeon(R) Silver 4316 CPUs @ 2.30 GHz and one NVIDIA A100-SXM4-80GB GPU.

J.2 Hyperparameters

We use the OSRL library⁶ for implementing most of the baseline algorithms, and we adopt its default hyperparameters. For CCAC, we use the authors' implementation⁷. For the baselines, we use Gaussian policies whose mean vectors are the outputs of neural networks and whose variances are separate learnable parameters. The policy networks and Q networks for all experiments have two hidden layers with ReLU activation functions. $K_P$, $K_I$ and $K_D$ are the PID parameters [47] that control the Lagrangian multiplier for the Lagrangian-based algorithms (i.e., BCQ-Lag and BEAR-Lag). For a fair comparison, we use the same $10^5$ gradient steps for CDT and the baselines, and the same rollout length, which is the maximum episode length. Specifically, we set the rollout length to 500 for Ant-Circle, 200 for Ant-Run, 300 for Car-Circle and Drone-Circle, 200 for Drone-Run, and 1000 for the Velocity tasks. The safety cost thresholds for the baselines are 20 and 40 across all tasks. The hyperparameters used in the experiments are shown in Table 2.

⁶ https://github.com/liuzuxin/OSRL
⁷ https://github.com/BU-DEPEND-Lab/CCAC

Table 2: Hyperparameters for BCQ-Lag, BEAR-Lag, CPQ, COptiDICE, and CCAC.

Parameter | BCQ-Lag | BEAR-Lag | CPQ | COptiDICE | CCAC
Actor hidden size | [256, 256] (shared across all algorithms)
Critic hidden size | [256, 256] (shared across all algorithms)
VAE hidden size | [400, 400] | [400, 400] | [400, 400] | – | [512, 512, 64, 512, 512]
[K_P, K_I, K_D] | [0.1, 0.003, 0.001] | [0.1, 0.003, 0.001] | – | – | –
Batch size | 512 | 512 | 512 | 512 | 512; 2048 (Velocity)
Actor learning rate | 1.0e-3 | 1.0e-3 | 1.0e-4 | 1.0e-4 | 1.0e-4
Critic learning rate | 1.0e-3 | 1.0e-3 | 1.0e-3 | 1.0e-4 | 1.0e-3

Moreover, Table 3 presents the hyperparameters specifically used for CDT and PLS, which are based on return-conditioned supervised learning. The experimental settings are the same as in the original authors' implementation of CDT.

Table 3: Hyperparameters common to CDT and PLS.

Parameter | All tasks
Number of layers | 3
Number of attention heads | 8
Embedding dimension | 128
Batch size | 2048
Context length K | 10
Learning rate | 0.0001
Dropout | 0.1
Adam betas | (0.9, 0.999)
Grad norm clip | 0.25

We now summarize the hyperparameters related to the GPs in the safe exploration and reward maximization phases of PLS. We set the number of episodes for each policy evaluation to $\varpi = 20$ for all tasks. We use GPs with radial basis function (RBF) kernels: one for the reward and one for the safety cost. We set the lengthscale of the reward kernel to 50 for the Bullet-Safety-Gym tasks and 100 for the Safety-Gymnasium Velocity tasks; the lengthscale of the safety cost kernel is 5.0 for all tasks. The variances of the reward kernel are 1.0 for the Bullet-Safety-Gym tasks and 100 for the Safety-Gymnasium Velocity tasks, while those of the safety cost kernel are 1.0 for all tasks. Finally, following Turchetta et al. [50] and Sui et al. [49], we set the Lipschitz constant $L = 0$.
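The paper does not state which GP library is used; as one concrete way to instantiate the kernels above, here is a scikit-learn sketch for a Bullet-Safety-Gym task. The observation-noise variance and the example data are placeholder assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

Z_obs = np.array([[700.0, 2.0], [710.0, 4.0], [720.0, 3.0]])  # example (R, G) targets
y_r = np.array([0.70, 0.72, 0.71])    # observed reward returns (illustrative)

# Reward GP: kernel variance 1.0, lengthscale 50 (Bullet-Safety-Gym setting)
gp_reward = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=50.0),
    alpha=1e-2,  # observation-noise variance nu_r^2; placeholder value
)
gp_reward.fit(Z_obs, y_r)
mu_r, sigma_r = gp_reward.predict(Z_obs, return_std=True)
# The safety-cost GP is built the same way with RBF(length_scale=5.0).
```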
Another important experimental setting is how to choose an initial safe set $\mathcal Z_0$, associated with Assumption 7. Tables 4 and 5 summarize our settings for the initial safe set of target returns.

Table 4: Safe target return range ($\mathcal Z_0$) for PLS (Bullet-Safety-Gym).

Parameter | Ant-Circle | Ant-Run | Car-Circle | Drone-Circle | Drone-Run
Reward | [250, 300] | [700, 750] | [400, 475] | [700, 720] | [400, 450]
Safety | [0, 5] | [0, 5] | [0, 5] | [0, 5] | [0, 5]

Table 5: Safe target return range ($\mathcal Z_0$) for PLS (Safety-Gymnasium Velocity).

Parameter | Ant | HalfCheetah | Hopper | Walker2d
Reward | [2000, 2300] | [200, 2300] | [1200, 1500] | [2000, 2400]
Safety | [0, 5] | [0, 5] | [0, 5] | [0, 5]

J.3 Additional Experimental Results

We present additional experimental results for a different threshold, $b = 40$, in Table 6. Note that, for PLS and CDT, the return-conditioned policy in Table 6 is the same as that in Table 1. The only difference for PLS between Tables 1 and 6 is the target returns produced by our target-return optimization algorithm. Observe that the experimental results in Table 6 exhibit a similar tendency to those in Table 1. More specifically, in both cases ($b = 20$ and $b = 40$), PLS is the only method that satisfies the safety constraint in all tasks, while every baseline algorithm violates the safety constraint in at least one task. Moreover, PLS obtains the highest reward return in most tasks, which demonstrates its higher performance in terms of reward and safety. In addition, we provide Figure 3 to show how PLS explores target returns $z$. Observe that PLS guarantees safety during most policy deployments. Moreover, even if the safety constraint is violated, PLS quickly recovers to meet the safety requirement.

Table 6: Evaluation results for the case with safety cost threshold 40. We computed the mean and standard deviation by running each algorithm five times. Reward and cost are normalized; thus, the normalized cost limit is 1.0. Bold: safe agents whose normalized cost is smaller than 1. Red: unsafe agents. Blue: safe agent with the highest reward.

Task | Metric | BCQ-Lag | BEAR-Lag | CPQ | COptiDICE | CDT | CCAC | PLS
Ant-Run | Reward ↑ | 0.76±0.14 | 0.02±0.02 | 0.02±0.01 | 0.63±0.05 | 0.72±0.03 | 0.02±0.01 | 0.70±0.02
Ant-Run | Safety cost ↓ | 2.34±0.61 | 0.05±0.03 | 0.00±0.00 | 0.56±0.34 | 1.10±0.00 | 0.00±0.00 | 0.54±0.09
Ant-Circle | Reward ↑ | 0.78±0.16 | 0.63±0.25 | 0.00±0.00 | 0.17±0.14 | 0.53±0.00 | 0.62±0.14 | 0.55±0.00
Ant-Circle | Safety cost ↓ | 2.54±0.87 | 2.15±1.38 | 0.00±0.00 | 2.50±2.81 | 0.79±0.00 | 1.13±0.44 | 0.82±0.00
Car-Circle | Reward ↑ | 0.79±0.10 | 0.84±0.09 | 0.73±0.03 | 0.49±0.04 | 0.80±0.00 | 0.77±0.02 | 0.80±0.02
Car-Circle | Safety cost ↓ | 1.58±0.38 | 1.75±0.37 | 0.86±0.04 | 1.44±0.72 | 0.99±0.05 | 0.86±0.04 | 0.93±0.06
Drone-Run | Reward ↑ | 0.68±0.12 | 0.87±0.09 | 0.19±0.10 | 0.69±0.02 | 0.60±0.03 | 0.57±0.00 | 0.62±0.04
Drone-Run | Safety cost ↓ | 2.34±0.64 | 3.04±0.61 | 2.41±0.34 | 1.64±0.10 | 0.89±0.11 | 1.73±0.01 | 0.91±0.09
Drone-Circle | Reward ↑ | 0.92±0.05 | 0.78±0.06 | -0.27±0.01 | 0.28±0.03 | 0.69±0.00 | 0.16±0.27 | 0.68±0.01
Drone-Circle | Safety cost ↓ | 2.31±0.24 | 1.69±0.31 | 0.20±0.67 | 0.29±0.24 | 1.00±0.00 | 0.71±0.49 | 0.96±0.03
Ant-Velocity | Reward ↑ | 1.01±0.01 | -1.01±0.00 | -1.01±0.00 | 1.00±0.01 | 0.97±0.01 | 0.60±0.39 | 0.99±0.00
Ant-Velocity | Safety cost ↓ | 2.25±0.29 | 0.00±0.00 | 0.00±0.00 | 3.35±0.74 | 0.81±0.44 | 0.68±0.29 | 0.49±0.05
Walker2d-Velocity | Reward ↑ | 0.78±0.00 | 0.91±0.03 | -0.01±0.00 | 0.13±0.01 | 0.79±0.00 | 0.84±0.02 | 0.83±0.00
Walker2d-Velocity | Safety cost ↓ | 0.30±0.13 | 4.05±1.31 | 0.00±0.00 | 0.90±0.10 | 0.00±0.00 | 3.49±0.43 | 0.00±0.00
HalfCheetah-Velocity | Reward ↑ | 1.04±0.02 | 0.98±0.04 | 0.01±0.22 | 0.63±0.01 | 0.97±0.03 | 0.85±0.01 | 1.00±0.01
HalfCheetah-Velocity | Safety cost ↓ | 14.10±3.46 | 6.34±5.46 | 0.10±0.11 | 0.00±0.00 | 0.05±0.11 | 1.22±0.09 | 0.01±0.00
Hopper-Velocity | Reward ↑ | 0.85±0.19 | 0.40±0.21 | 0.23±0.00 | 0.05±0.07 | 0.67±0.03 | 0.60±0.17 | 0.84±0.00
Hopper-Velocity | Safety cost ↓ | 5.30±3.85 | 6.08±3.09 | 2.75±0.04 | 0.46±0.17 | 0.56±0.56 | 0.60±0.63 | 0.20±0.03

[Figure 3: six panels plotting normalized safety cost against GP iteration, for (a) HopperVelocity, (b) AntRun, (c) AntVelocity, (d) HalfCheetahVelocity, (e) CarCircle, and (f) DroneCircle.]

Figure 3: Experimental results on how PLS ensures satisfaction of the safety constraint while obtaining new GP observations. Black dotted lines represent the normalized safety threshold.
Rethinking Gradient-based Adversarial Attacks on Point Cloud Classification

Jun Chen¹,³* Xinke Li²* Mingyue Xu⁴ Tianrui Li¹,³ Chongshou Li¹,³†
¹ School of Computing and Artificial Intelligence, Southwest Jiaotong University
² College of Computing, City University of Hong Kong
³ Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu, China
⁴ SWJTU-Leeds Joint School, Southwest Jiaotong University
2024212410@my.swjtu.edu.cn, xinkeli@cityu.edu.hk, xumingyue@my.swjtu.edu.cn, {trli, lics}@swjtu.edu.cn
* Equal Contribution. † Corresponding Author. Preprint. Under review.

Abstract

Gradient-based adversarial attacks have become a dominant approach for evaluating the robustness of point cloud classification models. However, existing methods often rely on uniform update rules that fail to consider the heterogeneous nature of point clouds, resulting in excessive and perceptible perturbations. In this paper, we rethink the design of gradient-based attacks by analyzing the limitations of conventional gradient update mechanisms and propose two new strategies to improve both attack effectiveness and imperceptibility. First, we introduce WAAttack, a novel framework that incorporates weighted gradients and an adaptive step-size strategy to account for the non-uniform contribution of points during optimization. This approach enables more targeted and subtle perturbations by dynamically adjusting updates according to the local structure and sensitivity of each point. Second, we propose SubAttack, a complementary strategy that decomposes the point cloud into subsets and focuses perturbation efforts on structurally critical regions. Together, these methods represent a principled rethinking of gradient-based adversarial attacks for 3D point cloud classification. Extensive experiments demonstrate that our approach outperforms state-of-the-art baselines in generating highly imperceptible adversarial examples. Code will be released upon paper acceptance.

1 Introduction

Despite the significant achievements of Deep Neural Networks (DNNs) in various fields such as autonomous driving [1,2,3], indoor navigation [4], semantic segmentation for indoor scene understanding [5], and AI-assisted shape design [6], DNNs are particularly vulnerable to adversarial attacks [7,8,9]. Adversarial attacks add imperceptibly small perturbations to input data that can cause a model to produce erroneous outputs. This poses a substantial threat to applications relying on the security and reliability of DNNs. Therefore, investigating more effective methods against adversarial attacks to enhance the robustness of models is of paramount importance.

Figure 1: Left: Existing methods (e.g., SI-Adv) suffer from both local and global over-perturbation due to the neglect of uneven gradient contributions across points and the use of a fixed step size (η = 0.007). Adjusting either the perturbed points (by replacing randomly selected n points with original ones) or η can improve imperceptibility. Right: Our proposed WAAttack (WA) and SubAttack (Sub), combined with existing methods, significantly improve imperceptibility.

Despite the notable success of existing attack methods [7,10] in achieving high attack success rates, further improving the imperceptibility of adversarial examples while maintaining strong attack performance remains a key challenge. Early approaches [11,12,13] incorporate regularization terms into the objective function to preserve local geometric structures. However, fixed regularization strategies may not generalize well across diverse point cloud data. SI-Adv [14] and CIM [15] aim to improve adversarial quality by adjusting perturbation directions, yet their use of fixed directions often leads to local outliers, as illustrated in the left panel of Figure 1.
Recently, the HiT-Adv [16] method embeds perturbations into less perceptible regions through deformations, achieving an improved balance between attack strength and invisibility. Nonetheless, structural deformations may still appear unnatural under certain viewing conditions. The root cause lies in the fact that most gradient-based attack methods apply uniform perturbation magnitudes across all points and point clouds, neglecting both the uneven contribution of individual point gradients and the varying perturbation intensities required by different point cloud structures. Such one-size-fits-all update strategies tend to induce excessive perturbations in local regions of individual point clouds, as well as imbalanced perturbation distributions across different samples. This ultimately compromises the overall visual quality and imperceptibility of adversarial examples.

To address the aforementioned issues, we revisit gradient-based white-box attack methods [7,17,11,12,13,14,15] from a new perspective and shift attention to the update mechanism of adversarial point clouds themselves. Specifically, we propose two key components to enhance attack effectiveness and imperceptibility. First, we introduce a dynamic weight allocation mechanism that adaptively adjusts the contribution of each point during the gradient update based on its local gradient information. This effectively mitigates the issue of local over-perturbation. Second, we design a min-first adaptive step size adjustment strategy, which dynamically adjusts the perturbation magnitude for each individual point cloud during the iterative process. This strategy significantly alleviates the imbalance in perturbation distribution across different point cloud samples. Furthermore, considering the structural heterogeneity of point clouds and the varying influence of different regions on model decisions, we propose a sub-point cloud based attack strategy (SubAttack). By partitioning the original point cloud into multiple non-overlapping sub-point clouds and selectively optimizing those subsets with the most significant impact on the model output, our method minimizes unnecessary perturbations on irrelevant regions. This leads to improved structural coherence and enhanced visual imperceptibility of the generated adversarial examples.

Extensive experiments show that our method generates highly imperceptible adversarial point clouds with consistently high attack success rates across various mainstream models [18,19,20,21], significantly outperforming existing state-of-the-art attacks [7,22,11,13,14,17,15,16]. Unlike traditional methods, WAAttack provides finer control over perturbation distribution, preserving semantic plausibility and geometric naturalness without sacrificing attack strength. As illustrated in the right of Figure 1, our approach can also be easily integrated into other attack frameworks.

2 Related Work

3D Point Cloud Classification. In the field of 3D point cloud processing, classification tasks are at the core of many safety-critical applications, with the aim of recognizing and distinguishing between different object categories. PointNet [18], as the first deep neural network model that directly processes point cloud data, utilizes multilayer perceptrons (MLPs) to learn features for each point and employs a max pooling operation to ensure permutation invariance to unordered point cloud inputs.
Its upgraded version, PointNet++ [19], improves the capture of local and global structural information through hierarchical feature learning. DGCNN [20] further strengthens the ability to capture local structures by constructing dynamic graph structures and progressively extracting and aggregating local features using multiple EdgeConv layers. Based on these works, subsequent research has focused on designing specialized 3D convolutions [23,24,25,26,27] or developing graph neural networks [28,29,30,31,32] to further enhance performance.
Adversarial Attack on Point Cloud Classification. Although advanced point cloud classification models achieve strong performance, recent studies [7,8,9,10] indicate their vulnerability to adversarial attacks. Xiang et al. [7] pioneered the generation of 3D adversarial examples through point addition and perturbation, achieving high attack success rates. Subsequent work has mainly focused on two directions: improving transferability across models [33,34,35,36,17] and enhancing the imperceptibility of adversarial samples [11,12,37,14,13,38,39,40,15,41,42,43]. Regarding transferability, progress has been limited; recently, Li et al. [44] proposed an optimal transport (OT)-based method that identifies intrinsic singular boundaries in the data manifold, enabling effective black-box cross-model attacks. On the aspect of imperceptibility, many studies have introduced regularization terms [11,12,13,40,42,43] or optimized perturbation directions [37,14,15]. However, results remain unsatisfactory. More recent approaches [45,46,47,48,16] explore shape deformation by enhancing correlations among points to reduce outliers and improve visual naturalness, but these methods often introduce noticeable structural distortions, making them perceptible to humans. In contrast to the above methods, this work takes a fundamentally different approach by exploring more natural and physically plausible perturbation strategies from the perspective of point cloud generation mechanisms.

3 Preliminaries

Our research focuses on gradient-based white-box attacks in point cloud classification tasks, primarily untargeted attacks, though the framework can also be adapted for targeted attacks. Assume that P = {P_i}_{i=1}^n represents the set of original point clouds and Y = {y_i}_{i=1}^n denotes the corresponding true labels. Here, P_i ∈ R^{n×3} represents a single point cloud in P, and y_i is the true label associated with P_i. Each point within P_i is denoted p_i ∈ R^{1×3}. Let f(·) denote the point cloud classifier, such that f(P_i) = y_i. Gradient-based white-box attacks primarily refer to scenarios where an attacker has complete knowledge of the target model's architecture, parameters, and training data distribution, and leverages the gradient information to generate adversarial examples. Specifically, small imperceptible perturbations ∆_i are added to the original point cloud P_i, causing the model to misclassify it: f(P_i + ∆_i) ≠ y_i. The resulting perturbed point clouds are referred to as adversarial point clouds, denoted P′ = {P′_i}_{i=1}^n.

3.1 Typical Gradient-based Adversarial Attacks

Classic gradient-based adversarial attacks typically generate perturbations by computing the gradient of a classification loss with respect to the input point cloud. The general formulation can be expressed as an optimization problem under a distortion constraint:

L_loss = min L_cls(P′)   s.t.   D(P, P′) < ϵ,   P′ = P + ∆   (1)

Here, D(P, P′) denotes a distance metric between the original point cloud and the adversarial point cloud, i.e., the L2 or L∞ norm. L_cls(P′) represents the classification loss, which quantifies the difference between the model's prediction and the true label, following the formulation in the C&W attack framework [49]. Specifically, it can be expressed as:

L_cls = max(0, Z(P′)_y − max_{i≠y} Z(P′)_i + κ)   (2)

where κ ≥ 0 is a margin threshold controlling the trade-off between attack strength and success probability, Z denotes the model's output logits, and y is the true class label.
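For readers who want to map Eq. (2) onto code, a minimal PyTorch sketch of this margin loss could look as follows; the tensor shapes and the default kappa are illustrative assumptions, not the authors' released implementation.

```python
import torch

def cw_margin_loss(logits: torch.Tensor, y: torch.Tensor, kappa: float = 0.0) -> torch.Tensor:
    """Margin loss of Eq. (2): max(0, Z_y - max_{i != y} Z_i + kappa).

    logits: (B, num_classes) model outputs Z(P'); y: (B,) true labels.
    Minimizing this pushes the true-class logit below the best wrong-class logit.
    """
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)   # Z(P')_y
    masked = logits.clone()
    masked.scatter_(1, y.unsqueeze(1), float("-inf"))          # hide the true class
    best_other = masked.max(dim=1).values                      # max_{i != y} Z(P')_i
    return torch.clamp(true_logit - best_other + kappa, min=0).mean()
```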
Traditional methods compute the gradient of the above optimization problem with respect to the entire point cloud using automatic differentiation tools and update the coordinates of each point accordingly. This can be formally expressed as:

p′_i^{t+1} = p′_i^t − η · n   (3)

where p′_i^t denotes the result of the i-th point at the t-th iteration, η is the step size, and n is a unit vector indicating only the update direction. Through multiple iterations, the perturbation of all points is gradually optimized until the maximum number of iterations is reached, at which point the adversarial point cloud P′ is generated.
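As a concrete illustration, a bare-bones sketch of this fixed-step baseline is given below, reusing the cw_margin_loss helper from the previous sketch; `model` is assumed to map a batched (B, N, 3) tensor to logits, and the per-point unit-direction normalization is our reading of Eq. (3).

```python
import torch

def fixed_step_attack(model, P, y, eta=0.007, steps=50):
    """Classic update of Eq. (3): every point moves by a fixed step eta along
    the (unit-normalized) descent direction of the classification loss."""
    P_adv = P.clone().detach()                                # (N, 3) point cloud
    for _ in range(steps):
        P_adv.requires_grad_(True)
        loss = cw_margin_loss(model(P_adv.unsqueeze(0)), y.view(1))
        grad = torch.autograd.grad(loss, P_adv)[0]            # (N, 3) per-point gradients
        n = grad / (grad.norm(dim=1, keepdim=True) + 1e-12)   # unit direction n
        P_adv = (P_adv - eta * n).detach()                    # p' <- p' - eta * n
    return P_adv
```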
4 Methodology

Unlike previous attack methods, our approach does not rely on complex regularization terms or specially crafted perturbation directions. Although our method can be easily integrated into other frameworks, we adopt the simplest variant to improve efficiency. The basic paradigm originates from Equations (1) and (3), which constrain the gradient of each point to unit length, thereby allowing the magnitude and direction of the perturbation to be controlled solely by η and n, respectively, where η is typically fixed. In the following, we elaborate on how to generate higher-quality adversarial samples using a similarly simple paradigm. Please refer to Figure 2 for a demonstration of our framework.

4.1 Precise Perturbations via Weight Adjustments and Adaptive Step Size (WAAttack)

Mechanism of Weight Term ω. To precisely control the perturbation magnitude of each point and avoid excessive distortions that may lead to perceptibility, we introduce a weight term ω when updating the point cloud. This is a personalized parameter designed for each point, used to adjust the contribution of gradients during the update process of different points. Specifically, ω_i^t reflects the importance of the gradient of the i-th point at the t-th iteration in generating adversarial samples; a larger value indicates a more significant impact of the point's gradient on the attack effectiveness. By assigning different weights to the gradients of different points during the update process, we can effectively address the issue of excessive perturbations in traditional methods, thereby reducing the visibility of the perturbations. More precisely, we define the weight ω_i^t as follows:

ω_i^t = ‖∇_{p′_i} L_loss(P′_i^t)‖ / (‖∇L_loss(P′_i^t)‖_max + ϵ)   (4)

where ‖∇_{p′_i} L_loss(P′_i^t)‖ denotes the norm of the gradient of the loss function with respect to point p′_i in point cloud P′_i at the t-th iteration, and ‖∇L_loss(P′_i^t)‖_max represents the maximum gradient norm among all points in P′_i^t during that iteration. It can be readily observed that the weight term satisfies ω_i^t ∈ [0, 1]. Such a weighting term facilitates adaptive control of the perturbations, thereby making the generated adversarial samples more imperceptible in terms of both visual appearance and geometric structure.
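A one-function sketch of Eq. (4) follows; `eps` plays the role of the small ϵ in the denominator, and the (N, 3) gradient layout is an assumption carried over from the earlier sketches.

```python
import torch

def point_weights(grad, eps=1e-8):
    """Weight term of Eq. (4): each point's gradient norm divided by the
    largest gradient norm in the cloud, so the weights fall in [0, 1]."""
    norms = grad.norm(dim=1)            # ||grad_{p'_i} L||, shape (N,)
    return norms / (norms.max() + eps)  # normalize by the max norm (+ eps)
```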
Adaptive Step Size for Balanced Attack Performance. After applying the weight term, since the perturbation magnitude of each point is relatively constrained, selecting an appropriate step size becomes a critical factor that affects both the success and imperceptibility of the attack. An overly large step size may result in noticeable visual distortions, while an excessively small one may fail to induce misclassification. To address this, we propose a min-first adaptive step size adjustment strategy. For each input point cloud sample, the strategy is based on the intuition that, under successful attacks, smaller step sizes generally lead to better imperceptibility. Therefore, we first attempt the smallest possible step size; if the attack succeeds, we return immediately with the optimal step size, denoted η_best. Otherwise, we proceed with a modified binary search to find the optimal step size within a predefined range. During this process, we aim to identify the smallest step size that still ensures a successful attack and maintains imperceptibility, which is then recorded as η_best. This approach enables us to maintain high imperceptibility while improving attack efficiency. However, if no suitable step size leads to a successful attack after multiple iterations, the sample is marked as a failure. In summary, the mechanism for generating adversarial samples in our WAAttack framework can be formally expressed as:

p′_i^{t+1} = p′_i^t − η_best · ω_i^t · n.   (5)
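The following sketch combines the weighted update of Eq. (5) with the min-first step-size search; it reuses cw_margin_loss and point_weights from the earlier sketches, the bisection details are our reading of the "modified binary search", and the hyperparameter defaults follow Sec. 5.1.

```python
import torch

def waattack(model, P, y, eta_min=0.007, eta_max=0.07, steps=50, max_bisect=5):
    """Sketch of WAAttack: the weighted update of Eq. (5) wrapped in the
    min-first adaptive step-size search. Returns (P_adv, eta_best) on
    success and (None, None) if the sample is marked as a failure."""
    def run(eta):
        P_adv = P.clone().detach()
        for _ in range(steps):
            P_adv.requires_grad_(True)
            loss = cw_margin_loss(model(P_adv.unsqueeze(0)), y.view(1))
            grad = torch.autograd.grad(loss, P_adv)[0]
            n = grad / (grad.norm(dim=1, keepdim=True) + 1e-12)
            w = point_weights(grad)                               # Eq. (4)
            P_adv = (P_adv - eta * w.unsqueeze(1) * n).detach()   # Eq. (5)
            if model(P_adv.unsqueeze(0)).argmax(1).item() != y.item():
                return P_adv                                      # attack succeeded
        return None

    adv = run(eta_min)                   # min-first: try the smallest step
    if adv is not None:
        return adv, eta_min
    lo, hi, best = eta_min, eta_max, None
    for _ in range(max_bisect):          # modified binary search over eta
        mid = 0.5 * (lo + hi)
        adv = run(mid)
        if adv is not None:
            best = (adv, mid)            # success: remember, then try smaller
            hi = mid
        else:
            lo = mid                     # failure: a larger step is needed
    return best if best is not None else (None, None)
```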
Figure 2: Demonstration of the framework of WAAttack and SubAttack. Given an input point cloud, we first partition it into sub-point clouds using a hash function. Adversarial examples are then generated via a weighted iterative attack on both the original and sub-point clouds, and the best one is selected based on a comprehensive distance metric.

4.2 Point Cloud Dividing for Enhanced Attack Performance (SubAttack)

To further enhance the quality of adversarial point clouds, we design a sub-point-cloud-based attack method called SubAttack. This section elaborates on the specific details of this method.

Partition a Point Cloud into Multiple Disjoint Sub-Point Clouds. Traditional gradient-based white-box attacks compute gradients over the entire point cloud, but in practice, many points have little impact on the model output. Perturbing such points rarely changes the prediction, and thus only a few critical points contribute effectively to the attack. Inspired by this, we propose to divide the point cloud into multiple sub-point clouds and focus perturbations on key subsets, enabling more structured and imperceptible adversarial modifications. Specifically, given the original point cloud P, we partition it into N disjoint subsets of equal size, ensuring fair evaluation of each point's importance. To achieve uniformity within sub-point clouds and randomness across them, we adopt a hashing-based method similar to [50]. For each point p_i = (x, y, z), we convert its coordinates into strings and concatenate them:

s_i = C(g(x), g(y), g(z)),   (6)

where g(·) converts numerical values to strings and C(·) performs concatenation. We then compute the hash value of s_i and assign the point to a sub-point cloud using modulo N:

k = h(s_i) mod N.   (7)

This ensures that all points are randomly and uniformly distributed into N mutually exclusive sub-point clouds.
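The partition of Eqs. (6)-(7) is easy to express directly; in the sketch below, md5 is an illustrative stand-in for the hash h(·), and f-string formatting plays the role of g(·) and C(·).

```python
import hashlib

def partition_point_cloud(points, N):
    """Eqs. (6)-(7): stringify each point's coordinates, hash the string, and
    assign the point to sub-cloud h(s_i) mod N. Returns N disjoint index lists
    whose sizes are approximately equal (hashing is only uniform on average)."""
    subsets = [[] for _ in range(N)]
    for idx, (x, y, z) in enumerate(points):
        s = f"{x}{y}{z}"                                      # s_i = C(g(x), g(y), g(z))
        k = int(hashlib.md5(s.encode()).hexdigest(), 16) % N  # k = h(s_i) mod N
        subsets[k].append(idx)
    return subsets
```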
Selective Critical Sub-Point Cloud Attack. After partitioning the point cloud into sub-point clouds, we select those that significantly influence the overall classification as key sub-point clouds. Specifically, we classify each sub-point cloud using the model f(·) and compute the most frequently predicted class:

c_max = arg max_c Σ_{i=1}^{N} I(c_i = c),   (8)

where y is the true label of the original point cloud, c_i is the prediction for the i-th sub-point cloud, and I(·) is an indicator function. If c_max = y, the partitioning is considered effective. To efficiently determine a suitable number of sub-point clouds N, we employ binary search within a predefined range. Specifically, the search starts from N_min = 1 and is upper-bounded by N_max, which is set to a small value (e.g., 4). This setting strikes a balance between the size of the search space and computational efficiency. We then perform attacks only on the key sub-point clouds whose predictions match the original label. To achieve selective updating, we define a binary mask M = [m_1, m_2, ..., m_n], where n is the total number of points in the original point cloud. Each element m_j is defined as:
• m_j = 1, if the corresponding point belongs to any key sub-point cloud;
• m_j = 0, otherwise.
This mask ensures that only points in key sub-point clouds are updated during adversarial generation. Given N key sub-point clouds, we generate 2^N − 1 adversarial samples by considering all non-empty combinations, plus one from attacking the full point cloud, resulting in 2^N candidate adversarial point clouds in total, denoted P′_can. We then select the best adversarial point cloud through a comprehensive distance metric; the specific selection method is explained next.

Selection of Optimal Adversarial Point Cloud. To select the best adversarial sample P′_best from the candidate set P′_can, we evaluate a weighted distance metric between each candidate and the original point cloud. Specifically, we define:

D(P, P′_can,j) = λ1 · D_c + λ2 · D_h + λ3 · D_l,   (9)

where D_c, D_h, and D_l denote the Chamfer distance, Hausdorff distance, and L2-norm distance, respectively, and λ1, λ2, λ3 are the corresponding weights. The optimal adversarial point cloud is then selected as the one minimizing this metric:

P′_best = arg min_{P′_can,j ∈ P′_can} D(P, P′_can,j).   (10)

In summary, the mechanism for generating adversarial samples in our SubAttack framework can be formally expressed as:

p′_i^{t+1} = p′_i^t − η_best · ω_i^t · n ⊙ m_i.   (11)
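To make Eqs. (9)-(11) concrete, here is a sketch of the candidate enumeration and selection; the distance functions are assumed to be supplied by the caller, and the default weights are the lambda values reported in Sec. 5.1.

```python
from itertools import combinations

def key_subset_masks(key_subsets, n_points):
    """Yield binary masks M for all 2^N - 1 non-empty combinations of the N
    key sub-point clouds; attacking the full cloud adds the 2^N-th candidate."""
    for r in range(1, len(key_subsets) + 1):
        for combo in combinations(key_subsets, r):
            mask = [0] * n_points
            for subset in combo:
                for j in subset:
                    mask[j] = 1          # m_j = 1 for points in key sub-clouds
            yield mask

def select_best(P, candidates, d_c, d_h, d_l, lambdas=(100.0, 100.0, 10.0)):
    """Eqs. (9)-(10): pick the candidate minimizing the weighted sum of
    Chamfer (d_c), Hausdorff (d_h), and L2 (d_l) distances to the original."""
    l1, l2, l3 = lambdas
    return min(candidates,
               key=lambda P_adv: l1 * d_c(P, P_adv)
                                 + l2 * d_h(P, P_adv)
                                 + l3 * d_l(P, P_adv))
```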
5 Experiments

5.1 Experimental Setup

Implementation. In our attack, we set the iteration limit to T = 50, the perturbation budget to ϵ = 0.16, and the sub-point cloud limits to N_min = 1 and N_max = 4. The comprehensive distance metric hyperparameters are configured as λ1 = 100, λ2 = 100, and λ3 = 10. We adopt a step size η ∈ [0.007, 0.07] with the min-first adaptive step size adjustment strategy, using a binary search of up to binary_max_step = 5 steps to determine the optimal η for each point cloud. All experiments were conducted on a device equipped with four NVIDIA A100 GPUs.

Datasets. We use the public ModelNet40 [51] dataset to evaluate the performance of our attack method. The ModelNet40 dataset contains 12,311 CAD models from 40 object categories, with 9,843 models used for training and 2,468 models used for testing. Similar to previous attack methods, we uniformly sample 1,024 points from each point cloud and normalize them into a unit cube. Additionally, we apply the same preprocessing method as in [14] to all point clouds in the training set.

Victim Models. We selected four widely used point cloud classification models as the victim models for our attack, namely PointNet [18], PointNet++ [19], DGCNN [20], and CurveNet [21]. By conducting attack experiments on these representative models, we can comprehensively evaluate the effectiveness and robustness of our attack method.

Baselines. To verify the performance of our attack method, we selected seven classic gradient-based perturbation attacks: 3D-Adv [7], JGBA [22], GeoA3 [11], GSDA [13], SI-Adv [14], PF-Attack [17], and CIM [15]. Additionally, we included a gradient-based shape transformation attack, HiT-Adv [16]. These eight methods were used as benchmark approaches for comparison.

Evaluation Metrics. We use five common metrics: Attack Success Rate (ASR), Chamfer distance [52], Hausdorff distance [53], L2-norm distance, and average time cost (A.T).

5.2 Performance Evaluation

Imperceptibility Assessment. To evaluate the high imperceptibility of the adversarial point clouds generated by our method and to ensure a fair comparison with baseline approaches, we set the step size search range to [0.007, 0.07], consistent with that used in other methods. This setting highlights the effectiveness of the weight term in our attack strategy. The results are summarized in Table 1. We compare all attack methods in terms of the perturbation magnitude required to achieve an attack success rate (ASR) of 100%. As shown in the table, the perturbations produced by baseline methods typically fall within the range of 10^−4 for Dc and 10^−2 for Dh. In contrast, our proposed method, WAAttack, achieves significantly smaller perturbations, at the levels of 10^−5 for Dc and 10^−3 for Dh. Moreover, our method demonstrates substantial improvements on the Dl metric, achieving nearly the best performance across all evaluation criteria. Furthermore, by incorporating sub-point cloud partitioning, our SubAttack variant further enhances the quality of the adversarial samples, demonstrating both the effectiveness and stealthiness of our overall attack framework.

Table 1: Comparison of the perturbation sizes required by different methods to reach their highest achievable ASR (100%) when attacking PointNet, PointNet++, DGCNN, and CurveNet. For each victim model, columns give Dc↓ (10⁻⁴), Dh↓ (10⁻²), Dl↓, and A.T↓ (s).

| Method | PointNet Dc | Dh | Dl | A.T | PointNet++ Dc | Dh | Dl | A.T | DGCNN Dc | Dh | Dl | A.T | CurveNet Dc | Dh | Dl | A.T |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3D-Adv[7] | 4.16 | 2.44 | 0.8816 | 7.34 | 7.45 | 2.42 | 1.1100 | 44.75 | 11.11 | 2.51 | 1.4021 | 21.89 | 7.86 | 3.62 | 1.2977 | 125.68 |
| JGBA[22] | 32.78 | 5.77 | 2.4388 | 5.45 | 31.91 | 3.89 | 2.6087 | 20.19 | 27.21 | 3.88 | 2.6089 | 5.55 | 34.80 | 4.08 | 3.0634 | 29.80 |
| GeoA3[11] | 3.39 | 0.38 | 2.4973 | 18.43 | 6.65 | 0.48 | 2.1677 | 28.29 | 7.96 | 0.58 | 3.4260 | 29.63 | 10.76 | 0.98 | 5.0409 | 494.10 |
| GSDA[13] | 2.94 | 0.56 | 2.3260 | 19.33 | 3.48 | 0.58 | 2.0193 | 35.24 | 4.30 | 0.60 | 2.9670 | 30.48 | 7.75 | 1.11 | 4.2541 | 581.76 |
| SI-Adv[14] | 2.36 | 2.29 | 0.6461 | 1.20 | 7.38 | 1.91 | 1.1654 | 9.57 | 6.87 | 1.38 | 1.0743 | 1.14 | 6.25 | 1.99 | 1.0090 | 10.96 |
| PF-Attack[17] | 25.10 | 2.01 | 2.7172 | 14.06 | 22.50 | 1.98 | 2.4774 | 12.94 | 25.49 | 2.08 | 2.6716 | 13.25 | 26.76 | 2.32 | 2.7807 | 166.50 |
| CIM[15] | 2.42 | 2.03 | 0.6507 | 1.31 | 6.61 | 1.76 | 1.0550 | 7.21 | 6.75 | 1.50 | 1.0576 | 1.09 | 6.59 | 2.10 | 1.0597 | 9.81 |
| HiT-Adv[16] | 13.77 | 0.98 | 1.3404 | 3.89 | 15.31 | 1.16 | 1.4015 | 9.61 | 17.86 | 1.90 | 1.6268 | 5.21 | 20.66 | 2.09 | 1.6982 | 10.90 |
| WAAttack | 0.43 | 0.58 | 0.1881 | 0.67 | 0.37 | 0.16 | 0.1635 | 8.79 | 0.51 | 0.18 | 0.1874 | 0.61 | 0.79 | 0.43 | 0.2321 | 10.10 |
| SubAttack | 0.41 | 0.56 | 0.1854 | 8.69 | 0.33 | 0.14 | 0.1564 | 68.09 | 0.46 | 0.17 | 0.1793 | 3.10 | 0.69 | 0.40 | 0.2196 | 57.52 |

Figure 3: Visualization of original and adversarial point clouds generated by different adversarial attack methods for attacking DGCNN.

Visualization Analysis. To gain a more intuitive understanding of the imperceptibility of the adversarial point clouds generated by our method, we visualize adversarial samples produced by different attack methods on the same original input. The results are shown in Figure 3. Compared to baseline methods, our approach generates adversarial point clouds with fewer abnormal values and a more uniform distribution. This is achieved through the use of a weight term, an adaptive step size mechanism, and sub-point-cloud-based optimization, leading to improved imperceptibility without noticeable structural distortions.
Attack Robustness Against Defenses. To evaluate the robustness of our attack method against various defense mechanisms, we selected three commonly used defense techniques: SOR, SRS, and DUP-Net [54]. Experiments were conducted in a pure white-box setting, where adversarial samples were generated on models enhanced with these defenses, and the Attack Success Rate (ASR) was evaluated under the same conditions. As shown in Table 2, our method achieves an ASR of 100% across all three defense settings, outperforming all baseline attack methods. This superior performance is primarily attributed to the adaptive step size mechanism, which dynamically adjusts the step size based on the characteristics of both the input data and the target model, enabling more effective adversarial sample generation under diverse defensive strategies.

Table 2: Comparison results of ASR (%) ↑ for different attack methods on PointNet, with and without applied defense methods.

| Defense | 3D-Adv[7] | JGBA[22] | GeoA3[11] | GSDA[13] | SI-Adv[14] | PF-Attack[17] | CIM[15] | HiT-Adv[16] | Ours |
|---|---|---|---|---|---|---|---|---|---|
| - | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| SOR | 82.1 | 93.5 | 96.2 | 99.8 | 97.4 | 90.0 | 99.5 | 98.9 | 100.0 |
| SRS | 90.7 | 99.9 | 95.5 | 98.4 | 93.5 | 91.6 | 96.3 | 99.0 | 100.0 |
| DUP-Net | 87.7 | 96.1 | 92.1 | 94.3 | 95.8 | 88.2 | 98.4 | 89.2 | 100.0 |

Comparative ASR under Constrained Perturbation. To further evaluate the effectiveness of our attack method, we compare the attack success rate (ASR) of different attack approaches on various point cloud classification models under a constrained perturbation budget (Dc < 0.00005), as shown in Figure 4. The results demonstrate that our method achieves remarkably high ASR across all evaluated models. Specifically, it attains attack success rates of 76.4%, 89.0%, 91.8%, and 77.9% on PointNet, PointNet++, DGCNN, and CurveNet, respectively. On average, our method achieves an ASR of 84.2%, significantly outperforming existing baseline methods under the same perturbation constraint. This superior performance is attributed to the proposed weight term and adaptive step size adjustment strategy, which jointly ensure that the generated adversarial samples are highly imperceptible.

Figure 4: Visualization of the comparison on the ASR (%) of different attack methods under perturbation constraints.

5.3 Ablation Studies and Other Analysis

The Effect of Different Weighting Factors. Given that the numerator of the weighting term is directly derived from the gradient magnitude and consistently reflects point importance across all variants, we focus on investigating the impact of different denominator designs on attack performance. As shown in Figure 5, we compare our method with two alternative approaches: one based on the L1 norm (‖∇L_loss(P′_i^t)‖_1) and the other based on the L2 norm (‖∇L_loss(P′_i^t)‖_2), while our method adopts the max norm (‖∇L_loss(P′_i^t)‖_max). Experimental results demonstrate that our method achieves the best performance across all three perceptibility evaluation metrics.

Figure 5: The impact of different denominator designs in the weighting term on attack performance. Victim model: PointNet.

Table 3: Results of the ablation experiment on weights and sub-point cloud division. Victim model: PointNet.

| Weights | Division | ASR↑(%) | Dc↓(10⁻⁴) | Dh↓(10⁻²) | Dl↓ |
|---|---|---|---|---|---|
| × | × | 100.0 | 4.24 | 2.58 | 0.8969 |
| × | ✓ | 100.0 | 0.84 | 1.09 | 0.3125 |
| ✓ | × | 100.0 | 0.43 | 0.58 | 0.1881 |
| ✓ | ✓ | 100.0 | 0.41 | 0.56 | 0.1854 |

The Effect of Weighting Factor and Division. To investigate the impact of the weight factor and sub-point cloud dividing on attack performance, we removed certain operations and observed the changes in performance. The results are shown in Table 3. From the table, it can be seen that the presence or absence of the weight term has the greatest impact on performance: the weight term significantly enhances performance, while sub-point cloud dividing also improves performance to some extent. By combining these two mechanisms, our adversarial samples maintain higher imperceptibility.

The Compatibility of SubAttack. To evaluate the compatibility of our sub-point cloud division-based attack, we apply it to six baseline adversarial attack methods: 3D-Adv, JGBA, GeoA, GSDA, SI-Adv, and CIM. The results are presented in Table 4. By leveraging sub-point cloud division to select key sub-regions, these methods achieve significantly improved imperceptibility in terms of perturbation magnitude and distribution, while maintaining high attack success rates. This demonstrates that our sub-point cloud division strategy not only preserves strong adversarial effectiveness, but also enhances the stealthiness of generated perturbations, thereby making the proposed method broadly applicable across various 3D adversarial attack frameworks.

Table 4: Comparison of attack and imperceptibility performance of different methods, with and without applying our SubAttack. Victim model: PointNet.

| Attack | ASR↑(%) | Dc↓(10⁻⁴) | Dh↓(10⁻²) | Dl↓ |
|---|---|---|---|---|
| 3D-Adv | 100.0 | 4.16 | 2.44 | 0.8816 |
| Sub-3D-Adv | 100.0 | 1.67 | 2.01 | 0.5153 |
| JGBA | 100.0 | 32.78 | 5.77 | 2.4388 |
| Sub-JGBA | 100.0 | 16.45 | 5.04 | 1.7718 |
| GeoA | 100.0 | 3.39 | 0.38 | 2.4973 |
| Sub-GeoA | 100.0 | 3.02 | 0.35 | 2.1141 |
| GSDA | 100.0 | 2.94 | 0.56 | 2.3260 |
| Sub-GSDA | 100.0 | 2.78 | 0.49 | 1.9368 |
| SI-Adv | 100.0 | 2.36 | 2.29 | 0.6461 |
| Sub-SI-Adv | 100.0 | 1.73 | 2.03 | 0.5659 |
| CIM | 100.0 | 2.42 | 2.03 | 0.6507 |
| Sub-CIM | 100.0 | 1.77 | 1.60 | 0.5200 |

6 Conclusion

In this paper, we revisit the mechanism of generating adversarial samples using gradient-based white-box attack methods and propose the WAAttack framework. Unlike previous approaches that focus on introducing complex regularization terms and finding optimal perturbation directions, WAAttack is based on the observation that the contribution of gradients to adversarial point clouds is not uniform. It assigns different weights to the gradients of different points through a weighting term and incorporates an adaptive step size adjustment mechanism to balance attack success rate and perceptibility. Additionally, we propose SubAttack, a sub-point-cloud-based attack method, to further improve performance and demonstrate its generalizability. Extensive experimental results confirm the high imperceptibility of the adversarial samples generated by our approach. However, the use of binary search to find the optimal step size for each point cloud and the combinatorial optimization attack after selecting key sub-point clouds are relatively time-consuming. Future research can therefore focus on developing more efficient step size adjustment mechanisms and strategies for selecting the most critical points from key sub-point clouds, aiming to further enhance the stealthiness and efficiency of attacks.

References

[1] Charles R Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J Guibas. Frustum pointnets for 3d object detection from rgb-d data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 918–927, 2018.
[2] Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4490–4499, 2018. [3] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li,
and Tian Xia. Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1907–1915, 2017. [4] Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In 2017 IEEE international conference on robotics and automation (ICRA), pages 3357–3364. IEEE, 2017. [5] Loic Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4558–4567, 2018. [6] Minhyuk Sung, Hao Su, Vladimir G Kim, Siddhartha Chaudhuri, and Leonidas Guibas. Complementme: Weakly-supervised component suggestions for 3d modeling. ACM Transactions on Graphics (TOG), 36(6):1–12, 2017. [7] Chong Xiang, Charles R Qi, and Bo Li. Generating 3d adversarial point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9136–9144, 2019. [8] Matthew Wicker and Marta Kwiatkowska. Robustness of 3d deep learning in an adversarial setting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11767–11775, 2019. [9] Daniel Liu, Ronald Yu, and Hao Su. Extending adversarial attacks and defenses to deep 3d point cloud classifiers. In 2019 IEEE International Conference on Image Processing (ICIP), pages 2279–2283. IEEE, 2019. [10] Tianhang Zheng, Changyou Chen, Junsong Yuan, Bo Li, and Kui Ren. Pointcloud saliency maps. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1598–1606, 2019. [11] Yuxin Wen, Jiehong Lin, Ke Chen, CL Philip Chen, and Kui Jia. Geometry-aware generation of adversarial point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6):2984–2999, 2020. [12] Tzungyu Tsai, Kaichen Yang, Tsung-Yi Ho, and Yier Jin. Robust adversarial objects against deep learning models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 954–962, 2020. [13] Qianjiang Hu, Daizong Liu, and Wei Hu. Exploring the devil in graph spectral domain for 3d point cloud attacks. In European Conference on Computer Vision, pages 229–248. Springer, 2022. [14] Qidong Huang, Xiaoyi Dong, Dongdong Chen, Hang Zhou, Weiming Zhang, and Nenghai Yu. Shape-invariant 3d adversarial point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 15335–15344, 2022. [15] Jianping Zhang, Wenwei Gu, Yizhan Huang, Zhihan Jiang, Weibin Wu, and Michael R Lyu. Curvature-invariant adversarial attacks for 3d point clouds. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 7142–7150, 2024. [16] Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, and Xiaochun Cao. Hide in thicket: Generating imperceptible and rational adversarial perturbations on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24326–24335, 2024. [17] Bangyan He, Jian Liu, Yiming Li, Siyuan Liang, Jingzhi Li, Xiaojun Jia, and Xiaochun Cao. Generating transferable 3d adversarial point cloud via random perturbation factorization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 764–772, 2023. [18] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep
learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652–660, 2017. [19] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30, 2017. [20] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG), 38(5):1–12, 2019. [21] Tiange Xiang, Chaoyi Zhang, Yang Song, Jianhui Yu, and Weidong Cai. Walk in the cloud: Learning curves for point clouds shape analysis. In Proceedings of the IEEE/CVF international conference on computer vision, pages 915–924, 2021. [22] Chengcheng Ma, Weiliang Meng, Baoyuan Wu, Shibiao Xu, and Xiaopeng Zhang. Efficient joint gradient based attack against sor defense for 3d point cloud classification. In Proceedings of the 28th ACM International Conference on Multimedia, pages 1819–1827, 2020. [23] Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. Pointcnn: Convolution on x-transformed points. Advances in neural information processing systems, 31, 2018. [24] Wenxuan Wu, Zhongang Qi, and Li Fuxin. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, pages 9621–9630, 2019. [25] Hugues Thomas, Charles R Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J Guibas. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF international conference on computer vision, pages 6411–6420, 2019. [26] Artem Komarichev, Zichun Zhong, and Jing Hua. A-cnn: Annularly convolutional neural networks on point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7421–7430, 2019. [27] Mutian Xu, Runyu Ding, Hengshuang Zhao, and Xiaojuan Qi. Paconv: Position adaptive convolution with dynamic kernel assembling on point clouds. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3173–3182, 2021. [28] Weijing Shi and Raj Rajkumar. Point-gnn: Graph neural network for 3d object detection in a point cloud. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1711–1719, 2020. [29] Yuliang Sun, Yongwei Miao, Jiazhou Chen, and Renato Pajarola. Pgcnet: patch graph convolutional network for point cloud segmentation of indoor scenes. The Visual Computer, 36(10):2407–2418, 2020. [30] Hengshuang Zhao, Li Jiang, Chi-Wing Fu, and Jiaya Jia. Pointweb: Enhancing local neighborhood features for point cloud processing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5565–5573, 2019. [31] Xiang Gao, Wei Hu, and Guo-Jun Qi. Graphter: Unsupervised learning of graph transformation equivariant representations via auto-encoding node-wise transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7163–7172, 2020. [32] Yiru Shen, Chen Feng, Yaoqing Yang, and Dong Tian. Mining point cloud local structures by kernel correlation and graph pooling. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4548–4557, 2018. [33] Abdullah Hamdi, Sara Rojas,
Ali Thabet, and Bernard Ghanem. Advpc: Transferable adversarial perturbations on 3d point clouds. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII 16, pages 241–257. Springer, 2020. [34] Binbin Liu, Jinlai Zhang, and Jihong Zhu. Boosting 3d adversarial attacks with attacking on frequency. IEEE Access, 10:50974–50984, 2022. [35] Jinlai Zhang, Yinpeng Dong, Jun Zhu, Jihong Zhu, Minchi Kuang, and Xiaming Yuan. Improving transferability of 3d adversarial attacks with scale and shear transformations. Information Sciences, 662:120245, 2024. [36] Xiaowen Cai, Yunbo Tao, Daizong Liu, Pan Zhou, Xiaoye Qu, Jianfeng Dong, Keke Tang, and Lichao Sun. Frequency-aware gan for imperceptible transfer attack on 3d point clouds. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 6162–6171, 2024. [37] Daizong Liu and Wei Hu. Imperceptible transfer attack and defense on 3d point cloud classification. IEEE transactions on pattern analysis and machine intelligence, 45(4):4727–4746, 2022. [38] Yunbo Tao, Daizong Liu, Pan Zhou, Yulai Xie, Wei Du, and Wei Hu. 3dhacker: Spectrum-based decision boundary generation for hard-label 3d point cloud attack. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14340–14350, 2023. [39] Daizong Liu, Wei Hu, and Xin Li. Point cloud attacks in graph spectral domain: When 3d geometry meets graph signal processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(5):3079–3095, 2023. [40] Daizong Liu and Wei Hu. Explicitly perceiving and preserving the local geometric structures for 3d point cloud attack. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 3576–3584, 2024. [41] Keke Tang, Zhensu Wang, Weilong Peng, Lujie Huang, Le Wang, Peican Zhu, Wenping Wang, and Zhihong Tian. Symattack: symmetry-aware imperceptible adversarial attacks on 3d point clouds. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 3131–3140, 2024. [42] Keke Tang, Xu He, Weilong Peng, Jianpeng Wu, Yawen Shi, Daizong Liu, Pan Zhou, Wenping Wang, and Zhihong Tian. Manifold constraints for imperceptible adversarial attacks on point clouds. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 5127–5135, 2024. [43] Keke Tang, Ziyong Du, Weilong Peng, Xiaofei Wang, Daizong Liu, Ligang Liu, and Zhihong Tian. Imperceptible 3d point cloud attacks on lattice-based barycentric coordinates. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 20814–20822, 2025. [44] Zezeng Li, Xiaoyu Du, Na Lei, Liming Chen, and Weimin Wang. Nopain: No-box point cloud attack via optimal transport singular boundary. arXiv preprint arXiv:2503.00063, 2025. [45] Daniel Liu, Ronald Yu, and Hao Su. Adversarial shape perturbations on 3d point clouds. In Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, pages 88–104. Springer, 2020. [46] Yinpeng Dong, Jun Zhu, Xiao-Shan Gao, et al. Isometric 3d adversarial examples in the physical world. Advances in Neural Information Processing Systems, 35:19716–19731, 2022. [47] Keke Tang, Jianpeng Wu, Weilong Peng, Yawen Shi, Peng Song, Zhaoquan Gu, Zhihong Tian, and Wenping Wang. Deep manifold attack on point clouds via parameter plane stretching. In Proceedings of the AAAI Conference on Artificial
Intelligence, volume 37, pages 2420–2428, 2023. [48] Jinlai Zhang, Lyujie Chen, Binbin Liu, Bo Ouyang, Qizhi Xie, Jihong Zhu, Weiming Li, and Yanmei Meng. 3d adversarial attacks beyond point cloud. Information Sciences, 633:491–503, 2023. [49] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy (SP), pages 39–57. IEEE, 2017. [50] Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, and Neil Zhenqiang Gong. Pointcert: Point cloud classification with deterministic certified robustness guarantees. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9496–9505, 2023. [51] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912–1920, 2015. [52] Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 605–613, 2017. [53] Abdel Aziz Taha and Allan Hanbury. Metrics for evaluating 3d medical image segmentation: analysis, selection, and tool. BMC medical imaging, 15:1–28, 2015. [54] Hang Zhou, Kejiang Chen, Weiming Zhang, Han Fang, Wenbo Zhou, and Nenghai Yu. Dup-net: Denoiser and upsampler network for 3d adversarial point clouds defense. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1961–1970, 2019.

Appendix

This section provides additional materials in support of the main text. It is organized into five parts, as detailed below.

A Effect of Maximum Number of Sub-Point Clouds Nmax

We analyzed the impact of the maximum number of sub-point cloud divisions Nmax on performance by comparing results for different values of Nmax. The results are shown in Figure 6. As can be seen, Nmax = 4 achieves nearly optimal performance; larger values of Nmax increase the optimization search space, requiring more time and reducing efficiency. Therefore, we ultimately chose Nmax = 4 for all experiments.

Figure 6: Analysis of the maximum number of sub-point clouds Nmax. Victim model: PointNet.

Figure 7: Investigation of different division methods. Victim model: PointNet.

B Effect of Sub-Point Cloud Division Method

As shown in Figure 7, we compare the performance of two partitioning methods: random partitioning and hash function-based partitioning. The results indicate that our method is not highly sensitive to the partitioning strategy, but hash function-based partitioning achieves slightly better performance than random partitioning, with nearly identical computational speed. Therefore, we adopt the hash function for partitioning sub-point clouds in our approach.

Table 5: Comparison of attack performance with and without the adaptive step size strategy across victim models.

| Adaptive Step Size | PointNet ASR↑(%) | A.T↓(s) | PointNet++ ASR↑(%) | A.T↓(s) | DGCNN ASR↑(%) | A.T↓(s) | CurveNet ASR↑(%) | A.T↓(s) |
|---|---|---|---|---|---|---|---|---|
| × | 99.3 | 0.62 | 94.0 | 7.02 | 99.9 | 0.61 | 94.9 | 8.29 |
| ✓ | 100.0 | 0.67 | 100.0 | 8.79 | 100.0 | 0.61 | 100.0 | 10.10 |

C Effect of Min-first Adaptive Step Size Adjustment Strategy

This section evaluates the effectiveness of the proposed min-first adaptive step size adjustment strategy in improving attack success rate (ASR) and computational efficiency (A.T.).
As shown in Table 5, the strategy consistently boosts ASR to 100% across all victim models (PointNet, PointNet++, DGCNN, CurveNet), while maintaining low attack time. Moreover, Table 6 presents the performance under various defense mechanisms using PointNet. Without the adaptive step size, ASR drops significantly; however, with the proposed strategy, the attack achieves a 100% success rate across all tested defenses, demonstrating its strong robustness and adaptability.

Table 6: Robustness against defense mechanisms with and without the adaptive step size strategy. Victim model: PointNet.

| Defense | w/o adaptive: ASR↑(%) | A.T↓(s) | w/ adaptive: ASR↑(%) | A.T↓(s) |
|---|---|---|---|---|
| - | 99.3 | 0.62 | 100.0 | 0.67 |
| SOR | 76.3 | 0.73 | 100.0 | 1.53 |
| SRS | 92.6 | 0.72 | 100.0 | 0.75 |
| DUP-Net | 65.5 | 19.13 | 100.0 | 36.71 |

D Analysis of Optimal Step Sizes Across Different Architectures

As shown in Figure 8, we visualize the distribution of optimal step sizes when attacking different model architectures. The horizontal axis represents the optimal step size intervals, and the vertical axis indicates the number of instances falling into each interval. It can be observed that the distributions of optimal step sizes vary significantly across models. For instance, in the step size interval [0.007, 0.010), PointNet, PointNet++, DGCNN, and CurveNet all exhibit relatively high frequencies. However, as the step size intervals increase, the distributions demonstrate distinct trends among the different models. This suggests that a fixed perturbation budget may not yield effective attack performance across diverse architectures. These observations further validate the necessity of our adaptive step size selection method, which dynamically adjusts the perturbation magnitude according to the characteristics of the target model, thereby enhancing attack effectiveness.

Figure 8: Comparison of optimal step size distributions across different point cloud models.

E Additional Visualizations

To further investigate the effectiveness and characteristics of various adversarial attack methods, we supplement with visual comparisons of adversarial point clouds generated by different strategies on three additional models: PointNet, PointNet++, and CurveNet. The results are presented in Figures 9, 10, and 11, respectively. It can be observed that 3D-Adv, due to the lack of a weighting term and the absence of additional complex regularization constraints, generates relatively scattered adversarial point clouds with numerous outliers. In contrast, GeoA3 enhances imperceptibility by introducing a curvature consistency loss. JGBA and PF-Attack reduce perceptibility by incorporating losses over adversarial clouds processed through SOR defense mechanisms and by requiring sub-point clouds to also be adversarial, thus generating adversarial clouds farther from the decision boundary. GSDA achieves superior performance by moving the perturbation to the spectral domain. SI-Adv and CIM improve performance by changing the perturbation direction, though they require larger perturbations due to deviations from gradient directions, leading to more outliers. HiT-Adv, based on shape transformation, produces adversarial clouds that, despite not having obvious outliers, still noticeably alter the shape relative to the original point cloud. Our proposed method, leveraging a weighting mechanism and optimization attacks based on sub-point clouds, generates adversarial samples with minimal outliers and uniform point distribution, thereby achieving superior imperceptibility.
Figure 9: Visualization of original and adversarial point clouds generated by different adversarial attack methods for attacking PointNet.
Figure 10: Visualization of original and adversarial point clouds generated
|
https://arxiv.org/abs/2505.21854v1
|
Extracting Research Instruments from Educational Literature Using LLMs

Jiseung Yoo1, Curran Mahowald2, Meiyu Li3, and Wei Ai3
1College of Education, University of Maryland, College Park, USA
2Annenberg Institute, Brown University, USA, curran_mahowald@brown.edu
3College of Information, University of Maryland, College Park, USA, {jyoo20, ml0521, aiwei}@umd.edu

Abstract. Large Language Models (LLMs) are transforming information extraction from academic literature, offering new possibilities for knowledge management. This study presents an LLM-based system designed to extract detailed information about research instruments used in the education field, including their names, types, target respondents, measured constructs, and outcomes. Using multi-step prompting and a domain-specific data schema, it generates structured outputs optimized for educational research. Our evaluation shows that this system significantly outperforms other approaches, particularly in identifying instrument names and detailed information. This demonstrates the potential of LLM-powered information extraction in educational contexts, offering a systematic way to organize research instrument information. The ability to aggregate such information at scale enhances accessibility for researchers and education leaders, facilitating informed decision-making in educational research and policy.

Keywords: Research Instruments · Large Language Models · Information Extraction · Prompt Engineering · Automated Literature Review

1 Introduction

Identifying research instruments in the educational literature is crucial to synthesizing findings and ensuring replicability between studies. Research instruments are tools to collect, measure, and analyze empirical evidence corresponding with the research purpose and questions [1,2]. In education, effective measurement tools are essential for accurately capturing data on abstract concepts, from student learning outcomes to school climate. These instruments allow researchers to collect standardized data across diverse populations and settings, enabling meaningful comparisons [3]. In addition, well-documented instruments bridge research and practice, ensuring consistent use of evidence to inform decisions [5]. For this reason, establishing a structured knowledge database to organize these instruments would enhance research interpretation, support consensus-building within the academic and practitioner communities, and help users adapt instruments to their specific contexts. It would also empower researchers and educators to make informed decisions when selecting measurement tools by providing clear, accessible knowledge about research instruments.

The identification and management of research instrument information in education remain underexplored, highlighting the need for systematic approaches. Databases like ERIC rely on manual expert annotation with predefined rules to categorize instruments by type, topic, and validity [9,6]. Similarly, the Annenberg Institute's EdInstruments system [10] compiles existing research instrument data and integrates newly proposed tools from database users. However, given the continuous emergence of new instruments and the high cost of manual curation, a well-developed automated system is crucial. Traditional text analysis (e.g., n-grams) can be used for extraction but struggles with unstructured documents.
Rule-based and machine-learning models rely on predefined patterns and hand-crafted lexicons, making them inflexible to text variations and requiring frequent manual updates [7,8]. They also fail to capture deeper semantic connections, such as related concepts or implicit relationships. Recent advances in Large Language Models (LLMs) offer promising capabilities to address these issues, but naïve application often leads to inconsistent and unreliable results. Information extraction (IE) with LLMs involves automatically deriving structured information from unstructured text data, enabling machines to comprehend natural language text [11]. The two main IE tasks are Named Entity Recognition (NER) and Relation Extraction (RE). NER identifies and classifies entities such as names and domain-specific terms, while RE detects and categorizes relationships between them.
Although foundational LLMs and commercial platforms exhibit strong extraction and reasoning capabilities, they often struggle with hallucinations and domain specificity [12]. To mitigate these issues, studies have explored prompt engineering techniques and structured data schemas to enhance accuracy. Studies [13,14,15,16] show that specialized prompting—particularly iterative and multi-step methods—can significantly improve retrieval accuracy. Furthermore, studies [17,18,19,20] demonstrate that constraining responses to predefined domain-specific schemas, such as JSON-based extraction, ensures structured and reliable output.

Building on these efforts, this study proposes a structured three-step prompt design with a domain-specific schema to systematically extract research instruments from educational papers. Our approach leverages education-specific schemas and an iterative prompt design to capture context-rich information. Also, by incorporating existing instrument information within the education context as a dictionary and detecting the methods sections of research papers, our system accurately extracts instrument names and key relational details, including respondents, types, constructs, and outcomes. This work introduces a novel AI-assisted method for information extraction, enhancing accuracy and enabling large-scale analysis of educational research. The resulting knowledge repository provides researchers, schools, and district leaders with easy access to rich information on measurement tools, helping educators select appropriate instruments and generate high-quality evidence in their contexts.

2 Method

This study introduces a structured schema with multi-step, zero-shot prompting. The system identifies and links key entities, namely the instrument names, constructs, outcomes, respondents, and instrument type, within their original contexts. This data schema enables users to interpret which tools were used, what they measured, and the target in each study. Research instruments fall into five categories: surveys/questionnaires, interview protocols, observation protocols, tests/assessments, and other tools (e.g., checklists) [2]. Each type includes essential components: constructs define the concept being studied, outcomes represent measured results, and respondents provide participant context. For instance, in anxiety research, constructs (e.g., "emotional distress"), outcomes (e.g., "anxiety level"), respondents (e.g., "undergraduate students"), and instrument type (e.g., "Likert-scale questionnaire") collectively define the measurement process.

Fig. 1. Overview of the system pipeline.

2.1 Pipeline

The system operates in a three-step pipeline. In the first step, it detects the methods section of research papers by using a pre-trained PDF parser model [21], converting documents into a hierarchical JSON format. Since empirical social science research presents data collection and analysis in the methodology section, focusing on this section—rather than the full text—improves accuracy. The system identifies the start and end pages of the methods and results sections; if the methods section cannot be isolated, the full text is used. This detection process achieves 92% accuracy (n=150), confirming its reliability.
The extracted text is then split into 1000-token chunks to optimize processing efficiency with LLM APIs while preserving document structure. In the second step, the system employs multi-step prompting for NER to identify instrument names within the segmented text. Finally, in the third step, a RE prompt retrieves key details about the instruments, with the extraction results structured into a JSON format for downstream analysis. The next section provides a detailed explanation of the second and third steps.
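To illustrate the kind of structured record the pipeline emits, one instrument entry could look like the following Python sketch; the field names mirror the paper's data schema, and the values are taken from the CLASS example discussed in Sec. 3.

```python
# Hypothetical example of one structured instrument record, as the pipeline's
# JSON output might represent it (field names follow the schema in Sec. 2).
instrument_record = {
    "instrument_name": "CLASS (Classroom Assessment Scoring System)",
    "instrument_type": "Observation Protocol",   # one of the five categories
    "respondents": ["Students", "Teachers"],
    "constructs": ["Classroom Organization", "Preventive Discipline",
                   "Time Management"],
    "outcomes": ["Teacher Interaction"],
}
```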
2.2 Prompt Design

This study uses iterative prompts and targeted follow-up questions to enhance context understanding and filter more accurate responses [13,14]. The NER process for instrument extraction employs a three-step prompting approach. First, an extraction prompt instructs the model to retrieve specific information about the instruments used in the study while providing background knowledge on the research instrument concept (e.g., definition, purpose, and general usage). Next, a summarization prompt asks the model to explain how the study collects, measures, and analyzes data using these tools, ensuring a comprehensive understanding of their usage. Finally, a decision prompt consolidates and evaluates the outputs from the previous two prompts to generate a structured JSON output of key instruments. Once an instrument name is identified through NER, it is standardized using an instrument dictionary from Annenberg's EdInstruments list before proceeding to the next stage.

For the subsequent relation extraction (RE) task, the identified instrument names serve as anchors, with the data schema embedded in the prompts to ensure consistent formatting and contextual accuracy. Using OpenAI function calling, the system extracts detailed information about each instrument, including its associated constructs, measured outcomes, instrument type, and target respondents.
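A schematic sketch of how the three NER prompts could be chained is shown below; the prompt wording paraphrases the roles described above rather than reproducing the authors' prompts, and the client and model choices are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

def ask(instruction: str, text: str) -> str:
    """Single chat turn; the model name is an illustrative choice."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return resp.choices[0].message.content

def extract_instruments(chunk: str) -> str:
    """Three-step NER prompting: extraction -> summarization -> decision."""
    extracted = ask(
        "A research instrument is a tool used to collect, measure, or analyze "
        "data. List every instrument used in the study below.", chunk)
    summary = ask(
        "Explain how this study collects, measures, and analyzes data "
        "with its instruments.", chunk)
    return ask(
        "Consolidate the candidate instruments and the usage summary below "
        "into a JSON list of the key instruments actually used.",
        f"Candidates:\n{extracted}\n\nUsage summary:\n{summary}")
```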
Despite these efficiency gains, there is a trade-off between the recall and precision of multi-step prompting. As shown in Table 1, GPT-o1 achieved the highest F1-score (0.786) with a recall of 90%, effectively retrieving used instruments from research papers. However, its precision of 0.696 means that while the system identifies many instruments, some extractions are inaccurate or redundant.
Table 1. Performance of different LLMs with the multi-step prompt (extraction + summarization + decision) on method-section excerpts.

| Model | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|
| GPT-4o-mini | 0.472 | 0.508 | 0.901 | 0.619 |
| GPT-4o | 0.491 | 0.514 | 0.943 | 0.665 |
| GPT-o1 | 0.641 | 0.696 | 0.904 | 0.786 |
| Claude-sonnet | 0.615 | 0.644 | 0.929 | 0.761 |
| Llama 3.3 70B | 0.396 | 0.608 | 0.639 | 0.623 |

Table 2 highlights the system's strengths and limitations in extracting structured information from educational literature. It accurately identifies the instrument name and type but sometimes broadens classifications, such as inferring "students" as respondents based on context. The system also expands constructs, capturing a broader set of related terms, whereas expert labels align with the CLASS framework's predefined domains and dimensions. While the system's output enhances contextual understanding, it may reduce precision in distinguishing key constructs. For outcomes, the system identifies "teacher interaction" instead of the expert-labeled "classroom organization." However, given the original study's focus on teacher-student interaction, the result remains relevant. Overall, the system effectively extracts and contextualizes instrument information.

Table 2. Example of research information extraction output.

| Category | Ground-truth | Model Output |
|---|---|---|
| Instrument | CLASS (Classroom Assessment Scoring System) | CLASS (Classroom Assessment Scoring System) |
| Type | Observation Protocol | Observation Protocol |
| Respondent | Teacher | Students; Teachers |
| Construct | Behavioral Management | Classroom Organization, Preventive Discipline, Time Management |
| Outcomes | Classroom Organization | Teacher Interaction |

3.2 Error Analysis

Our error analysis revealed some challenges. First, the system's performance is influenced by the number and complexity of instruments mentioned in research papers. The dataset includes papers referencing an average of 3.66 instruments, with the system performing well when extracting 2 to 5. However, for papers labeled as having a single instrument, the system over-extracts, identifying an average of 6.5 instruments. In this case, the system often extracted sub-tests as independent instruments rather than recognizing them as part of a larger test battery. Second, the model's sensitivity to context led to false positives, extracting instruments that were merely mentioned rather than actually used. Additionally, it prioritized information at the beginning of method sections, often overlooking instruments listed later.

4 Conclusion and Implication

This work presents a structured pipeline for extracting research instruments from educational literature, showcasing how LLM-powered tools can address education-specific challenges. By automating instrument extraction, it facilitates large-scale synthesis of educational research, reducing manual effort and improving accessibility. A multi-step prompting approach with a domain-specific schema enhances precision, reliability, and interpretability over naive methods. Error analysis revealed challenges such as hierarchical misclassification and false positives, highlighting the need for refined ontological rules and human-in-the-loop validation. Despite these limitations, structured prompting significantly improves instrument identification. By making organized instrument data more accessible, this system helps researchers, educators, and policymakers select measurement tools suited to their purposes and contexts.

References
1. Wilkinson, D., Birmingham, P. (eds.): Using Research Instruments: A Guide for Researchers. RoutledgeFalmer, London (2011).
2. Colton, D., Covert, R.W.: Designing and Constructing Instruments for Social Research and Evaluation. John Wiley and Sons (2007).
3. Sturm, T., Ash, M.G.: Roles of Instruments in Psychological Research. History of Psychology. 8, 3–34 (2005). https://doi.org/10.1037/1093-4510.8.1.3
4. Schumacker, R.E., Wind, S.A., Holmes, L.F.: Resources for Identifying Measurement Instruments for Social Science Research. Measurement: Interdisciplinary Research and Perspectives. 19, 250–257 (2021). https://doi.org/10.1080/15366367.2021.1950486
5. Shaneyfelt, T., Baum, K.D., Bell, D.,
Feldstein, D., Houston, T.K., Kaatz, S., Whelan, C., Green, M.: Instruments for Evaluating Education in Evidence-Based Practice: A Systematic Review. JAMA. 296, 1116 (2006). https://doi.org/10.1001/jama.296.9.1116
6. Cox, J., Foster, B., Bamat, D.: A review of instruments for measuring social and emotional learning skills among secondary school students. Institute of Education Sciences (2019).
7. Han, X., Wang, L.: A Novel Document-Level Relation Extraction Method Based on BERT and Entity Information. IEEE Access. 8, 96912–96919 (2020). https://doi.org/10.1109/ACCESS.2020.2996642
8. Gupta, P., Rajaram, S., Schütze, H., Runkler, T.: Neural Relation Extraction within and across Sentence Boundaries. AAAI. 33, 6513–6520 (2019). https://doi.org/10.1609/aaai.v33i01.33016513
9. ERIC Homepage, https://eric.ed.gov/. Last accessed 10 Feb 2025.
10. EdInstrument Homepage, https://edinstruments.org/. Last accessed 10 Feb 2025.
11. Chuang, S.L., Chang, K.C.C., Zhai, C.: Context-aware wrapping: Synchronized data extraction. In: Proceedings of the 33rd International Conference on Very Large Data Bases, pp. 699–710. VLDB, Vienna, Austria (2007).
12. Chen, B., Zhang, Z., Langrené, N., Zhu, S.: Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review. arXiv (2024). https://doi.org/10.48550/arXiv.2310.14735
13. Polak, M.P., Morgan, D.: Extracting accurate materials data from research papers with conversational language models and prompt engineering. Nature Communications. 15, 1569 (2024). https://doi.org/10.1038/s41467-024-45914-8
14. Wei, X., Cui, X., Cheng, N., Wang, X., Zhang, X., Huang, S., Xie, P., Xu, J., Chen, Y., Zhang, M., Jiang, Y., Han, W.: ChatIE: Zero-Shot Information Extraction via Chatting with ChatGPT. arXiv (2024). https://doi.org/10.48550/arXiv.2302.10205
15. Wu, H., Yuan, Y., Mikaelyan, L., Meulemans, A., Liu, X., Hensman, J., Mitra, B.: Learning to Extract Structured Entities Using Language Models. arXiv (2024). https://doi.org/10.48550/arXiv.2402.04437
16. Vatsal, S., Dubey, H.: A Survey of Prompt Engineering Methods in Large Language Models for Different NLP Tasks. arXiv (2024). https://doi.org/10.48550/ARXIV.2407.12994
17. Vijayan, A.: A Prompt Engineering Approach for Structured Data Extraction from Unstructured Text Using Conversational LLMs. In: 2023 6th International Conference on Algorithms, Computing and Artificial Intelligence, pp. 183–189. ACM, Sanya, China (2023). https://doi.org/10.1145/3639631.3639663
18. Wang, X., Huang, L., Xu, S., Lu, K.: How Does a Generative Large Language Model Perform on Domain-Specific Information Extraction?—A Comparison between GPT-4 and a Rule-Based Method on Band Gap Extraction. J. Chem. Inf. Model. 64, 7895–7904 (2024). https://doi.org/10.1021/acs.jcim.4c00882
19. Wiest, I.C., Wolf, F., Leßmann, M.-E., Van Treeck, M., Ferber, D., Zhu, J., Boehme, H., Bressem, K.K., Ulrich, H., Ebert, M.P., Kather, J.N.: LLM-AIx: An open source pipeline for Information Extraction from unstructured medical text based on privacy preserving Large Language Models. medRxiv (2024). https://doi.org/10.1101/2024.09.02.24312917
20. Chusova, A., Artemieva, I., Chusov, A.: A Hybrid Approach to Extraction of Knowledge From Scientific Texts Based on Large Language Models and Domain Dictionaries. In: 2024 IEEE International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON), pp. 266–271. IEEE, Novosibirsk, Russian Federation (2024). https://doi.org/10.1109/SIBIRCON63777.2024.10758538
21. Vikas, P.: PDF to JSON Converter: Pretrained Model Toolkit. GitHub Repository (2024).
arXiv:2505.21866v1 [eess.SP] 28 May 2025

CSI-Bench: A Large-Scale In-the-Wild Dataset for Multi-task WiFi Sensing

Guozhen Zhu, Yuqian Hu, Weihang Gao, Wei-Hsiang Wang, Beibei Wang, K. J. Ray Liu
Origin Research

Abstract

WiFi sensing has emerged as a compelling contactless modality for human activity monitoring by capturing fine-grained variations in Channel State Information (CSI). Its ability to operate continuously and non-intrusively while preserving user privacy makes it particularly suitable for health monitoring. However, existing WiFi sensing systems struggle to generalize in real-world settings, largely due to datasets collected in controlled environments with homogeneous hardware and fragmented, session-based recordings that fail to reflect continuous daily activity. We present CSI-Bench, a large-scale, in-the-wild benchmark dataset collected using commercial WiFi edge devices across 26 diverse indoor environments with 35 real users. Spanning over 461 hours of effective data, CSI-Bench captures realistic signal variability under natural conditions. It includes task-specific datasets for fall detection, breathing monitoring, localization, and motion source recognition, as well as a co-labeled multitask dataset with joint annotations for user identity, activity, and proximity. To support the development of robust and generalizable models, CSI-Bench provides standardized evaluation splits and baseline results for both single-task and multi-task learning. CSI-Bench offers a foundation for scalable, privacy-preserving WiFi sensing systems in health and broader human-centric applications.

Links: CSI-Bench Dataset; CSI-Bench Code; Project Page

1 Introduction

Today's smart IoT devices, such as smart speakers, smart bulbs, and various smart display devices, are commonly connected to home routers or mesh network hubs via WiFi. Beyond their primary role in communication, the WiFi signals between these devices inherently capture rich information about the surrounding environment through their propagation paths [24,42,25]. This has positioned WiFi sensing as a compelling alternative to vision- or wearable-based systems for human monitoring in smart environments. By capturing fine-grained temporal and spatial variations in Channel State Information (CSI), commodity WiFi devices can infer a wide range of human-centric phenomena, from gross motor events such as falls to subtle physiological signals like breathing. These properties make WiFi sensing especially attractive for health-related applications in smart homes, where privacy, continuous operation, and ease of deployment are critical. Moreover, because these signals are already being transmitted by existing infrastructure, WiFi-based sensing enables non-intrusive, cost-effective, and passive monitoring without requiring additional sensors or user instrumentation.

Despite increasing research interest, existing WiFi sensing studies suffer from a fundamental limitation: a lack of large-scale, diverse, and real-world datasets. Most current datasets are collected in controlled laboratory settings, often using limited types of homogeneous hardware configurations and a narrow range of tasks. As a result, models trained on these datasets struggle to generalize to new users, devices, or environments, limiting their practical utility. To address these gaps, we introduce CSI-Bench, the first large-scale, in-the-wild benchmark dataset supporting multi-task WiFi sensing, as illustrated in Figure 1.
Figure 1: CSI-Bench overview. The benchmark features multiple commercial routers and IoT devices deployed in real homes and offices to collect CSI data. It supports a wide range
of human-centric sensing tasks, enabling robust model development across diverse hardware setups and real-world scenarios.

Using commercial edge devices, CSI-Bench captures real-world signal variability across diverse environments, including apartments, multi-room houses, offices, and public indoor spaces. Data is recorded continuously from a broad spectrum of WiFi chipsets (Qualcomm, Broadcom, Espressif, MediaTek, and NXP), under both line-of-sight (LoS) and non-line-of-sight (NLoS) conditions, and during natural human activities with minimal intervention. CSI-Bench advances the field in three key ways:

Large-scale, real-world coverage. The dataset spans over 461 hours of CSI data from 35 users, 26 distinct environments, and 16 device configurations. It reflects realistic deployment conditions with background interference, user mobility, and ambient network traffic.

Multi-task and co-labeled annotations. We provide both single-task specialist datasets (e.g., fall detection, breathing monitoring, localization, and motion source recognition) and a multi-task dataset with joint labels for user identification, activity recognition, and proximity estimation. The co-labeled samples enable efficient multi-task learning and low-latency inference on resource-constrained edge devices.

Standardized benchmarking protocols. We establish strong baselines under supervised learning and multi-task learning. Our findings highlight generalization gaps and the promise of parameter-efficient multi-task learning.

CSI-Bench aims to catalyze robust model development for passive, privacy-preserving WiFi sensing. By offering a unified platform for realistic, diverse, and reproducible evaluation, it provides a foundation for scalable AI applications in smart health, home monitoring, and beyond.

2 Related Work

2.1 WiFi Sensing

Compared to vision-, audio-, or wearable-based systems, WiFi sensing offers a scalable, privacy-preserving, and non-intrusive alternative or complementary solution for continuous monitoring in smart environments and healthcare applications [22,12,33]. WiFi sensing has demonstrated substantial potential in tasks such as activity recognition [23,28], gesture detection [31,45], indoor localization [40,38], and vital sign monitoring [39,13]. However, most existing studies rely on data collected in constrained settings, which limits generalization to diverse users, hardware platforms, and real-world deployment scenarios.

2.2 WiFi Sensing Dataset

A number of WiFi sensing datasets have contributed valuable resources to the community. Widar3.0 [47] offers large-scale CSI data for gesture recognition using Intel 5300 NICs [15]. SignFi [27] focuses on sign language recognition, capturing fine-grained hand gestures. MM-Fi [44] enables cross-modal analysis by combining WiFi CSI with synchronized video and depth data.

Table 1: Comparison of CSI-Bench with published datasets.
| Dataset (Year) | Platform | #Edge Device Type | #Samples | #Tasks | #Envs | #Users | In-the-Wild |
|---|---|---|---|---|---|---|---|
| WiAR [14] (2019) | Intel 5300 NIC | 1 | 4.8k | 1 | 3 | 10 | ✗ |
| ARIL [35] (2019) | USRP | 1 | 1.4k | 2 | 1 | 1 | ✗ |
| Widar3.0 [47] (2021) | Intel 5300 NIC | 1 | 271.1k | 1 | 3 | 16 | ✗ |
| XRF55 [36] (2024) | Intel 5300 NIC | 1 | 42.9k | 1 | 4 | 39 | ✗ |
| SignFi [27] (2018) | Intel 5300 NIC | 1 | 14.3k | 1 | 2 | 5 | ✗ |
| WiMANs [21] (2024) | Intel 5300 NIC | 1 | 11.3k | 3 | 3 | 5 | ✗ |
| CSIDA [19] (2021) | Intel 5300 NIC | 1 | 3k | 1 | 2 | 5 | ✗ |
| MM-Fi [44] (2023) | Atheros CSI Tool | 1 | 1.1k | 1 | 4 | 40 | ✗ |
| CSI-Bench | Broadcom, Qualcomm, MediaTek, Espressif, NXP | 16 | 231.6k | 7 | 26 | 35 | ✓ |

Figure 2: Representative CSI samples are shown for various scenarios,
including human actions (jumping, running, walking, hand waving, falling, breathing), non-human motions (pet movement, iRobot, fan), and empty environments. In each sample, the x-axis represents time, and the y-axis represents the subcarrier index.

XRF55 [36] introduces a large corpus of RF-based activity data for action recognition. Additional datasets such as ARIL [35] and CSIDA [19] support tasks like activity recognition and localization. While these datasets have advanced the field, they share several limitations, as illustrated in Table 1. First, most are confined to controlled laboratory settings, offering limited variability in user behavior, device types, and environmental complexity. Second, they primarily support single-task scenarios, lacking the multi-task supervision needed for training general-purpose models. Third, nearly all rely on the Intel 5300 chipset, which does not support continuous CSI recording. As a result, data is collected in fragmented, pre-scripted sessions using manual triggers, which limits dataset scale and fails to capture users' natural daily activities. There remains a growing demand for a unified benchmark that reflects the complexity of real-world deployments, supports multiple sensing tasks, and enables evaluation across diverse users, environments, and hardware platforms. To address this need, we introduce CSI-Bench, a large-scale in-the-wild benchmark for passive WiFi sensing.

3 Dataset Collection

3.1 Overview

To support robust and generalizable WiFi sensing research, we build a diverse collection of datasets captured in real-world environments using commercial WiFi devices. CSI-Bench spans over 460 hours of CSI recordings across 35 unique users, 26 environments, and 16 device types, covering both routers and edge devices operating under varied network conditions. Data is collected in homes, offices, and public indoor areas with minimal control over ambient interference or user behavior. Each dataset is designed to support one or more sensing tasks, including fall detection (Fall), breathing monitoring (Breath), localization (Loc.), human activity recognition (HAR), user identification (UID), and proximity estimation (Prox.). Representative CSI samples illustrating task-specific signal patterns are visualized in Figure 2. The following section details the hardware, environments, and collection protocols used to capture the datasets.

3.2 Devices and Hardware Setup

Hardware. To emulate the heterogeneity of real-world WiFi sensing deployments, we select a diverse set of WiFi routers and edge IoT devices commonly found in residential and commercial environments, with chipset models including Qualcomm, NXP, Broadcom, and Espressif [7,6,2,3]. All devices collectively support IEEE 802.11n/ac/ax standards, with MIMO configurations ranging from 1×1 to 2×2 and 1×4, and channel bandwidths of 20, 40, and 80 MHz. The detailed specifications of the edge IoT devices are provided in Appendix A.

CSI extraction and synchronization. In our system, IoT client devices periodically transmit CSI packets to routers at two sounding rates: 100 Hz for general sensing tasks and 30 Hz for breathing detection, accommodating different temporal dynamics. Given the distributed nature of these devices, propagation delays and clock drifts cause misalignment in CSI data streams. To address this, the router coordinates data collection by sending batch requests with defined time windows, asking devices to record and upload CSI within the same interval. Each device uses its own system clock to timestamp the data, which allows us to later align the streams in software, as sketched below.
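As an illustration of that software alignment step, the snippet below merges two device streams onto a common clock by nearest-timestamp matching with pandas. The column names and the 20 ms tolerance are assumptions for the sketch; the paper does not specify its exact alignment procedure.

```python
# Sketch: align CSI streams from two devices by nearest timestamp.
# Column names ("ts" as a datetime column, "csi") and the 20 ms tolerance
# are illustrative assumptions, not the benchmark's actual procedure.
import pandas as pd

def align_streams(dev_a: pd.DataFrame, dev_b: pd.DataFrame,
                  tol_ms: float = 20.0) -> pd.DataFrame:
    """Match each packet in dev_a with dev_b's temporally nearest packet."""
    a = dev_a.sort_values("ts")
    b = dev_b.sort_values("ts")
    # merge_asof pairs each row of `a` with the nearest row of `b`
    # within the tolerance; unmatched rows become NaN.
    merged = pd.merge_asof(
        a, b, on="ts", direction="nearest",
        tolerance=pd.Timedelta(milliseconds=tol_ms),
        suffixes=("_a", "_b"),
    )
    return merged.dropna()  # keep only packets matched on both devices
```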
Routers handle CSI extraction, buffering, and data upload to cloud servers, running either Linux or FreeRTOS depending on their chipset.

CSI format. Due to hardware diversity, the CSI data in CSI-Bench varies in subcarrier granularity, antenna configurations, and supported bandwidths across different chipset architectures. For example, the NXP 88W8997 provides a 2×2 MIMO configuration with 58 subcarriers at 40 MHz on 5 GHz, while the ESP32-S3, with a 1×1 setup, captures 64 subcarriers at 20 MHz on 2.4 GHz. Qualcomm IPQ4019/IPQ4018 devices offer a 1×2 MIMO configuration, supporting 128 subcarriers at 40 MHz and 256 subcarriers at 80 MHz on 5 GHz. In contrast, the Broadcom BCM4345 employs a 1×4 antenna configuration, providing only 14/28 subcarriers at 20/40 MHz due to proprietary subcarrier grouping. These variations ensure CSI-Bench captures a wide spectrum of signal characteristics, enabling comprehensive evaluation of model generalization across heterogeneous hardware platforms.

3.3 Continuous Data Recording

To overcome the limitations of prior works that typically rely on controlled environments or predefined protocols, we develop an integrated pipeline enabling scalable, in-the-wild CSI data collection across diverse residential settings. Leveraging commercial routers with developer-accessible CSI extraction, cloud infrastructure, and user-friendly annotation tools, our system unobtrusively captures large-scale CSI data from everyday WiFi usage without device-side modifications. We collaborate with multiple router chipset vendors, who provided firmware and drivers with CSI extraction capabilities enabled, along with proprietary CSI capture utilities. Building on this, we develop our own tools to programmatically capture and manage CSI data. Specifically, we design separate tools for Linux and FreeRTOS [4], each designed to send commands from the application layer directly to the WLAN kernel module, enabling continuous collection and buffering of CSI from all registered devices into unified binary files, which are periodically uploaded to cloud storage via AWS S3 APIs [1]. Each file is timestamped using the router's local system time embedded in the filename, ensuring straightforward temporal alignment across deployments. Upload frequency dynamically adjusts based on device count and bandwidth utilization.

We also develop a lightweight user annotation tool integrated into Google Spreadsheet [5], allowing users to optionally log daily activities (such as waking up, sleeping, leaving or returning home, room occupancy, or inactivity) by tapping buttons that record local timestamps. Our system queries and retrieves CSI files matching these events, concatenates the relevant segments, and refines alignment using embedded packet-level timestamps, resulting in precisely labeled CSI data segments. We collect CSI of motion from non-human sources like pets and cleaning robots when users are not home. When possible, time-aligned external information is collected through camera recordings and local sensor logs to annotate non-human motions or highlight environmental changes. This pipeline enables extensive, accurately labeled CSI data collection reflective of authentic user behaviors and diverse environments, supporting a wide range of large-scale research applications.

Table 2: Summary of tasks, dataset statistics, partitions, and evaluation protocols. ST = single-task specialist, MT = multi-task joint.
| Task | #Classes | Dataset | #Samples | #Users | #Envs | #Devices | Split, Setting |
|---|---|---|---|---|---|---|---|
| Fall Detection | 2 | ST | 6.7k | 17 | 6 | 2 | 70/15/15, easy/med/hard |
| Breath Detection | 2 | ST | 100k | 3 | 3 | 6 | 70/15/15, easy/med/hard |
| Motion Source Recognition | 4 | ST | 60.9k | 35 | 10 | 1 | 70/15/15, easy/med/hard |
| Room-level Localization | 6 | ST | 7.1k | 8 | 6 | 8 | 70/15/15, easy/med/hard |
| Proximity Recognition | 4 | MT | 20.3k | 6 | 6 | 11 | 70/15/15, user/env/device |
| Human Activity Recognition | 5 | MT | 41.5k | 6 | 6 | 11 | 70/15/15, user/env/device |
| User Identification | 6 | MT | 20.3k | 6 | 6 | 11 | 70/15/15, device |

3.4 Environments and Contexts

We collect our data across a broad range of environments, including compact apartments, multi-room houses, offices, hallways, and open indoor public spaces, as detailed in Appendix A.2. These settings introduce diverse physical characteristics, including complex layouts, clutter, variable wall materials, and occlusions, that significantly affect signal propagation. Unlike prior datasets collected under controlled conditions, our data captures CSI under authentic, in-the-wild conditions. Devices were positioned freely by users, and data was recorded continuously during natural daily activities. Consequently, the CSI reflects realistic variability introduced by NLoS links, neighboring motion, background activity from appliances, WiFi traffic, and environmental factors such as wind and even raindrops. This level of interference is critical for benchmarking the robustness of WiFi sensing models, particularly for healthcare applications where reliable, through-the-wall monitoring in uncontrolled home environments is essential.

3.5 Data Collection Protocols

Although participants are free to move naturally and perform tasks as they would in daily life, we implement basic data collection protocols to ensure consistency and repeatability. Each session begins with a brief calibration phase to verify device connectivity, synchronize timestamps, and confirm stable CSI logging. The recorded activities span a range of motion patterns, including sitting still, walking, waving hands, and running through hallways. All participants signed a consent form prior to participation, with compensation of around $20/hr. Data from non-human motion sources (such as pets, cleaning robots, and electrical appliances like fans) are collected when users are not present. Detailed task-specific data collection procedures are provided in Appendix A.

3.6 Dataset Statistics

CSI-Bench spans seven classification tasks with varied sensing objectives. Table 2 summarizes dataset scale and coverage, including the number of samples, recording duration, users, environments, and device types. This diversity reflects real-world deployment conditions and supports robust generalization benchmarking.

4 Data Quality and Preprocessing

4.1 CSI Quality Verification

Motivation. CSI quality checking is critical for ensuring data reliability, as raw measurements often suffer from signal dropouts, high noise levels, or inconsistent timestamps. These issues can arise due to differences in chipset design, CSI extraction algorithms, hardware configurations (e.g., antenna layout, RF circuitry), and deployment conditions. As illustrated in Figure 3a, the CSI quality varies from device to device. Device 1 exhibits the best CSI quality, with consistent temporal patterns and a stable sampling rate near the nominal 30 and 100 Hz. Device 2 shows moderate quality with occasional outliers and a lower sampling rate, while Device 3 suffers from the poorest quality, marked by irregular sampling intervals and temporal clustering of CSI frames.
Given the diverse hardware platforms and settings in CSI-Bench, these quality variations must be systematically addressed to enable meaningful benchmarking.
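As a simple illustration of the kind of timestamp-consistency check such a pipeline performs, the sketch below flags CSI traces whose inter-packet intervals deviate too far from the nominal sounding rate. The thresholds and the deviation metric are illustrative assumptions, not the metrics of the actual verification tool.

```python
# Sketch: flag CSI traces with irregular sampling. The nominal-rate
# deviation and dropout thresholds are illustrative assumptions; the
# actual MATLAB verification tool uses its own layered metrics.
import numpy as np

def check_sampling(timestamps_s: np.ndarray, nominal_hz: float,
                   max_rate_dev: float = 0.2,
                   max_gap_factor: float = 3.0) -> dict:
    """Return basic timestamp-quality statistics for one CSI trace."""
    intervals = np.diff(timestamps_s)        # inter-packet gaps (seconds)
    eff_rate = 1.0 / np.median(intervals)    # effective sampling rate
    rate_dev = abs(eff_rate - nominal_hz) / nominal_hz
    # Count "dropout" gaps much longer than the nominal interval.
    dropouts = int(np.sum(intervals > max_gap_factor / nominal_hz))
    return {
        "effective_hz": eff_rate,
        "rate_deviation": rate_dev,
        "dropout_gaps": dropouts,
        "pass": rate_dev <= max_rate_dev and dropouts == 0,
    }

# Example: a 100 Hz trace with one 50 ms dropout gap fails the check.
ts = np.concatenate([np.arange(0, 1, 0.01), np.arange(1.05, 2, 0.01)])
print(check_sampling(ts, nominal_hz=100.0))
```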
Figure 3: MATLAB-based CSI verification tool. (a) Visualization of CSI quality from three devices, showing variations in sampling interval, time-subcarrier heatmap, and amplitude response. (b) User interface for parsing and evaluating CSI data, supporting timestamp checks, amplitude analysis, and figure export to ensure data reliability in CSI-Bench.

Verification tool. To systematically assess and ensure CSI data quality, we adopt a structured evaluation framework introduced in an existing work [20], which models CSI verification as a multilayered pipeline. Each layer of this pipeline targets a specific aspect of data integrity using customized metrics, covering timestamp consistency, CSI amplitude stability, and other modality-specific characteristics. This design allows us to characterize various perspectives of CSI quality and adapt the evaluation to different sensing tasks. In the context of CSI-Bench, we apply this framework to filter out samples with timestamp irregularities, unstable or flat CSI amplitude, and signal dropout, ensuring that only reliable traces are included in the benchmark. The CSI verification tool is implemented in MATLAB, as shown in Figure 3b, to facilitate systematic quality control before incorporating data into CSI-Bench.

4.2 CSI Preprocessing Pipeline

Amplitude extraction. In real-world measurements, CSI is often corrupted by phase noise caused by timing and frequency synchronization offsets, as well as additive thermal noise. In the literature, two main approaches are used to handle phase distortions: phase cleaning [10,30,43] and phase elimination [37,41,46]. Phase cleaning aims to correct the distorted phase but cannot fully eliminate initial phase offsets, making it less reliable for consistent processing across diverse devices. Therefore, in our benchmark, we adopt the phase elimination approach. Specifically, if the extracted CSI at time $t$ and subcarrier frequency $f$ is represented as $H(f, t)$, we use the amplitude $|H(f, t)|$ as input, eliminating the unreliable phase component.

Data segmentation. To facilitate task-specific model training, we segment the collected CSI data into fixed-duration samples. For tasks including Fall Detection, Localization, Motion Source Recognition, and the Multi-Task dataset, we segment CSI data into 5-second intervals. For the Breathing Detection dataset, considering the slower temporal variations inherent to respiration signals, we segment the CSI data into 10-second intervals.

Amplitude normalization. The automatic gain controller on commercial WiFi devices affects the reported CSI amplitude. To mitigate the effects of varying signal strengths, we normalize the power response of each subcarrier across the entire frequency band as follows:

$$\hat{H}(f_k, t) = \frac{|H(f_k, t)|^2}{\sum_{k'=1}^{N_s} |H(f_{k'}, t)|^2} \quad \text{for all } k,$$

where $N_s$ is the number of subcarriers and $H(f_k, t)$ is the original reported CSI on the $k$-th subcarrier.

Subcarrier standardization. Due to hardware differences, the number of subcarriers in CSI samples can vary across different platforms, leading to inconsistent input shapes along the frequency dimension.
To standardize the data, we select a fixed number of subcarriers and apply zero-padding or clipping in the frequency dimension as needed. This ensures all samples have consistent input shapes across the dataset. The sketch below illustrates this preprocessing pipeline end to end.
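A minimal NumPy sketch of the preprocessing steps above: phase elimination via amplitude extraction, per-snapshot power normalization, subcarrier standardization, and fixed-duration segmentation. The input shape and target subcarrier count are illustrative assumptions.

```python
# Sketch of the CSI preprocessing pipeline described above, assuming raw
# complex CSI of shape (num_packets, num_subcarriers). The target
# subcarrier count and window length are illustrative parameters.
import numpy as np

def preprocess_csi(raw_csi: np.ndarray, rate_hz: float,
                   window_s: float = 5.0, target_k: int = 64) -> np.ndarray:
    # 1) Phase elimination: keep only the amplitude |H(f, t)|.
    amp = np.abs(raw_csi)

    # 2) Amplitude normalization: power of each subcarrier divided by
    #    the total power across the band at that time instant.
    power = amp ** 2
    norm = power / power.sum(axis=1, keepdims=True)

    # 3) Subcarrier standardization: zero-pad or clip the frequency axis
    #    so every sample has exactly `target_k` subcarriers.
    k = norm.shape[1]
    if k < target_k:
        norm = np.pad(norm, ((0, 0), (0, target_k - k)))
    else:
        norm = norm[:, :target_k]

    # 4) Segmentation into fixed-duration windows (5 s here; the paper
    #    uses 10 s for breathing detection).
    win = int(window_s * rate_hz)
    n_seg = norm.shape[0] // win
    return norm[: n_seg * win].reshape(n_seg, win, target_k)

# Example: 60 s of 100 Hz CSI with 58 subcarriers -> 12 five-second segments.
segments = preprocess_csi(np.random.randn(6000, 58)
                          + 1j * np.random.randn(6000, 58), rate_hz=100.0)
print(segments.shape)  # (12, 500, 64)
```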
5 Benchmark Design

5.1 Task Suite and Metrics

CSI-Bench supports a suite of supervised classification tasks for WiFi sensing, covering key applications in health monitoring and ambient intelligence. Each task operates on a fixed-length CSI tensor $X \in \mathbb{R}^{C \times K \times T}$, where $C$ is the channel count, $K$ is the standardized subcarrier dimension over antenna arrays, and $T$ is the temporal length of samples (5 seconds for most tasks, and 10 seconds for breathing detection).

Single-task specialized dataset. The benchmark includes four single-task datasets: Fall Detection (binary classification of fall vs. non-fall), Breathing Detection (binary detection during sleep, sampled at 30 Hz), Motion Source Recognition (four-class classification of human, pet, robot, and fan motion), and Room-Level Localization (six-way classification of the user location). These are evaluated independently using dedicated datasets.

Multi-task joint dataset. A multi-task dataset contains co-labeled samples for three tasks: Human Activity Recognition (five-class classification), User Identification (multi-class over 6 users), and Proximity Recognition (four-class distance estimation). This enables parameter-efficient multi-task training with a shared backbone and task-specific heads.

All tasks are evaluated using overall accuracy and weighted F1-score. Accuracy provides a global measure of classification correctness, while the weighted F1-score accounts for class imbalance by averaging per-class F1-scores weighted by class frequency. This is especially relevant for tasks with skewed distributions such as fall detection or proximity recognition.

5.2 Evaluation Protocols

CSI-Bench provides standardized train/validation/test splits for all tasks to ensure fair comparison and reproducibility. For each dataset, 70% of samples are used for training, 15% for validation, and the remaining 15% for testing, with class balance and environment distribution preserved. Evaluation protocols and statistics for each task are summarized in Table 2. To evaluate real-world robustness, each test sample is annotated with a difficulty level (Easy, Medium, or Hard) based on signal quality, environment, and subject complexity. For the multi-task dataset, we define three out-of-distribution (OOD) splits (cross-user, cross-environment, and cross-device) reflecting domain shifts in deployment. These settings enable systematic robustness and generalization evaluation. Full details are provided in Appendix A.
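The accuracy and weighted F1 metrics defined in Sec. 5.1 map directly onto scikit-learn; a minimal sketch with illustrative labels:

```python
# Sketch: the benchmark's two metrics via scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 0, 1, 1, 1, 2]   # illustrative ground-truth labels
y_pred = [0, 1, 1, 1, 1, 2]

acc = accuracy_score(y_true, y_pred)
# average="weighted" averages per-class F1 weighted by class frequency,
# matching the paper's handling of class imbalance.
wf1 = f1_score(y_true, y_pred, average="weighted")
print(f"accuracy={acc:.3f}, weighted F1={wf1:.3f}")
```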
5.3 Baseline Models

To establish reference performance and benchmark learning effectiveness on CSI-Bench, we implement a suite of baseline models across single-task supervised and multi-task learning settings.

Supervised learning. We evaluate representative architectures spanning fully connected networks (MLP) [32], recurrent models (LSTM) [17], convolutional backbones (ResNet-18) [16], and transformer-based sequence learners, including Vision Transformer (ViT) [11], PatchTST [29], and TimeSformer-1D [8]. All models are trained independently on each task using the corresponding specialist dataset. Input CSI tensors are amplitude-only, with hyperparameters tuned using validation performance.

Multi-task learning. To explore parameter efficiency and cross-task knowledge sharing, we also implement multi-task learning using a shared backbone with lightweight task-specific adapters [9]. We adopt the same backbones as in the supervised setting and attach low-rank (LoRA) adapters [18] and separate classification heads for each task. During training, task-labeled samples are drawn from the joint multi-task dataset, and optimization proceeds with shared backbone updates and task-specific losses. All models are trained using the AdamW optimizer [26] with a cosine learning rate schedule and early stopping. Detailed architecture configurations and training hyperparameters are provided in Appendix B; a minimal sketch of the shared-backbone pattern follows.
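To illustrate the shared-backbone-plus-heads pattern (without the LoRA internals, which are detailed in Appendix B), here is a minimal PyTorch sketch. The layer sizes, task names, and plain-linear backbone are illustrative assumptions.

```python
# Sketch of multi-task training with a shared backbone and per-task heads.
# Dimensions and task names are illustrative; the paper's actual setup
# additionally inserts LoRA adapters inside a Transformer backbone.
import torch
import torch.nn as nn

TASKS = {"har": 5, "uid": 6, "prox": 4}   # task -> number of classes
C, K, T = 1, 64, 500                      # illustrative CSI tensor shape

class MultiTaskModel(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(    # shared feature extractor
            nn.Flatten(), nn.Linear(C * K * T, feat_dim), nn.ReLU())
        self.heads = nn.ModuleDict({      # task-specific classifiers
            t: nn.Linear(feat_dim, c) for t, c in TASKS.items()})

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        return self.heads[task](self.backbone(x))

model = MultiTaskModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One joint-training step: a co-labeled batch updates the shared
# backbone through every task-specific loss.
x = torch.randn(8, C, K, T)
labels = {t: torch.randint(0, c, (8,)) for t, c in TASKS.items()}
loss = sum(loss_fn(model(x, t), labels[t]) for t in TASKS)
opt.zero_grad(); loss.backward(); opt.step()
```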
Table 3: Performance comparison of supervised models across four core WiFi sensing tasks. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs.

| Model | Fall Acc | Fall F1 | Breath Acc | Breath F1 | Loc. Acc | Loc. F1 | MSR Acc | MSR F1 |
|---|---|---|---|---|---|---|---|---|
| MLP [32] | 92.16 ± 0.91 | 92.17 ± 0.92 | 97.59 ± 0.08 | 97.59 ± 0.08 | 87.14 ± 0.80 | 86.90 ± 0.83 | 98.86 ± 0.07 | 98.86 ± 0.07 |
| ResNet-18 [16] | 94.88 ± 0.26 | 94.89 ± 0.26 | 98.58 ± 0.17 | 98.58 ± 0.17 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.56 ± 0.07 | 99.56 ± 0.07 |
| LSTM [17] | 94.93 ± 0.51 | 94.92 ± 0.50 | 98.62 ± 0.17 | 98.62 ± 0.17 | 99.12 ± 0.27 | 99.12 ± 0.26 | 98.42 ± 0.19 | 98.42 ± 0.19 |
| Transformer [34] | 94.28 ± 0.72 | 94.26 ± 0.72 | 98.64 ± 0.19 | 98.64 ± 0.19 | 99.27 ± 0.22 | 99.27 ± 0.22 | 98.61 ± 0.27 | 98.61 ± 0.27 |
| ViT [11] | 93.58 ± 0.71 | 93.59 ± 0.70 | 98.63 ± 0.17 | 98.63 ± 0.17 | 99.94 ± 0.11 | 99.94 ± 0.11 | 98.74 ± 0.10 | 98.74 ± 0.10 |
| PatchTST [29] | 94.03 ± 0.74 | 94.03 ± 0.73 | 98.84 ± 0.13 | 98.84 ± 0.13 | 99.91 ± 0.10 | 99.91 ± 0.10 | 98.86 ± 0.19 | 98.86 ± 0.19 |
| TimeSformer-1D [8] | 93.86 ± 1.16 | 93.87 ± 1.13 | 98.68 ± 0.21 | 98.68 ± 0.21 | 100.00 ± 0.00 | 100.00 ± 0.00 | 98.38 ± 0.17 | 98.39 ± 0.17 |

(Fall = Fall Detection, Breath = Breathing Detection, Loc. = Room-Level Localization, MSR = Motion Source Recognition.)

Table 4: Comparison of task-specific and multi-task training for the Transformer model across shared-data tasks. The improvements (∆) are reported as mean ± std (%) over three runs.

| Task | Task-Specific Acc | Task-Specific F1 | Multi-Task Acc | Multi-Task F1 | ∆Acc | ∆F1 |
|---|---|---|---|---|---|---|
| Human Activity Recognition | 75.40 ± 0.93 | 75.49 ± 0.73 | 87.79 ± 0.00 | 86.47 ± 0.00 | +12.39 | +10.98 |
| User Identification | 99.51 ± 0.32 | 99.51 ± 0.32 | 99.83 ± 0.00 | 100.00 ± 0.00 | +0.32 | +0.49 |
| Proximity Recognition | 77.52 ± 3.13 | 77.35 ± 3.24 | 87.85 ± 0.00 | 88.67 ± 0.00 | +10.33 | +11.32 |

5.4 Results

We report performance on all tasks using standard supervised learning baselines. Table 3 summarizes accuracy and weighted F1-score for supervised models trained on the specialist datasets. Among the models, transformer-based architectures, particularly TimeSformer-1D and PatchTST, consistently achieve strong performance, highlighting their effectiveness in capturing temporal dynamics in high-dimensional CSI data. Simpler models such as MLP and LSTM perform adequately on some tasks but show clear limitations in harder cases.

Multi-task learning results are presented in Table 4. Compared to task-specific training, our multi-task models with a shared Transformer backbone and lightweight adapter-based heads achieve improved performance across multiple tasks. These findings highlight the effectiveness of joint training in capturing shared representations while preserving task-specific specialization through adapters. They also suggest that multi-task learning can improve generalization in real-world settings where sensing tasks are naturally co-located and co-labeled.

In addition to strong performance, our multi-task framework significantly reduces model complexity and training cost. By consolidating three single-task Transformers into a single backbone with task-specific adapters, we reduce the total parameter count by over 60%. This compression is achieved without degrading task
performance. Moreover, because all tasks are trained jointly in a single pass, the wall-clock training time is reduced by nearly 3× compared to training separate models for each task. These gains in model size and training efficiency make our approach especially suitable for deployment on resource-constrained edge devices, where memory and compute budgets are limited.

We also report task-wise performance stratified by difficulty levels (Easy, Medium, Hard) for the single-task datasets in Appendix C.1. Performance drops on hard samples for tasks like fall detection due to signal degradation, cluttered environments, and hardware diversity, reinforcing the need for deployment-aware evaluation.

While models perform well under in-distribution settings, we observe significant performance degradation under domain shifts. OOD evaluation across user, environment, and device axes, summarized in Appendix C.2, reveals notable challenges in generalization, particularly in cross-device settings. This highlights a key motivation for developing robust and adaptive models in future work.

5.5 Discussion and Takeaways

CSI-Bench enables scalable research on high-dimensional CSI-based sensing under real-world conditions. Its large scale, diverse hardware coverage, and co-labeled tasks support the development of unified multi-task models for on-device health monitoring. Multi-task learning yields competitive performance while significantly reducing model size and inference cost, making it well-suited for resource-constrained edge deployment. However, performance drops notably under OOD settings, particularly in cross-device scenarios, exposing persistent generalization challenges. Failure cases often arise from hardware heterogeneity, cluttered environments, or degraded signal quality. Overall, CSI-Bench offers a realistic and comprehensive testbed for developing robust, efficient, and generalizable WiFi sensing systems in unconstrained environments.

6 Limitations

The dataset uses amplitude-only CSI features due to phase instability across platforms. While this is practical, it limits exploration of techniques that exploit calibrated phase or angle-of-arrival information. CSI-Bench is designed around classification tasks. Extensions to regression (e.g., continuous vital-sign estimation) and more temporally structured tasks (e.g., long-term activity tracking) are promising but not yet included. We release all data, tools, and splits to support community-driven extensions and improvements.

7 Conclusion

We introduce CSI-Bench, a large-scale, in-the-wild benchmark dataset designed to advance research in WiFi-based sensing for health and human-centric applications. Collected using commercial WiFi edge devices deployed in real residential settings, CSI-Bench captures natural signal variability across users, devices, and environments, providing a realistic foundation for developing deployable, privacy-preserving WiFi sensing systems. The dataset includes single-task datasets for fall detection, breathing monitoring, localization, and motion source recognition, as well as a co-labeled multi-task dataset supporting user identification, activity recognition, and proximity recognition. This enables the development of multi-task models that support efficient joint inference while allowing rigorous evaluation under both in-distribution and out-of-distribution conditions.
To the best of our knowledge, CSI-Bench is the largest available dataset of its kind and can enable learning pipelines that benefit from high-dimensional CSI signals, diverse commercial edge devices, and real-world data (“in the wild”). Beyond the dataset, CSI-Bench includes a suite of baseline models and training protocols under supervised and multi-task settings. Our results show that multi-task learning can reduce model size and inference cost while maintaining competitive accuracy, making it suitable for health monitoring on resource-constrained devices. At the same
time, performance drops under domain shifts highlight the need for future research on adaptive and generalizable sensing models. CSI-Bench provides a comprehensive testbed to support this work and offers a scalable, practical resource for advancing WiFi sensing systems in healthcare and beyond. We release the full dataset and benchmark code to facilitate reproducibility and further innovation in this space.

Broader Impact

CSI-Bench enables privacy-preserving, contactless health monitoring using commercial WiFi, supporting applications like fall detection and breathing monitoring in homes and care facilities. By providing a large-scale, real-world dataset with diverse users, devices, and environments, it fosters inclusive, reproducible research and robust model development. Its high-dimensional CSI data presents unique challenges and opportunities for advancing ML methods in time-series analysis, representation learning, and domain generalization. All data were collected with consent, anonymized, and ethically managed. CSI-Bench serves both as a foundation for practical healthcare solutions and as a benchmark for high-dimensional, human-centered AI systems.

References

[1] Amazon S3 API reference. https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html, 2024.
[2] Broadcom Inc. https://www.broadcom.com/, 2024.
[3] Espressif Systems. https://www.espressif.com/, 2024.
[4] FreeRTOS: Real-time operating system for microcontrollers. https://www.freertos.org/, 2024.
[5] Google Sheets. https://www.google.com/sheets/about/, 2024.
[6] NXP Semiconductors. https://www.nxp.com/, 2024.
[7] Qualcomm Technologies Inc. https://www.qualcomm.com/, 2024.
[8] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In Proceedings of the 38th International Conference on Machine Learning, pages 813–824. PMLR, 2021.
[9] Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[10] Chen Chen, Yan Chen, Yi Han, Hung-Quoc Lai, and K. J. Ray Liu. Achieving centimeter-accuracy indoor localization on WiFi platforms: A frequency hopping approach. IEEE Internet of Things Journal, 4(1):111–121, 2017.
[11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
[12] Pengsong Duan, Xianguang Diao, Yangjie Cao, Dalong Zhang, Bo Zhang, and Jinsheng Kong. A comprehensive survey on Wi-Fi sensing for human identity recognition. Electronics, 12(23), 2023.
[13] Qinghua Gao, Jingyu Tong, Jie Wang, Zhouhua Ran, and Miao Pan. Device-free multi-person respiration monitoring using WiFi. IEEE Transactions on Vehicular Technology, 69(11):14083–14087, 2020.
[14] Linlin Guo, Lei Wang, Chuang Lin, Jialin Liu, Bingxian Lu, Jian Fang, Zhonghao Liu, Zeyang Shan, Jingwen Yang, and Silu Guo. WiAR: A public dataset for WiFi-based activity recognition. IEEE Access, 7:154935–154945, 2019.
[15] Daniel Halperin, Wenjun Hu, Anmol Sheth, and David Wetherall. Tool release: Gathering 802.11n traces with channel state information (CSI). Technical report, University of Washington, 2011.
[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[17] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[18] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models.
arXiv preprint arXiv:2106.09685, 2021.
[19] Pengli Hu, Chengpei Tang, Kang Yin, and Xie Zhang. WiGR: A practical Wi-Fi-based gesture recognition system with a lightweight few-shot network. Applied Sciences, 11(8):3329, 2021.
[20] Yuqian Hu, Guozhen Zhu, Wei-Hsiang Wang, Beibei Wang, and K. J. Ray Liu. What you need is a good CSI. In Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, pages 1608–1610. ACM, 2024.
[21] Shuokang Huang, Kaihan Li, Di You, Yichong Chen, Arvin Lin, Siying Liu, Xiaohui Li, and Julie A. McCann. WiMANS: A benchmark dataset for WiFi-based multi-user activity sensing. arXiv preprint arXiv:2402.09430, 2024.
[22] Abdullah Khalili, Abdel-Hamid Soliman, Md Asaduzzaman, and Alison Griffiths. Wi-Fi sensing: applications and challenges. The Journal of Engineering, 2020:87–97, 2020.
[23] Bing Li, Wei Cui, Wei Wang, Le Zhang, Zhenghua Chen, and Min Wu. Two-stream convolution augmented transformer for human activity recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021.
[24] K. J. Ray Liu and Beibei Wang. Wireless AI: Wireless sensing, positioning, IoT, and communications. Cambridge University Press, 2019.
[25] K. J. Ray Liu and Beibei Wang. Statistical principles of time reversal [perspectives]. IEEE Signal Processing Magazine, 41(1):31–37, 2024.
[26] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2019.
[27] Yongsen Ma, Gang Zhou, Shuangquan Wang, Hongyang Zhao, and Woosub Jung. SignFi: Sign language recognition using WiFi. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(1), 2018.
[28] Francesca Meneghello, Domenico Garlisi, Nicolò Dal Fabbro, Ilenia Tinnirello, and Michele Rossi. SHARP: Environment and person independent activity recognition with commodity IEEE 802.11 access points. IEEE Transactions on Mobile Computing, 22(10):6160–6175, 2023.
[29] Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In The Eleventh International Conference on Learning Representations, 2023.
[30] Kun Qian, Chenshu Wu, Zheng Yang, Yunhao Liu, and Kyle Jamieson. Widar: Decimeter-level passive tracking via velocity monitoring with commodity Wi-Fi. In Proceedings of the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing, page 6. ACM, 2017.
[31] Sai Deepika Regani, Beibei Wang, Yuqian Hu, and K. J. Ray Liu. GWrite: Enabling through-the-wall gesture writing recognition using WiFi. IEEE Internet of Things Journal, 10(7):5977–5991, 2023.
[32] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[33] Julio C.H. Soto, Iandra Galdino, Egberto Caballero, Vinicius Ferreira, Débora Muchaluat-Saade, and Célio Albuquerque. A survey on vital signs monitoring based on Wi-Fi CSI data. Computer Communications, 195:99–110, 2022.
[34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6000–6010. Curran Associates Inc., 2017.
[35] Fei Wang, Jianwei Feng, Yinliang Zhao, Xiaobin Zhang, Shiyuan Zhang, and Jinsong Han. Joint activity recognition and indoor localization with WiFi fingerprints. IEEE Access, 7:80058–80068, 2019.
[36]
Fei Wang, Yizhe Lv, Mengdie Zhu, Han Ding, and Jinsong Han. XRF55: A radio frequency dataset for human indoor action analysis. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(1), 2024.
[37] Wei Wang, Alex X. Liu, Muhammad Shahzad, Kang Ling, and Sanglu Lu. Understanding and modeling of WiFi signal based human activity recognition. In Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, pages 65–76. ACM, 2015.
[38] Wei-Hsiang Wang, Beibei Wang, Yuqian Hu, Guozhen Zhu, and K. J. Ray Liu. Device-free room-level localization with WiFi utilizing spatial-frequency-time diversity. IEEE Internet of Things Journal, 11(21):35689–35698, 2024.
[39] Xuanzhi Wang, Anlan Yu, Kai Niu, Weiyan Shi, Junzhe Wang, Zhiyun Yao, Rahul C. Shah, Hong Lu, and Daqing Zhang. Understanding the diffraction model in static multipath-rich environments for WiFi sensing system design. IEEE Transactions on Mobile Computing, 23(11):10393–10410, 2024.
[40] Xuyu Wang, Lingjun Gao, Shiwen Mao, and Santosh Pandey. CSI-based fingerprinting for indoor localization: A deep learning approach. IEEE Transactions on Vehicular Technology, 66(1):763–776, 2017.
[41] Yuxi Wang, Kaishun Wu, and Lionel M. Ni. WiFall: Device-free fall detection by wireless networks. IEEE Transactions on Mobile Computing, 16(2):581–594, 2017.
[42] Chenshu Wu, Beibei Wang, Oscar C. Au, and K. J. Ray Liu. Wi-Fi can do more: Toward ubiquitous wireless sensing. IEEE Communications Standards Magazine, 6(2):42–49, 2022.
[43] Qinyi Xu, Yan Chen, Beibei Wang, and K. J. Ray Liu. Radio biometrics: Human recognition through a wall. IEEE Transactions on Information Forensics and Security, 12(5):1141–1155, 2017.
[44] Jianfei Yang, He Huang, Yunjiao Zhou, Xinyan Chen, Yuecong Xu, Shenghai Yuan, Han Zou, Chris Xiaoxuan Lu, and Lihua Xie. MM-Fi: Multi-modal non-intrusive 4D human dataset for versatile wireless sensing. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023.
[45] Jianfei Yang, Han Zou, Yuxun Zhou, and Lihua Xie. Learning gestures from WiFi: A siamese recurrent convolutional architecture. IEEE Internet of Things Journal, 6(6):10763–10772, 2019.
[46] Feng Zhang, Chenshu Wu, Beibei Wang, Hung-Quoc Lai, Yi Han, and K. J. Ray Liu. WiDetect: Robust motion detection with a statistical electromagnetic model. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 3(3), 2019.
[47] Yi Zhang, Yue Zheng, Kun Qian, Guidong Zhang, Yunhao Liu, Chenshu Wu, and Zheng Yang. Widar3.0: Zero-effort cross-domain gesture recognition with Wi-Fi. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11):8671–8688, 2022.

A Dataset Description

This appendix details the composition and collection protocols of CSI-Bench.

A.1 Subjects and Scenarios

CSI-Bench includes CSI data from 35 individual users (U01–U35), comprising 26 males and 9 females aged between 23 and 42 years, with heights ranging from 155 to 185 cm and body weights from 45 to 90 kg. In addition, six two-user sessions (UM01–UM06) are recorded to support multi-person interaction analysis. To further diversify subject types, the dataset includes 20 pets (P01–P20), with body weights from 6 to 40 kg, and a dedicated two-pet scenario (PM01). Finally, four distinct fan-based motion scenes (F01–F04) capture ambient signal patterns caused by oscillating and ceiling fans.
A.2 Environments

Data is collected across 26 distinct real-world environments (E01–E26), including studio apartments, multi-bedroom apartments, townhouses, and multi-floor single-family houses. These environments vary in layout complexity, room geometry, wall materials, and furniture density, introducing rich multipath and occlusion effects. A summary of all environments is provided in Table 7.

A.3 Devices and Hardware Diversity

To ensure broad coverage of real-world IoT infrastructure, we select 16 types of commercial WiFi-enabled devices operating across both 2.4 GHz and 5 GHz bands, with bandwidths of 20, 40, and 80 MHz. Devices span major chipset vendors such as Qualcomm, Broadcom, Espressif, and NXP. Prior to data collection, each device is evaluated using our in-house CSI verification tool to assess signal consistency, sampling stability, and amplitude dynamics. Devices that pass quality thresholds are used in deployment. Figure 4 presents the CSI quality scores across candidate devices; Table 6 lists their specifications.

A.4 Task-Specific Dataset Statistics

CSI-Bench supports both single-task specialist datasets and a co-labeled multi-task dataset. Task-wise breakdowns include the number of samples, users, environments, and devices, as detailed in Table 5. Each task is annotated with appropriate labels to support supervised learning, multi-task training, and cross-domain evaluation.

Note on Evaluation Splits. For rigorous benchmarking, CSI-Bench defines task-specific evaluation splits based on difficulty levels (Easy, Medium, Hard) and out-of-distribution (OOD) axes (cross-user, cross-environment, cross-device). These splits are introduced in Sections A.5–A.9 for each task and are used to generate the experimental results reported in Appendix C.

A.5 Fall Detection

The Fall Detection dataset is designed to evaluate human fall recognition in real residential settings using commodity WiFi hardware. Data is collected with synchronized video ground-truth under varied hardware and environmental conditions.

Subjects and Scenarios. The dataset includes 17 participants across 6 indoor environments. Activities include casual walking, sitting, lying down, and falling. Scenarios include both LoS and NLoS layouts, with added noise from ambient sources such as ceiling fans to simulate realistic deployments.

Hardware Setup. WiFi CSI data is primarily collected using NXP 88W8997 2×2 802.11ac chipsets operating at 5.18 GHz with a 40 MHz bandwidth. Each transmitter-receiver pair forms 4 spatial links and records 58 subcarriers at a sampling rate of 100 Hz. Additionally, a smaller portion of the data is collected using ESP32-S3 devices, which operate at 2.4 GHz with a 1×1 antenna setup and capture 64 subcarriers.

Table 5: Summary of tasks and dataset statistics.

| Task | Users | Envs | Gender | Age (yrs) | Height (cm) | Weight (kg) |
|---|---|---|---|---|---|---|
| Fall | U06–U22 | E21–E26 | 14M / 3F | 23–42 | 156–182 | 46–90 |
| Breath | U06, U23, U24 | E05, E09, E10 | 1M / 2F | 27–32 | 160–173 | 60–88 |
| Loc. | U01, U05, U06, UM01–UM05 | E01, E03, E04, E06–E08 | 4M / 4F | 27–41 | 155–175 | 48–90 |
| Prox. | U01–U06 | E01–E06 | 2M / 4F | 26–41 | 163–173 | 45–90 |
| HAR | U01–U06 | E01–E06 | 2M / 4F | 26–41 | 163–173 | 45–90 |
| UID | U01–U06 | E01–E06 | 2M / 4F | 26–41 | 163–173 | 45–90 |
| MSR | P01–P20, PM01 | E11–E13, E15–E20 | – | – | – | 6–40 |
| MSR | U03–U04, U08–U10, U13, U18, U20, U25–U27, U29–U35, UM06 | E11–E20 | 15M / 3F | 23–35 | 155–185 | 50–90 |
| MSR | F01–F04 | E11–E13 | – | – | – | – |

Figure 4: Average CSI quality scores of 16 widely used IoT devices evaluated using our CSI verification tool. Each bar represents the mean score across five measurement trials, with error bars indicating the standard deviation.

Data Collection Protocol. Each session lasts 1–5 minutes, capturing both routine and fall-related activities. Fall events are annotated using synchronized video recordings.

Scale and Composition: 6 environments (homes and offices); 17 participants; 2,770 fall events; 3,930 non-fall activities.

Difficulty-Level Evaluation. Test samples are stratified into Easy, Medium, and Hard tiers based on environmental complexity and device quality. Medium includes fan-induced interference; Hard includes ESP32-based low-quality CSI.

A.6 Breathing Detection

This dataset captures subtle respiration signals under natural sleep conditions using diverse IoT hardware in real homes.

Subjects and Scenarios. Breathing data is collected from 3 participants across 3 residential environments. Deployment setups range from same-room (LoS) to cross-room (NLoS), with and without fan interference.

Table 6: Summary of edge devices, WiFi chipsets, and specifications.

| Device | WiFi Chipset Vendor | Model | Antenna | Bandwidth | Band |
|---|---|---|---|---|---|
| AmazonPlug | MediaTek | MT7697N | 1x1 | 20MHz | 2.4G |
| GoveePlug | Espressif | ESP8266/ESP8285 | 1x1 | 20MHz | 2.4G |
| WyzePlug | Espressif | ESP8266/ESP8285 | 1x1 | 20MHz | 2.4G |
| EightreePlug | Espressif | ESP8266/ESP8285 | 1x1 | 20MHz | 2.4G |
| EchoPlus | MediaTek | MT8516 | 1x1 | 20/40/80MHz | 2.4G & 5G |
| GoogleNest | Qualcomm | IPQ4019 | 1x1 | 20/40MHz | 2.4G & 5G |
| AppleHomePod | - | - | 1x1 | 20/40MHz | 2.4G & 5G |
| EchoSpot | MediaTek | MT6625L | 1x1 | 20/40MHz | 2.4G & 5G |
| EchoShow 8 | MediaTek | MT8183 | 1x1 | 20/40/80MHz | 2.4G & 5G |
| Echodot 2 gen | MediaTek | MT6625LN | 1x1 | 20/40MHz | 2.4G & 5G |
| Echodot 3 gen | MediaTek | MT7658CSN | 1x1 | 20/40/80MHz | 2.4G & 5G |
| Hex Home | Qualcomm | - | 1x2 | 20/40MHz | 5G |
| HealthPod | NXP | 88W889 | 2x2 | 20/40/80MHz | 5G |
| ESP32 | Espressif | S3 | 1x1 | 20/40MHz | 2.4G |
| GoogleNestHub | Broadcom | BCM4345 | 1x1 | 20/40/80MHz | 2.4G & 5G |
| Lyra | Qualcomm | - | 2x2 | 20/40/80MHz | 2.4G & 5G |

Hardware Setup. Devices include Amazon Echo Dots, Echo Plus, Google Nest Hub, and Qualcomm-based 5 GHz routers. Sampling is fixed at 30 Hz.

Data Collection Protocol. Overnight sessions are passively recorded during natural sleep without intervention. Participants optionally log activity context.

Scale and Composition: ∼55,000 breathing samples; ∼45,000 empty-room samples; ∼11,400 fan-interfered samples; diverse device placements and heights (0.47–2.18 m).

Difficulty-Level Evaluation. Difficulty is assigned based on device-user distance, interference level, and deployment complexity. Hard tiers involve distant NLoS setups and overlapping fan motion.

A.7 Room-Level Localization

This dataset supports room-level user localization in typical households with both single- and multi-user presence.

Subjects and Scenarios. Data is collected from 8 users in 6 homes. Three rooms per home are labeled for occupancy. Scenarios include both single and two-user activity.

Hardware Setup. Devices span 8 types (Echo, Google Nest, Apple HomePod, etc.) operating on
Data Collection Protocol. Users annotate their room presence and co-occupancy manually. Sessions reflect natural daily activities.

Scale and Composition: 3,805 single-user samples; 3,257 multi-user samples; 6 diverse environments; 8 device types.

Difficulty-Level Evaluation. Tiers are defined by user count and hardware quality. Easy cases use high-quality CSI from 5 GHz devices; Hard cases include 2.4 GHz plugs and multi-user ambiguity.

A.8 Motion Source Recognition

This dataset captures motion patterns from humans, pets, robots, and fans in diverse indoor settings.

Table 7: Summary of environments (XBYB indicates X bedrooms and Y bathrooms).

Env ID | Type                | Area (sqft) | Layout Type       | # Floors
E01    | Single-family house | 2400        | Multi-room        | 3
E02    | Apartment           | 633         | Studio            | 1
E03    | Apartment           | 1077        | 2B2B              | 1
E04    | Apartment           | 790         | 1B1B              | 1
E05    | Apartment           | 714         | 1B1B              | 1
E06    | Apartment           | 1652        | 3B2B              | 1
E07    | Apartment           | 1250        | 2B2B              | 1
E08    | Single-family house | 1790        | Multi-room        | 2
E09    | Apartment           | 1200        | 2B2B              | 1
E10    | Single-family house | 1904        | Multi-room        | 2
E11    | Single-family house | 1352        | Multi-room        | 2
E12    | Apartment           | 830         | 1B1B              | 1
E13    | Apartment           | 2242        | 4B2B              | 1
E14    | Single-family house | 1700        | Multi-room        | 2
E15    | Single-family house | 2000        | Multi-room        | 2
E16    | Apartment           | 960         | 1B1B              | 1
E17    | Apartment           | 860         | 1B1B              | 1
E18    | Single-family house | 1680        | Multi-room        | 2
E19    | Town house          | 2600        | Multi-room        | 4
E20    | Office              | 1224        | Partitioned rooms | 1
E21    | Apartment           | 700         | 2B1B              | 1
E22    | Single-family house | 1300        | Multi-room        | 2
E23    | Office              | 1500        | Partitioned rooms | 1
E24    | Single-family house | 1250        | Multi-room        | 2
E25    | Single-family house | 1400        | Multi-room        | 2
E26    | Single-family house | 900         | Multi-room        | 3

Subjects and Scenarios. Data include 13 humans (ages 23–34), 11 pets, Roomba robots, and oscillating fans. Activities include walking, sneaking, and simulated intrusion. Environments span homes, townhouses, and offices.

Hardware Setup. CSI is collected via NXP88W8997 2x2 devices at 100 Hz over 58 subcarriers.

Data Collection Protocol. Each session lasts 3–8 minutes. Human data is optionally logged by users; non-human motion is passively captured.

Scale and Composition: ~150K seconds of human motion; ~2,000 minutes of pet activity; ~1,000 minutes of robot activity; ~200 minutes of fan motion.

Difficulty-Level Evaluation. Difficulty is based on motion type, subject diversity, and signal quality. Easy cases include clean human walking or small pets; Hard cases include multiple subjects, large pets, or intrusion patterns under NLoS.

A.9 Multi-task Dataset

This dataset enables multi-task learning across activity recognition, user identification, and proximity estimation.

Subjects and Scenarios. Six users perform 5 activities across 6 homes: walking (at 4 distances), running, jumping, seated breathing, and waving. Cross-user and cross-environment samples are included.

Hardware Setup. Each environment uses 5–7 IoT devices across 2.4/5 GHz bands. Devices include Echo, Google Nest, Apple HomePod, ESP32 plugs, and more.

Data Collection Protocol. Each activity lasts 3–6 minutes. Participants use a lightweight UI to annotate activity boundaries and proximity distances.

Scale and Composition: 41,503 total samples; ~5,000–6,000 samples per activity; 4 proximity distances: 0.5, 1.5, 2.5, 3.5 m.

Cross-Domain Evaluation. To evaluate generalization, the following domains are held out:

• Cross-User: U02
• Cross-Environment: E05
• Cross-Device: Amazon Plug, Echo Spot

These exclusions are reserved for the OOD test sets used in Appendix C; the sketch after this list illustrates the held-out logic.
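The following sketch shows one way such held-out OOD splits can be materialized from sample metadata. The dataframe columns (user, env, device) and file names are hypothetical; CSI-Bench ships predefined splits, so this is only an illustration of the exclusion logic described above.

```python
import pandas as pd

# Hypothetical sample index; real CSI-Bench splits are predefined files.
samples = pd.DataFrame({
    "path":   ["s0.h5", "s1.h5", "s2.h5", "s3.h5"],
    "user":   ["U01", "U02", "U03", "U01"],
    "env":    ["E01", "E03", "E05", "E02"],
    "device": ["Echo", "AmazonPlug", "GoogleNest", "EchoSpot"],
})

# Held-out axes per the Cross-Domain Evaluation description.
HELD_OUT = {
    "cross_user":   samples["user"].eq("U02"),
    "cross_env":    samples["env"].eq("E05"),
    "cross_device": samples["device"].isin(["AmazonPlug", "EchoSpot"]),
}

# Anything matching a held-out axis is excluded from the training pool.
ood_mask = pd.concat(HELD_OUT.values(), axis=1).any(axis=1)
train_df = samples[~ood_mask]
ood_sets = {name: samples[mask] for name, mask in HELD_OUT.items()}
```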
B Model Architectures and Training Details

To support rigorous evaluation across in-distribution, cross-domain, and few-shot generalization, we implement and benchmark a suite of neural network models representative of contemporary time-series and vision-inspired architectures. All models are implemented in PyTorch and trained under consistent protocols unless otherwise noted.

B.1 Supervised Learning Architectures

We benchmark the following supervised architectures across all tasks:

Multi-Layer Perceptron (MLP). The MLP consists of three fully-connected layers with ReLU activations and dropout for regularization. The input to the model is a flattened CSI feature vector, capped at a maximum of 10,000 dimensions to control memory usage. Specifically, the architecture is: [Input → Linear(512) → ReLU → Dropout(0.5) → Linear(128) → ReLU → Dropout(0.3) → Linear(Output classes)].

Long Short-Term Memory (LSTM). Our LSTM baseline uses a bidirectional LSTM with two layers of 256 hidden units each, followed by a linear classifier with dropout for regularization: [Input → Bi-LSTM(256, 2 layers, dropout=0.3) → Linear(256) → ReLU → Dropout(0.3) → Linear(Output classes)].

ResNet-18. We modify a standard ResNet-18 architecture to accept single-channel input (WiFi CSI data) by adapting the first convolutional layer accordingly. The final fully connected layer is sized to the task-specific number of classes.

Vision Transformer (ViT). The ViT model converts input CSI data into embedded patches using convolutional patch embeddings, followed by Transformer encoder layers (6 layers, embedding dimension=128, 4 heads). A class token is prepended for classification tasks. Dropout and layer normalization are employed for stability.

Transformer. This architecture employs Transformer encoder layers (4 layers, model dimension=256, 8 attention heads). Inputs are linearly projected into the model dimension, positional encodings are added, and global average pooling is applied before classification. Dropout is set to 0.1 to prevent overfitting.

PatchTST. PatchTST utilizes temporal patch embeddings (patch length=16, stride=8) processed through Transformer encoder layers (4 layers, embedding dimension=128, 4 heads). The architecture includes positional encodings, dropout (0.1), and a CLS token or mean-pooling strategy for final prediction.

TimeSformer-1D. TimeSformer-1D adopts patch embeddings (patch size=4) followed by separate temporal and feature attention within Transformer blocks (4 layers, embedding dimension=128, 8 heads). A class token and positional embeddings are included for classification, with dropout layers added for robustness.

All models use a final linear classifier and are initialized using Xavier uniform initialization unless otherwise specified.

B.2 Multi-Task Learning with Adapters

To enable efficient multi-task learning across diverse WiFi sensing tasks, we implement task-specific adapter modules on top of a shared backbone (a minimal sketch follows this list):

• LoRA Adapters: For the Transformer backbone model, we apply LoRA to the attention modules. Each task has separate adapter weights (rank=8, α=32, dropout=0.05).
• Task Adapters: A residual two-layer bottleneck MLP (down-project, GELU, up-project, followed by LayerNorm) is applied post-backbone for each task.
• Task-Specific Heads: Each task has a separate classification head, initialized via Xavier uniform.

During training, we activate one task at a time and update both the shared backbone and the active task's adapter and head.
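A minimal PyTorch sketch of the residual bottleneck task adapter and per-task heads described above. The module names, bottleneck width, feature dimension, and toy backbone are our own illustrative choices, not the benchmark's released code.

```python
import torch
import torch.nn as nn

class TaskAdapter(nn.Module):
    """Residual two-layer bottleneck MLP: down-project, GELU, up-project, LayerNorm."""
    def __init__(self, dim: int = 256, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(x + self.up(self.act(self.down(x))))

class MultiTaskModel(nn.Module):
    def __init__(self, backbone: nn.Module, dim: int, num_classes: dict):
        super().__init__()
        self.backbone = backbone  # shared across all tasks
        self.adapters = nn.ModuleDict({t: TaskAdapter(dim) for t in num_classes})
        self.heads = nn.ModuleDict({t: nn.Linear(dim, c) for t, c in num_classes.items()})
        for head in self.heads.values():          # Xavier-uniform head init
            nn.init.xavier_uniform_(head.weight)
            nn.init.zeros_(head.bias)

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        feats = self.backbone(x)                  # pooled features of shape (B, dim)
        return self.heads[task](self.adapters[task](feats))

# One task is active per step; its adapter/head plus the shared backbone get gradients.
model = MultiTaskModel(nn.Sequential(nn.Flatten(), nn.LazyLinear(256)), 256,
                       {"har": 5, "uid": 6, "prox": 4})
logits = model(torch.randn(8, 1, 58, 100), task="har")
```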
B.3 Training Protocol

All models are trained with the AdamW optimizer, a batch size of 128, and an initial learning rate of 1e−3. We apply cosine learning-rate decay with 5 warm-up epochs and a weight decay of 1e−5. Training lasts up to 100 epochs, with early stopping based on validation loss (patience = 15). We use categorical cross-entropy as the loss function. Hyperparameters are tuned based on model accuracy on the validation set. Data is loaded from HDF5 using the standardized splits discussed in Section 5.2 and label mappings. Our experiments use NVIDIA GeForce RTX 4090 GPUs and AWS SageMaker, training with three random seeds across all datasets. For training on AWS SageMaker, we use ml.g5.12xlarge instances, each with 4 NVIDIA A10G Tensor Core GPUs. Training time per task ranges from 0.5 to 13 hours.
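A condensed sketch of this training protocol in PyTorch. The model, dataloaders, and the particular warm-up realization (linear warm-up followed by cosine decay) are assumptions for illustration; the paper does not specify the exact scheduler composition.

```python
import torch
from torch import nn, optim

def train(model, train_loader, val_loader, epochs=100, warmup_epochs=5, patience=15):
    opt = optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-5)
    # Assumed: linear warm-up for 5 epochs, then cosine decay to the end.
    warmup = optim.lr_scheduler.LinearLR(opt, start_factor=0.1, total_iters=warmup_epochs)
    cosine = optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs - warmup_epochs)
    sched = optim.lr_scheduler.SequentialLR(opt, [warmup, cosine], milestones=[warmup_epochs])
    loss_fn = nn.CrossEntropyLoss()               # categorical cross-entropy

    best_val, stale = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        sched.step()

        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val < best_val:
            best_val, stale = val, 0
            torch.save(model.state_dict(), "best.pt")   # keep best checkpoint
        elif (stale := stale + 1) >= patience:           # early stopping on val loss
            break
```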
C Additional Experiments

The results in Appendix C are stratified by the difficulty tiers and OOD evaluation protocols defined in Appendix A.5–A.9. For each task, performance is reported across (i) three difficulty levels (Easy, Medium, Hard) reflecting environmental and signal complexity for single-task datasets (Appendix C.1), and (ii) three out-of-distribution (OOD) axes (cross-user, cross-environment, and cross-device) for multi-task datasets (Appendix C.2). All splits are predefined during data collection and are described per task in Appendix A.

C.1 Evaluation with Difficulty Tiers

Table 8 compares Fall Detection performance across three difficulty levels. All models perform well under the Easy setting, with LSTM, PatchTST, ResNet18, and ViT achieving F1-scores above 97%. MLP underperforms due to limited temporal modeling. In the Medium tier, performance drops notably: ResNet18 and ViT remain strong (F1 ~77%), while PatchTST degrades significantly (F1 ~56%). TimeSformer-1D and Transformer show moderate results. In the Hard tier, ResNet18 leads with 68.08% F1, while others degrade further. The larger variance in the Medium and Hard tiers is due to smaller dataset sizes, which increase sensitivity to noise and reduce performance stability.

Table 8: Fall Detection performance comparison of supervised models. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs.

Model              | Easy Acc     | Easy F1      | Medium Acc    | Medium F1     | Hard Acc     | Hard F1
LSTM [17]          | 97.62 ± 0.52 | 97.62 ± 0.52 | 69.12 ± 5.63  | 68.20 ± 5.19  | 67.12 ± 2.96 | 66.05 ± 4.03
MLP [32]           | 94.84 ± 0.85 | 94.84 ± 0.85 | 70.59 ± 9.61  | 70.19 ± 9.91  | 63.70 ± 2.37 | 63.41 ± 2.25
PatchTST [29]      | 97.13 ± 0.72 | 97.13 ± 0.72 | 61.76 ± 7.59  | 56.31 ± 12.22 | 62.67 ± 2.82 | 61.36 ± 4.14
ResNet18 [16]      | 97.27 ± 0.32 | 97.27 ± 0.32 | 77.94 ± 5.63  | 76.96 ± 6.46  | 68.84 ± 3.04 | 68.08 ± 3.58
TimeSformer-1D [8] | 96.58 ± 0.50 | 96.59 ± 0.49 | 67.65 ± 7.59  | 64.55 ± 11.72 | 65.75 ± 9.29 | 61.19 ± 17.16
Transformer [34]   | 97.08 ± 0.54 | 97.08 ± 0.54 | 69.12 ± 5.63  | 68.10 ± 5.91  | 65.07 ± 6.94 | 63.89 ± 7.40
ViT [11]           | 97.40 ± 0.42 | 97.40 ± 0.42 | 77.94 ± 13.04 | 77.07 ± 14.46 | 65.75 ± 3.71 | 64.06 ± 6.66

Table 10: Localization performance comparison of supervised models. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs.

Model              | Easy Acc      | Easy F1       | Medium Acc    | Medium F1     | Hard Acc      | Hard F1
LSTM [17]          | 99.72 ± 0.32  | 99.75 ± 0.29  | 100.00 ± 0.00 | 100.00 ± 0.00 | 98.31 ± 0.50  | 98.31 ± 0.50
MLP [32]           | 91.36 ± 0.93  | 92.03 ± 0.82  | 96.11 ± 1.31  | 96.18 ± 1.29  | 80.20 ± 1.06  | 80.03 ± 1.19
PatchTST [29]      | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.90 ± 0.19  | 99.95 ± 0.10  | 99.86 ± 0.17  | 99.86 ± 0.18
ResNet18 [16]      | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
TimeSformer-1D [8] | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00
Transformer [34]   | 99.30 ± 0.36  | 99.40 ± 0.24  | 99.90 ± 0.19  | 99.90 ± 0.19  | 98.95 ± 0.66  | 98.95 ± 0.66
ViT [11]           | 99.79 ± 0.42  | 99.82 ± 0.35  | 99.90 ± 0.19  | 99.90 ± 0.19  | 99.50 ± 0.23  | 99.50 ± 0.23
Table 9: Breathing Detection performance comparison of supervised models. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs.

Model              | Easy Acc     | Easy F1      | Medium Acc   | Medium F1    | Hard Acc     | Hard F1
LSTM [17]          | 99.11 ± 0.17 | 99.11 ± 0.17 | 98.61 ± 0.13 | 98.61 ± 0.13 | 98.08 ± 0.28 | 98.08 ± 0.28
MLP [32]           | 98.54 ± 0.14 | 98.54 ± 0.14 | 97.67 ± 0.15 | 97.67 ± 0.15 | 96.46 ± 0.13 | 96.46 ± 0.13
PatchTST [29]      | 99.20 ± 0.06 | 99.20 ± 0.06 | 98.77 ± 0.19 | 98.77 ± 0.19 | 98.49 ± 0.22 | 98.49 ± 0.22
ResNet18 [16]      | 98.94 ± 0.17 | 98.94 ± 0.17 | 98.42 ± 0.16 | 98.42 ± 0.16 | 98.32 ± 0.25 | 98.32 ± 0.25
TimeSformer-1D [8] | 99.05 ± 0.22 | 99.05 ± 0.22 | 98.29 ± 0.31 | 98.29 ± 0.31 | 98.60 ± 0.23 | 98.60 ± 0.23
Transformer [34]   | 98.23 ± 0.24 | 98.23 ± 0.24 | 97.31 ± 0.47 | 97.31 ± 0.47 | 97.54 ± 0.31 | 97.54 ± 0.31
ViT [11]           | 99.56 ± 0.08 | 99.56 ± 0.08 | 99.41 ± 0.08 | 99.41 ± 0.08 | 99.17 ± 0.11 | 99.17 ± 0.11

Table 9 presents breathing detection results, where all models maintain high accuracy and F1-scores (>96%) across tiers. ViT performs best, consistently achieving over 99% F1. LSTM and PatchTST follow closely, especially in the Easy setting. Even in the Hard tier, model performance drops only slightly. ResNet18 and TimeSformer-1D also generalize well, with minimal performance variance. These results suggest that breathing patterns are relatively easy to model and robust to environmental changes.

Table 10 demonstrates that localization is a highly separable task. Most models, including PatchTST, ResNet18, and TimeSformer-1D, achieve perfect scores in the Easy and Medium tiers and retain near-perfect performance in the Hard tier. ViT, Transformer, and LSTM also show strong results (F1 >98%). MLP consistently underperforms, particularly in the Hard tier (F1: 80.03%), likely due to limited spatial modeling. Overall, these results indicate that CSI-based localization is a highly separable task that most temporally or spatially aware models can solve with high reliability.

Table 11: Motion Source Recognition performance comparison of supervised models. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs.

Model              | Easy Acc     | Easy F1      | Medium Acc   | Medium F1    | Hard Acc     | Hard F1
LSTM [17]          | 96.65 ± 0.96 | 96.99 ± 0.78 | 98.79 ± 0.11 | 98.80 ± 0.11 | 96.94 ± 0.94 | 96.94 ± 0.95
MLP [32]           | 98.21 ± 0.28 | 98.29 ± 0.18 | 99.13 ± 0.11 | 99.13 ± 0.11 | 98.19 ± 0.36 | 98.19 ± 0.36
PatchTST [29]      | 98.01 ± 0.69 | 98.28 ± 0.54 | 98.59 ± 0.36 | 98.59 ± 0.36 | 97.49 ± 0.71 | 97.49 ± 0.72
ResNet18 [16]      | 99.86 ± 0.11 | 99.86 ± 0.11 | 99.73 ± 0.05 | 99.73 ± 0.05 | 99.48 ± 0.32 | 99.48 ± 0.32
TimeSformer-1D [8] | 96.56 ± 0.64 | 96.92 ± 0.56 | 98.68 ± 0.18 | 98.69 ± 0.18 | 97.32 ± 0.31 | 97.31 ± 0.32
Transformer [34]   | 98.73 ± 0.62 | 98.80 ± 0.55 | 98.63 ± 0.17 | 98.63 ± 0.17 | 98.08 ± 0.55 | 98.08 ± 0.55
ViT [11]           | 98.38 ± 0.87 | 98.41 ± 0.81 | 99.27 ± 0.32 | 99.27 ± 0.32 | 98.10 ± 0.45 | 98.10 ± 0.45
Table 11 shows consistently high motion source recognition performance across all difficulty levels. Most models achieve F1-scores above 96%, with ResNet18 exceeding 99% even in the Hard setting and ViT close behind. MLP, PatchTST, and Transformer also perform well, indicating that the task is relatively easy to separate. Performance variance remains low, suggesting stable generalization.

C.2 Evaluation on OOD Splits

Tables 12–14 present the performance of supervised models under three cross-domain generalization settings (Cross-Device, Cross-Environment, and Cross-User) for Human Activity Recognition, Human Identification, and Proximity Recognition, respectively. Across all tasks, ViT consistently achieves the highest performance, with the best F1-scores in most OOD settings.

Table 12: Human Activity Recognition cross-domain performance. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs.

Model              | Cross-Device Acc | Cross-Device F1 | Cross-Env Acc | Cross-Env F1 | Cross-User Acc | Cross-User F1
LSTM [17]          | 60.57 ± 2.12 | 57.04 ± 2.32 | 53.65 ± 0.89 | 46.22 ± 0.72 | 53.33 ± 2.11 | 45.70 ± 2.01
MLP [32]           | 56.33 ± 1.23 | 50.79 ± 1.11 | 52.15 ± 0.85 | 43.45 ± 1.40 | 52.06 ± 0.54 | 42.05 ± 0.97
PatchTST [29]      | 61.61 ± 1.81 | 58.05 ± 1.54 | 56.85 ± 0.63 | 49.55 ± 0.47 | 56.44 ± 1.47 | 49.25 ± 1.33
ResNet18 [16]      | 66.21 ± 1.96 | 63.57 ± 1.90 | 57.98 ± 0.87 | 50.90 ± 0.96 | 59.24 ± 1.47 | 52.07 ± 1.53
TimeSformer-1D [8] | 60.24 ± 1.00 | 55.70 ± 1.20 | 54.65 ± 0.93 | 46.63 ± 0.79 | 54.95 ± 0.84 | 45.74 ± 0.79
Transformer [34]   | 61.82 ± 0.95 | 57.80 ± 0.78 | 54.92 ± 0.98 | 47.17 ± 1.12 | 54.72 ± 0.84 | 46.67 ± 1.00
ViT [11]           | 66.33 ± 1.73 | 63.65 ± 1.69 | 58.87 ± 1.12 | 51.86 ± 1.31 | 59.00 ± 1.36 | 51.48 ± 1.26

For Human Activity Recognition (Table 12), performance drops significantly under all OOD axes, particularly in the Cross-Environment and Cross-User settings, where even the top-performing models (ViT and ResNet18) show F1-scores below 53%. This highlights the challenge of domain shifts in activity classification.

In the Human Identification results (Table 13), ViT again leads with a 69.55% F1 under Cross-Device, followed closely by ResNet18, suggesting strong person-specific feature learning.

Table 13: Human Identification cross-domain performance. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs.

Model              | Cross-Device Acc | Cross-Device F1
LSTM [17]          | 59.25 ± 1.69 | 59.32 ± 1.72
MLP [32]           | 57.31 ± 1.61 | 57.15 ± 1.45
PatchTST [29]      | 60.45 ± 1.07 | 60.56 ± 1.17
ResNet18 [16]      | 68.07 ± 1.93 | 68.21 ± 1.97
TimeSformer-1D [8] | 60.84 ± 0.81 | 61.00 ± 0.79
Transformer [34]   | 59.94 ± 0.77 | 59.81 ± 0.96
ViT [11]           | 69.37 ± 1.53 | 69.55 ± 1.61
Table 14: Proximity Recognition cross-domain performance. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs.

Model              | Cross-Device Acc | Cross-Device F1 | Cross-Env Acc | Cross-Env F1 | Cross-User Acc | Cross-User F1
LSTM [17]          | 24.89 ± 2.97 | 24.29 ± 3.02 | 28.64 ± 1.08 | 26.76 ± 1.02 | 29.20 ± 0.55 | 23.83 ± 1.34
MLP [32]           | 28.73 ± 0.89 | 27.31 ± 0.84 | 25.76 ± 0.77 | 20.86 ± 1.41 | 26.19 ± 0.57 | 17.32 ± 0.74
PatchTST [29]      | 28.13 ± 2.17 | 26.60 ± 1.38 | 26.42 ± 1.88 | 25.35 ± 1.63 | 28.86 ± 0.67 | 23.15 ± 0.76
ResNet18 [16]      | 31.19 ± 5.81 | 27.93 ± 4.95 | 30.67 ± 3.06 | 28.01 ± 3.78 | 32.67 ± 1.50 | 27.64 ± 2.53
TimeSformer-1D [8] | 27.95 ± 2.04 | 25.85 ± 2.67 | 29.73 ± 2.99 | 27.93 ± 3.97 | 31.19 ± 0.87 | 26.98 ± 1.48
Transformer [34]   | 30.68 ± 3.11 | 28.76 ± 3.51 | 29.67 ± 1.63 | 27.12 ± 1.79 | 30.26 ± 1.94 | 25.97 ± 2.26
ViT [11]           | 32.04 ± 1.95 | 30.11 ± 2.12 | 30.83 ± 2.36 | 28.62 ± 2.51 | 31.54 ± 1.66 | 26.94 ± 1.77

Lastly, Proximity Recognition (Table 14) is the most challenging task, with all models performing poorly across OOD conditions. Even the best-performing ViT model achieves only around 30% F1, and large variances are observed, indicating poor robustness and generalization.

Overall, these results reveal that while certain models such as ViT and ResNet18 show relative resilience, significant performance degradation remains under distribution shifts, underscoring the need for more robust domain generalization strategies in CSI-based sensing tasks.
Evaluating the Retrieval Robustness of Large Language Models

Shuyang Cao♠*, Karthik Radhakrishnan♡, David Rosenberg♡, Steven Lu♡, Pengxiang Cheng♡, Lu Wang♠, Shiyue Zhang♡
Bloomberg♡, University of Michigan♠
{kradhakris1, drosenberg44, slu126, pcheng134, szhang1061}@bloomberg.net
{caoshuy, wangluxy}@umich.edu

Abstract

Retrieval-augmented generation (RAG) generally enhances large language models' (LLMs) ability to solve knowledge-intensive tasks. But RAG may also lead to performance degradation due to imperfect retrieval and the model's limited ability to leverage retrieved content. In this work, we evaluate the robustness of LLMs in practical RAG setups (henceforth retrieval robustness). We focus on three research questions: (1) whether RAG is always better than non-RAG; (2) whether more retrieved documents always lead to better performance; and (3) whether document orders impact results. To facilitate this study, we establish a benchmark of 1,500 open-domain questions, each with retrieved documents from Wikipedia. We introduce three robustness metrics, each corresponding to one research question. Our comprehensive experiments, involving 11 LLMs and 3 prompting strategies, reveal that all of these LLMs exhibit surprisingly high retrieval robustness; nonetheless, different degrees of imperfect robustness hinder them from fully utilizing the benefits of RAG.¹

1 Introduction

Large language models (LLMs) learn to acquire massive amounts of knowledge through large-scale pre-training, enabling them to answer knowledge-intensive questions (OpenAI et al., 2024; Anthropic, July 2024; Meta, September 2024). However, relying exclusively on parametric knowledge can lead to inaccuracies when dealing with unseen or time-sensitive information, or when the model fails to precisely retrieve relevant knowledge from its own parameters. To alleviate these limitations, retrieval-augmented generation (RAG) is proposed, where external documents containing information relevant to the task are fetched from a datastore and provided to the model as context during inference (Chen et al., 2017; Lewis et al., 2020).

* Work done during an internship at Bloomberg
¹ We will release our evaluation harness soon.

[Figure 1: Comparison of retrieval robustness and QA task performance across various LLMs. The y-axis represents robustness (geometric mean of the three robustness metrics), while the x-axis represents task performance (average across all k, o, retrievers, and datasets). OpenAI GPT-4o and o3-mini have very close robustness and performance.]

Despite its potential, RAG does not always guarantee performance improvements. The retriever might fail to retrieve relevant documents, and the LLMs might be distracted by irrelevant content, leading to performance drops (Mallen et al., 2023). As achieving a perfect retriever remains an elusive goal in practice, it is crucial for LLMs to behave robustly in the RAG setup to reduce risks during actual deployment.

Previous work has shown that LLMs are particularly vulnerable when provided with noisy contexts that are synthetically constructed (Chen et al., 2024). Distracted by specially designed misleading information, models tend to produce incorrect outputs (Wu et al., 2024b).
Despite yielding valuable insights, synthetically constructed contexts are dissimilar to realistic retrieved contexts, which
are usually drawn from credible corpora like Wikipedia and trusted news outlets. To bridge this gap, this work benchmarks LLMs' robustness under realistic RAG setups. We consider an LLM retrieval robust if (1) its RAG performance is equal to or better than its non-RAG performance; (2) adding more retrieved documents leads to equal or better performance; and (3) its RAG performance is invariant to the order of retrieved documents. Three metrics are defined correspondingly: no-degradation rate, retrieval size robustness, and retrieval order robustness.

We focus on open-domain question answering, a knowledge-intensive task where RAG is widely adopted. We curate a benchmark of 1,500 samples by randomly drawing 500 questions each from three datasets (Natural Questions (Kwiatkowski et al., 2019), Hotpot QA (Yang et al., 2018), and ASQA (Stelmakh et al., 2022)), covering diverse domains and complexities. To construct retrieved contexts, we leverage two retrievers: a canonical sparse BM25 retriever (Robertson and Zaragoza, 2009) and a dense retriever based on a strong embedding model, BGE (Xiao et al., 2023). Both retrievers retrieve context from Wikipedia articles. For analyses of retrieval size and order robustness, we evaluate RAG setups with multiple retrieval sizes (5 to 100 documents) and three ways of ordering them (original rank, reversed rank, random shuffle). Our experiments cover 11 LLMs from both open-source and proprietary families. Each LLM is evaluated via vanilla prompting and two more sophisticated prompting strategies: one augments the model's own knowledge, and the other filters relevant retrieval contexts.

We find that LLMs generally demonstrate strong robustness, achieving over 80% scores on the geometric mean of the three retrieval robustness metrics, as shown in Figure 1. This indicates that, oftentimes, (1) RAG is better than non-RAG; (2) more retrieved documents lead to better performance; and (3) the order of the documents does not matter much. Nonetheless, the imperfect retrieval robustness reflects undesired behaviors, notably the performance trade-off among individual samples (i.e., decreasing performance on some examples while improving it on others), which prevents the models from fully utilizing the benefits of RAG and destabilizes response quality when changing the retrieval size or order. Such unpredictable trade-offs pose risks for realistic applications that demand consistent outcomes. Therefore, retrieval robustness provides a novel perspective for benchmarking and understanding LLMs' RAG performance. For example, as shown in Figure 1, even though GPT-4o/o3-mini and Claude 3.5 Sonnet have similar RAG task performance, the higher retrieval robustness of GPT-4o/o3-mini makes them preferable in practice. Finally, we find that retrieval robustness can be enhanced by augmenting the generated answers with the model's own knowledge, though this also limits the potential task performance gain from RAG.

Our contributions are summarized as follows:

• We propose sample-level metrics to rigorously measure retrieval robustness, i.e., how robustly LLMs handle queries in RAG setups, which provides a new perspective for understanding LLMs' RAG performance.

• We compile a benchmark for evaluating retrieval robustness, following common RAG setups in practice. It comprises diverse open-domain QA tasks along with retrieved documents from Wikipedia obtained by widely used, strong retrievers.
• We conduct a comprehensive empirical study of 11 modern LLMs with 3 different prompting strategies, revealing the generally good robustness of LLMs in more realistic settings and highlighting the consequences of their imperfect robustness.

2 Related Works

Retrieval-Augmented Generation (RAG) enhances parametric models by retrieving semantically relevant information from a knowledge base (Gao et al., 2023b; Wu et al., 2024a). Typically, it involves a retriever and a parametric language model. RAG can potentially help adapt pre-trained models to up-to-date knowledge, ground models with long-tail information, and thus improve factuality and accuracy (Asai et al., 2024). The pioneering RAG framework, DrQA (Chen et al., 2017), was introduced to tackle knowledge-intensive open-domain question answering (QA) tasks, which is still the main evaluation target of recent works (Wu et al., 2024b; Chen et al., 2024). RAG has also been used for non-knowledge-intensive tasks like language modeling, understanding, and reasoning (Borgeaud et al., 2022; Guo et al., 2023; Izacard et al., 2024).

There are many different ways to implement RAG. Some works, e.g., kNN-LM (Khandelwal et al., 2020), retrieve hidden states, while many other works retrieve text. To utilize the retrieved documents, some works modified the model architecture: e.g., FiD (Izacard and Grave, 2021) encoded each document separately and concatenated their hidden states in the decoder, while RETRO (Borgeaud et al., 2022) added a chunked cross-attention module to the regular Transformer block. Another widely used method is to simply include the retrieved documents directly in the input. This can be done by putting them all together in one context (Ram et al., 2023; Lee et al., 2024) or by generating answers with each of them separately and ensembling the results (Guu et al., 2020; Lewis et al., 2020; Shi et al., 2024). Some works train the retriever and the language model jointly (Lewis et al., 2020; Borgeaud et al., 2022; Lin et al., 2024), while others fix the model and only train the retriever (Ram et al., 2023; Shi et al., 2024). In this paper, we opt for the simplest setup: we use off-the-shelf retrievers and LLMs, and we use the retrieved documents by directly including them in a single context window. This approach has become increasingly practical with the long-context ability of modern LLMs (Lee et al., 2024).

Retrieval Robustness. Neural language models are shown to be easily distracted by adversarially inserted irrelevant content (Jia and Liang, 2017; Shi et al., 2023; Weston and Sukhbaatar, 2023). However, irrelevant context arises naturally in any RAG setup due to the imperfect retriever. Chen et al. (2024) showed that LLM-based RAG performance goes down when increasing the noise rate, i.e., the proportion of documents that are relevant to the question but do not contain any information about the answer. Wu et al. (2024b) conducted a deeper analysis and found that highly semantically related information is more likely to distract LLMs. Thakur et al. (2024) evaluated LLM RAG performance with a completely irrelevant set of documents and observed non-trivial hallucination rates.
Yoran et al. (2024) introduced the concept of retrieval robustness, stating that retrieval-robust LLMs should satisfy: "(a) when relevant, the retrieved context should improve model performance; (b) when irrelevant, the retrieved context should not hurt model performance." However, all these works handcrafted controlled yet synthetic evaluation setups that mix irrelevant contexts with relevant ones. Following the same spirit, we instead resort to a more realistic and practical setup where we simply pick the top-K contexts returned by a retriever, with a natural mixture of relevant and irrelevant content. We also extend the definition of retrieval robustness to the three conditions stated in the introduction. In addition, some recent works try to make RAG robust to intentional knowledge-corruption attacks, e.g., injecting malicious facts (Zou et al., 2024; Anonymous, 2024), which is not the type of robustness we evaluate in this paper.

3 Robustness Metrics

In this section, we present the three critical metrics for evaluating the retrieval robustness of an LLM system, illustrated in Figure 2. We define an LLM system as a backbone LLM paired with a prompting strategy. Let f(q, k, o) denote the performance of an LLM system, where q is the task query, k is the number of retrieved documents, and o specifies the order of the retrieved documents. In this paper, f(q, k, o) is the correctness of the model's response to q, assessed by an LLM judge comparing against the reference answer (§4.1). When k > 0, f(q, k, o) represents the performance of the LLM system in the RAG setup. For consistency, we use f(q, 0) to denote the performance of the LLM system in the non-RAG setup, where the model answers the query using its own knowledge. See §4.3 for the choices of k and o in our experiments.

[Figure 2: Our retrieval robustness metrics, targeting three research questions: (1) whether RAG is always better than non-RAG; (2) whether more retrieved documents always lead to better performance; (3) whether different document orders lead to consistent results.]

No-Degradation Rate (NDR). This metric measures how often the LLM system's performance with RAG, f(q, k, o) (for any k > 0 and o), is at least as good as its performance without RAG, f(q, 0). It is calculated as:

NDR = (1/Z) · Σ_{q∈Q} Σ_{k∈K} Σ_{o∈O} 1[ f(q, k, o) ≥ f(q, 0) ]    (1)

where 1[·] is the indicator function, K includes all choices of numbers of retrieved documents, O represents all possible document orders used in the benchmark, and Q is the set of all task samples. Z = |Q| · |K| · |O| is the normalization factor for the aggregation. A high NDR implies that, for most queries, using retrieval does not degrade performance relative to the non-RAG baseline.

Retrieval Size Robustness (RSR). This metric examines how the system behaves as the number of retrieved documents increases. Specifically, for each task query q and each value of k, we check whether the performance is maintained or improved compared to all smaller values of k. RSR only considers k > 0, not involving the effect of NDR.
Results for each value of k are then aggregated across all task samples, formally defined as:

RSR_(q, k_i, o) = 1[ f(q, k_i, o) ≥ f(q, k_j, o) for all j < i ]

RSR = (1/Z) · Σ_{q∈Q} Σ_{k_i∈K, i>1} Σ_{o∈O} RSR_(q, k_i, o)    (2)

where Z = |Q| · (|K| − 1) · |O|. A high RSR indicates that performance rarely degrades when adding more retrieved documents.

Retrieval Order Robustness (ROR). ROR concerns the sensitivity of the system to the order of the same set of retrieved documents. For a task sample q and k > 0, let O denote the selected choices of permutations of the k documents. We compute the standard deviation of model performance over all permutations o ∈ O, denoted σ_{o∈O}[f(q, k, o)]. For performance metrics bounded between 0 and 1, the standard deviation is bounded between 0 and 0.5; we therefore scale it by a factor of 2 so that the robustness metric ranges between 0 and 1. We compute the ROR score as:

ROR = (1/Z) · Σ_{q∈Q} Σ_{k∈K} ( 1 − 2 · σ_{o∈O}[ f(q, k, o) ] )    (3)

where Z = |Q| · |K|. A higher ROR means that different permutations of the same set of documents produce more consistent performance.

The three metrics capture complementary aspects of retrieval robustness, reflecting different desired behaviors of LLM systems with RAG in real-world applications. NDR provides a safety guarantee that retrieval will not harm performance; RSR is critical for scenarios where retrieval size can be scaled up for enhanced performance; and ROR is important for situations where document ranking is imperfect. Note that, for simplicity, we omit the marginalization over the two different retrievers (see §4.3) from the equations of all three metrics.

4 Benchmark Setups

We conduct experiments to benchmark the retrieval robustness of LLM systems. Though RAG can be applied to various tasks, we focus on the task where RAG is most commonly adopted: answering knowledge-intensive open-domain questions.

4.1 Data and Evaluation Metrics

Open-domain QA Tasks. We sample from three QA datasets. Natural Questions (Kwiatkowski et al., 2019) contains samples derived from Google Search queries, covering a broad range of questions real-world users ask online; Hotpot QA (Yang et al., 2018) is a multi-hop QA dataset that requires chaining multiple passages to answer questions; ASQA (Stelmakh et al., 2022) targets extraction of key information from multiple sources. We randomly sample 500 examples from each of the datasets, totaling 1,500 samples.

Evaluation Metrics. Previous work usually used string-match metrics for answer evaluation (Mallen et al., 2023; Gao et al., 2023a). However, string matching is rigid and cannot evaluate model performance very well. Therefore, we prompt Llama-3.3-70B-Instruct (see the prompts we used in Appendix C) to evaluate whether the generated responses align with the gold answers.²

² We also tried GPT-4o as the judge initially. However, due to cost constraints for large-scale evaluation, we opted for Llama-3.3-70B-Instruct. On a subset of 2,000 samples, the two judges agree 93% of the time.

Retrieval Corpus. We use Wikipedia as the corpus to retrieve documents from. We processed the wikidump from June 2024, which contains 6 million articles. We split each article into chunks by double newlines, resulting in 20 million chunks. Each chunk is treated as an independent "document" for retrieval.
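To make the three metrics of Section 3 concrete, the sketch below computes NDR, RSR, and ROR from an array of per-sample scores following Equations (1)–(3). The array layout and function names are our own, not the authors' released evaluation harness.

```python
import numpy as np

def robustness_metrics(scores: np.ndarray, non_rag: np.ndarray) -> dict:
    """scores: (|Q|, |K|, |O|) correctness for RAG runs, K sorted ascending;
    non_rag: (|Q|,) correctness without retrieval. Implements Eqs. (1)-(3)."""
    # Eq. (1): fraction of (q, k, o) where RAG is at least as good as non-RAG.
    ndr = np.mean(scores >= non_rag[:, None, None])
    # Eq. (2): for each k_i (i > 1), performance must dominate all smaller k_j,
    # i.e., be >= the running maximum over preceding retrieval sizes.
    running_max = np.maximum.accumulate(scores, axis=1)
    rsr = np.mean(scores[:, 1:, :] >= running_max[:, :-1, :])
    # Eq. (3): one minus twice the std of scores across document orders.
    ror = np.mean(1.0 - 2.0 * scores.std(axis=2))
    return {"NDR": ndr, "RSR": rsr, "ROR": ror}

# Toy example: 2 queries, retrieval sizes K = (5, 10), 3 orders each.
scores = np.array([[[1, 1, 1], [1, 0, 1]],
                   [[0, 0, 1], [1, 1, 1]]], dtype=float)
print(robustness_metrics(scores, non_rag=np.array([1.0, 0.0])))
```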
4.2 LLM Systems

Backbone LLMs. 11 LLMs from three open-source families and two proprietary families are tested, including Llama-3 Instruct (3.1-8B, 3.1-70B, 3.2-1B, 3.2-3B) (Meta, July 2024; September 2024), Mistral Instruct (Nemo, Large) (Mistral.ai, July 2024; Feb. 2024), Command (R, R Plus) (Cohere, Aug. 2024), OpenAI GPT-4o (OpenAI et al., 2024), o3-mini (OpenAI, 2025), and Claude-3.5-sonnet (Anthropic, July 2024).

Prompting Strategies. Besides the vanilla prompting strategy that concatenates all retrieved documents in the prompt, we explore two alternative strategies that might help the model incorporate information from the retrieved documents more robustly. Both strategies involve two steps. (1) OwnKnow obtains a draft answer based on the model's own knowledge by prompting without retrieval in the first step, and then inserts this draft answer into the prompt for the RAG setup. (2) S2A, inspired by System 2 Attention (Weston and Sukhbaatar, 2023), first tries to identify the relevant retrieved documents, and then uses only the identified documents in the RAG setup. This decouples relevance estimation from answer extraction, allowing the answer-extraction step to focus on the most pertinent information. See the Jinja2 templates of our prompts in Appendix C.

4.3 RAG Parameters

Retrievers. Our retrieval system is built on top of Solr 9.³ We use two retrievers: a canonical sparse retriever based on BM25 (Robertson and Zaragoza, 2009), and a cosine-similarity-based dense retriever in which each document is embedded with bge-large-en-v1.5⁴ (Xiao et al., 2023). For any robustness metric defined in §3, we compute the results for both retrievers and take the average.

³ https://solr.apache.org/docs/9_0_0/index.html
⁴ http://huggingface.co/BAAI/bge-large-en-v1.5

[Figure 3: Performance of the retrievers, measured by the recall of gold answers within the concatenated retrieved documents. The gold answer is considered covered if any of its alternative forms exactly matches a substring in the concatenated retrieved documents.]

Sizes. We experiment with retrieval sizes of 5, 10, 25, 50, 75, and 100 documents. The retrieval size is capped at 100 documents, as most models have reached their maximum context lengths by that point. When the retrieved documents exceed the maximum context length of a model, we iteratively drop the lowest-ranked document (a sketch of this truncation rule follows).
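A sketch of the truncation rule just described: drop the lowest-ranked documents until the prompt fits. The tokenizer interface and token budget are assumptions for illustration only.

```python
def fit_to_context(docs: list, count_tokens, max_tokens: int) -> list:
    """docs are sorted from highest to lowest retriever rank. Iteratively drop
    the lowest-ranked document until the concatenated context fits."""
    kept = list(docs)
    while kept and count_tokens("\n\n".join(kept)) > max_tokens:
        kept.pop()  # remove the current lowest-ranked document
    return kept

# Usage with a toy whitespace "tokenizer" and a tiny budget (illustrative only):
docs = ["doc ranked 1 " * 50, "doc ranked 2 " * 50, "doc ranked 3 " * 50]
context = fit_to_context(docs, lambda s: len(s.split()), max_tokens=120)
print(len(context))  # keeps only the highest-ranked documents that fit
```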
low prior work and determine if the concatenated retrieved documents contain the gold answer if its substring is an exact match of any form of the gold answer (substring exact match) (Mallen et al., 2023). For reference, we also report the best model performance without RAG (Non-RAG Perf) to highlight the potential improvement that can be obtained with RAG. As shown in Figure 3, both retrievers provide sufficiently high-quality retrieval, ensuring that the findings of our experiments are based on valid setups. 1B3B8B70B32B104B12B123B4oo3msonn0.00.20.40.60.8 No-Degradation Rate 1B3B8B70B32B104B12B123B4oo3msonn Retrieval Size Robustness 1B3B8B70B32B104B12B123B4oo3msonn Retrieval Order Robustness Performance Robustness Metric Llama Command Mistral OpenAI ClaudeFigure 4: The three retrieval robustness metrics and task performance of experimented LLMs using vanilla prompting. Model families are indicated by icons, while the variants are indicated by model sizes or names (o3m: o3-mini; sonn: sonnet). 12B and 123B Mistral models respectively correspond to Mistral-Nemo and Mistral-Large. Task performance is the averaged QA accuracy across different retrieval sizes and orders. Models generally demonstrate strong retrieval robustness (achieving 80% scores). While larger model sizes lead to improved task performance, there exists no consistent trend across the retrieval robustness metrics. 5 Results 5.1 Overall Robustness We report the three retrieval robustness metrics for LLM systems using vanilla prompting (see the prompt rag_qa.j2 in Appendix C) in Figure 4. Besides robustness, task performance is shown in the same figure with bars with a different hatch style. Retrieval robustness is calculated following the definitions in §3, while task performance is the average score across all k,o, retrievers, and datasets. All models achieve higher than 80% re- trieval robustness across all metrics, with GPT- 4o and o3-mini surpassing 90%. Compared to prior studies that highlight the weak robustness of RAG systems under synthetic setups, such as using artificially created documents (Wu et al., 2024b), we show that LLMs demonstrate surprisingly good retrieval robustness in more realistic settings. This high retrieval robustness means we can safely apply RAG without overly stressing about whether RAG is better than non-RAG and about the decisions on retrieval size and order, which can potential sim- plify RAG systems. Nevertheless, the remaining 10% may pose challenges for real-world deploy- ment, particular for high-stake domains where com- prehensive reliability is required. 5.2 Relation between Robustness and Performance Although retrieval robustness metrics are derived from the sample-level task performance, retrievalrobustness does not always correlate with task per- formance. As shown in Figure 1 and Figure 4, task performance usually gets better when models get larger. In contrast, we note that, larger LLMs can have lower retrieval robustness than smaller LLMs . For example, in Figure 1, Llama-3-8B has higher robustness than 70B. If we “zoom in” to each of the three robustness metrics (Figure 4), we can see that this inverse scaling trend mainly comes from No-Degradation Rate (NDR). This is because larger models usually have richer parametric knowl- edge and answers more questions correctly without retrieval, which means RAG will have a higher baseline to beat and thus RAG is more likely to get worse than non-RAG. Therefore, in practice, when we apply RAG
to knowledge-rich LLMs (usually models of larger sizes), we need to be cautious about whether it will lead to performance degrada- tion compared to non-RAG. Here, we use one example to show how low ro- bustness reduces RAG efficacy . In Figure 5, solid lines illustrate the actual performance of Mistral- Large and o3-mini at different number of retrieved documents. Dashed lines show their hypothetical performance under an oracle setup. This oracle setup assumes perfect NDR , meaning the models consistently generate responses at least as good as those produced without retrieval. As the solid lines show, although Mistral-Large surpasses o3-mini without retrieval (0 retrieved documents), it yields worse performance than o3-mini and even its own non-RAG baseline when RAG is applied. Con- 0510 25 50 75 100 Retrieved Documents0.60.70.8Task Performance Mistral Large o3-mini Actual NDR Perfect NDRMistral Large o3-mini Actual NDR Perfect NDRFigure 5: Task performance of models using vanilla prompting under setups with actual no-degradation rate (NDR) and perfect NDR. Enhancing retrieval robustness could lead to a 12% absolute performance gain for both models. versely, if Mistral-Large has perfect NDR, it would outperform o3-mini in the RAG setup. The gap between the actual and oracle setups demonstrate that Mistral-Large fails to preserve its non-RAG performance for approximately 14% of the dataset samples, due to the insufficient retrieval robustness. Overall, retrieval robustness metrics complement standard task performance metrics and provide a new perspective of how well LLMs perform in RAG settings. 5.3 Effect of Retrieval Size For most of the models, the overall task perfor- mance is generally increasing as more retrieved documents are added (see Figure 13, 14, 15, and 16 in Appendix). This again demonstrates that in practice we do not have to overly concern about picking the optimal retrieval size. If budget allows, we can simply keep adding more documents till it reaches the max input length limit. Nevertheless, this does not indicate perfect re- trieval size robustness, as models keep trading off performance across individual samples , i.e., hurting performance on some examples while gain- ing performance on others. Similar to the perfect NDR setup, we investigate an oracle setup with per- fect RSR—choosing the best answer among those generated at current and all preceding values of ks as the final answer. Note that only answers produced by RAG (i.e., k >0) are considered in the perfect RSR setup to eliminate the effect of NDR. Although, in the normal setup (actual RSR), task performance is increasing from k= 10 to k= 75 , the gain is much more significant in the 10 25 50 75 100 Retrieved Documents0.60.70.8Task Performance Mistral Large GPT-4o Actual RSR Perfect RSRMistral Large GPT-4o Actual RSR Perfect RSRFigure 6: Task performance of models using vanilla prompting under setups with actual RSR and perfect RSR. 0.4 0.5 0.6 0.7 Task Performance0.800.850.900.95Robustness Llama 3.2 3B Llama 3.1 70B Command R Command R+Mistral Nemo Mistral Large GPT-4o o3-miniClaude 3.5 Sonnet Original Reversed Shuffled Figure 7: Geometric mean of no-degradation rate and re- trieval size robustness, grouped by the order of retrieved documents. hypothetical perfect RSR situation,
enlarging the gap between the two setups. This implies that mod- els are constantly sacrificing some samples while enhancing others with larger retrieval sizes. We think that the increasing number of retrieved docu- ments challenges models’ ability to identify helpful documents and handle longer inputs, and thus leads to the imperfect robustness on retrieval size. 5.4 Effect of Retrieval Order We break down retrieval robustness and task per- formance by the order of the retrieved documents (Figure 7). Overall, LLMs demonstrate good retrieval order robustness – the performance achieved with different orders of the retrieved documents is similar . This means, in practice, we do not have to overly concern about the order of documents. While GPT-4o and o3-mini demon- strate the strongest retrieval robustness and perfor- 510 25 50 75 100 Retrieved Documents0.60.70.8Task Performance Mistral Large GPT-4o OriginalReversed Shuffled Perfect RORMistral Large GPT-4o OriginalReversed Shuffled Perfect RORFigure 8: Task performance of models using vanilla prompting under setups with actual ROR for each order and perfect ROR. mance with normally ordered documents, all other models prefer the reversed order. This suggests thatplacing higher-ranked retrieved documents closer to the question generally optimizes RAG performance. Despite this high robustness, we underscore that performance variance across orders per- sists at the sample level . We establish an oracle setup for retrieval order robustness that selects the best response among responses generated with re- trieved contexts ordered differently ( perfect ROR ), as shown in Figure 8. Picking the best response for each example across different orders exhibits a large performance gain from each individual docu- ment order. This indicates that each example has a different best order, highlighting the need for continuing efforts to improve order robustness. 5.5 Effects of Prompting Strategies Using prompting strategies to decompose response generation has demonstrated effectiveness in han- dling complex tasks. Figure 9 shows that only the OwnKnow strategy (see the prompt ownknow.j2 in Appendix C) that incorporates answers gener- ated in the non-RAG setup can consistently en- hance retrieval robustness. We believe outputs given by the non-RAG setup serve as drafts and anchors, leading to reduced variance. It is also possible that OwnKnow benefits from its simi- larity to self-refinement that was shown to be an effective prompting technique (Yang et al., 2022; Madaan et al., 2023). Although selecting task- relevant context benefits robustness when synthetic noisy passages are injected into the input as shown by Weston and Sukhbaatar (2023), a similar S2A 0.4 0.5 0.6 0.7 Task Performance0.800.850.900.95RobustnessVanilla vs. OwnKnow 0.4 0.5 0.6 0.7 Task Performance0.800.850.900.95RobustnessVanilla vs. S2A Llama 3.2 3B Llama 3.1 70B Command R Command R+Mistral Nemo Mistral Large GPT-4o o3-miniClaude 3.5 Sonnet Vanilla OwnKnow S2AFigure 9: Geometry mean of the three retrieval robust- ness metrics and task performance of LLMs paired with different prompting strategies. The mean of task perfor- mance achieved with different retrieval sizes and orders are shown for each model. Models are differentiated with colors and prompting strategies are indicated by marker styles. The bar on the right of each marker indi- cates the maximum performance across retrieval sizes. prompting strategy
(see the prompt s2a.j2 in Ap- pendix C) fails to enhance retrieval robustness in our evaluations. We conjecture that, compared to synthetic noisy contexts, realistic retrievers provide models with harder negative contexts that are more challenging for the model to identify. As we look into the maximum task performance across retrieval sizes rather than the mean task per- formance, we observe that using OwnKnow might limit the maximum performance models can pos- sibly achieve, suggesting that the higher retrieval robustness of OwnKnow comes at a cost of RAG effectiveness. 6 Conclusions We introduce retrieval robustness metrics—no- degradation rate, retrieval size robustness, and re- trieval order robustness—to quantify how reliably LLMs handle queries via RAG. A realistic bench- mark of 1,500 questions is compiled, spanning three open-domain QA datasets, with augmented documents retrieved from Wikipedia using both sparse and dense retrievers. Our experiments with 11 LLMs from 5 families reveal that models gener- ally demonstrate strong robustness, achieving 80% scores on those metrics. Nonetheless, imperfect robustness result in sample-level trade-offs, often hurting the performance of some samples for the improvement on others, which forfeits RAG’s po- tential gains. While incorporating outputs gener- ated with the model’s own knowledge can enhance retrieval robustness, it also limits the best perfor- mance that can be achieved by RAG. We believe retrieval robustness provides a new perspective for evaluating and understanding LLMs’ RAG perfor- mance and we hope it can guide and inspire further research on building robust RAG systems. 7 Limitations Our study of retrieval robustness focuses on open- domain QA, though we recognize that RAG can also be applied to other tasks, such as fact check- ing and code completion. We choose open-domain QA, as it is arguably the most common use case of RAG and is being used in prior work on retrieval robustness with synthetic setups (Wu et al., 2024b; Chen et al., 2024). That being said, our proposed retrieval robustness metrics are specifically formu- lated such that they can be used for any task, as long as its evaluation metric returns a scalar value. 8 Ethical Considerations This benchmark comprises of multiple public mod- els and datasets. We performed an internal legal review for each model and dataset to ensure that they contained permissive licenses to be used for research purposes. We also do not pretrain or fine- tune any language models as part of this research and hence not anticipate the environmental impact to be significant. Additionally, before ingesting the Wikipedia data for retrieval, we ensured that all Personally Identifiable Information was removed from the dataset (By removing sections listed as “Personal Information”). However, we acknowledge that the models and datasets could still contain biases (such as race, gender, etc.) that could be reflected in the generated answers.Acknowledgements We thank Bang An and Mark Dredze for their help- ful discussions. References Anonymous. 2024. Certifiably robust RAG against re- trieval corruption attacks. In Submitted to The Thir- teenth International Conference on Learning Repre- sentations . Under review. Anthropic. July. 2024. claude-3-5-sonnet. Akari Asai, Zexuan Zhong, Danqi Chen, Pang Wei Koh, Luke Zettlemoyer,
Hannaneh Hajishirzi, and Wen-tau Yih. 2024. Reliable, adaptable, and at- tributable language models with retrieval. arXiv preprint arXiv:2403.03187 . Sebastian Borgeaud, Arthur Mensch, Jordan Hoff- mann, Trevor Cai, Eliza Rutherford, Katie Milli- can, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego De Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Mag- giore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In Proceedings of the 39th International Conference on Machine Learning , volume 162 of Proceedings of Machine Learning Research , pages 2206–2240. PMLR. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers) , pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics. Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2024. Benchmarking large language models in retrieval-augmented generation. In Proceedings of the AAAI Conference on Artificial Intelligence , vol- ume 38, pages 17754–17762. Cohere. Aug. 2024. command-r. Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023a. Enabling large language models to generate text with citations. In Proceedings of the 2023 Con- ference on Empirical Methods in Natural Language Processing , pages 6465–6488, Singapore. Associa- tion for Computational Linguistics. Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, and Haofen Wang. 2023b. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997 . Zhicheng Guo, Sijie Cheng, Yile Wang, Peng Li, and Yang Liu. 2023. Prompt-guided retrieval augmen- tation for non-knowledge-intensive tasks. In Find- ings of the Association for Computational Linguistics: ACL 2023 , pages 10896–10912, Toronto, Canada. As- sociation for Computational Linguistics. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: retrieval- augmented language model pre-training. In Proceed- ings of the 37th International Conference on Machine Learning , ICML’20. JMLR.org. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open do- main question answering. In Proceedings of the 16th Conference of the European Chapter of the Associ- ation for Computational Linguistics: Main Volume , pages 874–880, Online. Association for Computa- tional Linguistics. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi- Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2024. Atlas: few-shot learning with retrieval augmented language models. J. Mach. Learn. Res. , 24(1). Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. InProceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing , pages 2021–2031, Copenhagen, Denmark. Association for Computational Linguistics. Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In International Conference on Learning Representations . 
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee,
Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.

Jinhyuk Lee, Anthony Chen, Zhuyun Dai, Dheeru Dua, Devendra Singh Sachan, Michael Boratko, Yi Luan, Sébastien MR Arnold, Vincent Perot, Siddharth Dalmia, et al. 2024. Can long-context language models subsume retrieval, RAG, SQL, and more? arXiv preprint arXiv:2406.13121.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Red Hook, NY, USA. Curran Associates Inc.

Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Richard James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2024. RA-DIT: Retrieval-augmented dual instruction tuning. In The Twelfth International Conference on Learning Representations.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback. Preprint, arXiv:2303.17651.

Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822, Toronto, Canada. Association for Computational Linguistics.

Meta. July 2024. Introducing Llama 3.1.

Meta. September 2024. Llama 3.2.

Mistral.ai. Feb. 2024. mistral-large.

Mistral.ai. July 2024. mistral-nemo.
OpenAI. 2025. OpenAI o3-mini. https://openai.com/index/openai-o3-mini/.

Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. Transactions of the Association for Computational Linguistics, 11:1316–1331.

Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. Found. Trends Inf. Retr., 3(4):333–389.

Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, and Denny Zhou. 2023. Large language models can be easily distracted by irrelevant context. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.

Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Richard James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2024. REPLUG: Retrieval-augmented black-box language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8371–8384, Mexico City, Mexico. Association for Computational Linguistics.

Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, and Ming-Wei Chang. 2022. ASQA: Factoid questions meet long-form answers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8273–8288, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Nandan Thakur, Luiz Bonifacio, Crystina Zhang, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Boxing Chen, Mehdi Rezagholizadeh, and Jimmy Lin. 2024. "Knowing when you don't know": A multilingual relevance assessment dataset for robust retrieval-augmented generation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 12508–12526, Miami, Florida, USA. Association for Computational Linguistics.

Jason Weston and Sainbayar Sukhbaatar. 2023. System 2 attention (is something you might need too). Preprint, arXiv:2311.11829.
Shangyu Wu, Ying Xiong, Yufei Cui, Haolun Wu, Can Chen, Ye Yuan, Lianming Huang, Xue Liu, Tei-Wei Kuo, Nan Guan, et al. 2024a. Retrieval-augmented generation for natural language processing: A survey. arXiv preprint arXiv:2407.13193.

Siye Wu, Jian Xie, Jiangjie Chen, Tinghui Zhu, Kai Zhang, and Yanghua Xiao. 2024b. How easily do irrelevant inputs skew the responses of large language models? In First Conference on Language Modeling.

Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. 2023. C-pack: Packaged resources to advance general chinese embedding. Preprint, arXiv:2309.07597.

Kevin Yang, Yuandong Tian, Nanyun Peng, and Dan Klein. 2022. Re3: Generating longer stories with recursive reprompting and revision. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4393–4479, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics.

Ori Yoran, Tomer Wolfson, Ori Ram, and Jonathan Berant. 2024. Making retrieval-augmented language models robust to irrelevant context. In The Twelfth International Conference on Learning Representations.

Wei Zou, Runpeng Geng, Binghui Wang, and Jinyuan Jia. 2024. Poisonedrag: Knowledge corruption attacks to retrieval-augmented generation of large language models. Preprint, arXiv:2402.07867.

A Additional Results

A.1 Dataset Breakdown of Retrieval Robustness

We show the retrieval robustness metrics and average RAG performance in Figures 10, 11, and 12. Across all individual datasets, there is still no consistent improvement in retrieval robustness with increased model sizes.

A.2 Dataset Breakdown of RAG Performance across ks

We show open-domain QA performance at different numbers of retrieved documents in Figure 13, with dataset breakdowns in Figures 14, 15, and 16. Performance with each retriever and document order can be found in Figures 17, 18, and 19.

Compared to non-RAG, RAG almost always boosts the performance of open-source LLMs, with the exception of Command R+ on Natural Questions. We also observe a performance drop on Hotpot QA with the dense retriever when using Llama-3.1-70B.

B Inference Setup

Inference Parameters. Due to the computational cost and running time, we use greedy decoding and perform inference with each model under each setup once. During inference, models are allowed to generate at most 100 tokens, though they never exceed the limit.

Inference Infrastructure. We use vLLM for more efficient inference (Kwon et al., 2023), and our experiments are conducted on compute nodes with 8 H100 GPUs.

C Prompt Templates

The prompt templates (in jinja2 format) used in our experiments can be found at the end of the Appendix.

[Figure 10: The three retrieval robustness metrics (No-Degradation Rate, Retrieval Size Robustness, Retrieval Order Robustness) and task performance of experimented LLMs (Llama, Command, Mistral, GPT, Claude) using vanilla prompting on Natural Questions.]
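To make the inference setup in Appendix B concrete, the following is a minimal sketch (not the authors' released code) of querying one of the evaluated models through vLLM with the stated parameters: greedy decoding and a 100-token generation cap. The model name and prompt are placeholders.

from vllm import LLM, SamplingParams

# Greedy decoding (temperature=0.0) with generation capped at 100 tokens,
# matching the inference parameters described in Appendix B.
sampling_params = SamplingParams(temperature=0.0, max_tokens=100)

# tensor_parallel_size=8 reflects the 8-GPU compute nodes mentioned above;
# the model name stands in for any of the evaluated open-source LLMs.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=8)

prompts = ["Question: What city is Kowloon a part of? Answer:"]
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)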
[Figure 11: The three retrieval robustness metrics and task performance of experimented LLMs using vanilla prompting on Hotpot QA.]

[Figure 12: The three retrieval robustness metrics and task performance of experimented LLMs using vanilla prompting on ASQA.]

[Figure 13: Performance averaged across datasets, retrievers, and document orders, at 0, 5, 10, 25, 50, 75, and 100 retrieved documents, for Llama 3.2 1B/3B, Llama 3.1 8B/70B, Command R/R+, Mistral Nemo, Mistral Large, GPT-4o, o3-mini, and Claude 3.5 Sonnet.]

[Figure 14: Performance on Natural Questions, averaged across retrievers and document orders.]

[Figure 15: Performance on Hotpot QA, averaged across retrievers and document orders.]

[Figure 16: Performance on ASQA, averaged across retrievers and document orders.]

[Figure 17: Performance on Natural Questions with different retrievers and document orders. Panels: BM25 and dense retrieval, each with original, reversed, and shuffled document orders.]

[Figure 18: Performance on Hotpot QA with different retrievers and document orders.]
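For readers reconstructing the averaged curves in Figures 13–16, the sketch below shows one way to aggregate per-run scores: group by model and number of retrieved documents k, then average over datasets, retrievers, and document orders. The results table and its column names are hypothetical stand-ins, not an artifact released with the paper.

import pandas as pd

# Toy per-run results; real runs would cover all datasets, retrievers,
# document orders, and values of k evaluated in the paper.
results = pd.DataFrame({
    "model":     ["Llama 3.1 8B", "Llama 3.1 8B", "GPT-4o", "GPT-4o"],
    "dataset":   ["NQ", "HotpotQA", "NQ", "HotpotQA"],
    "retriever": ["bm25", "dense", "bm25", "dense"],
    "order":     ["original", "reversed", "shuffled", "original"],
    "k":         [5, 5, 5, 5],
    "score":     [0.52, 0.41, 0.71, 0.63],
})

# Averaging over everything except (model, k) gives the Figure 13-style curve;
# adding "dataset" to the grouping keys gives the per-dataset breakdowns.
curve = results.groupby(["model", "k"])["score"].mean().unstack("k")
print(curve)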
[Figure 19: Performance on ASQA with different retrievers and document orders.]

non_rag_qa.j2

Answer the following question in a concise manner without explanation. Indicate your answer with "Answer:" and only include the answer words or phrases. For example: "Question: What city is Kowloon a part of? Answer: Hong Kong."

{{ question }}

rag_qa.j2

Based on your own knowledge and retrieved contexts, answer the question in a concise manner without any explanation. Indicate your answer with "Answer:". For example: "Question: What city is Kowloon a part of? Answer: Hong Kong." If the answer is not specified or mentioned in the retrieved context, you must ignore the context and provide an answer by yourself. You must not refrain from answering the question.

Retrieved contexts:
{% for c in sources %}Context {{ loop.index }}
{{c}}
{% endfor %}
{{ question }}

ownknow.j2

Previously, you answered the question with your own knowledge. Now, based on your own knowledge and additional retrieved contexts, answer the question in a concise manner without any explanation. Indicate your answer with "Answer:". For example: "Question: What city is Kowloon a part of? Previous Answer: previous answer. Answer: Hong Kong." If the answer is not specified or mentioned in the retrieved context, you must ignore the context and provide an answer by yourself. You must not refrain from answering the question.

Retrieved contexts:
{% for c in sources %}Context {{ loop.index }}
{{c}}
{% endfor %}
{{ question }} Previous Answer: {{ non_rag_output }}.

s2a.j2

Identify the retrieved context(s) that would be good context for providing an unbiased answer to the question. Indicate your selected context(s) with "Selected Contexts:". For example: "Question: What city is Kowloon a part of? Selected Contexts: Context 2, Context 5." If there is no retrieved context, reply with "Selected Contexts: None".

Retrieved contexts:
{% for c in sources %}Context {{ loop.index }}
{{c}}
{% endfor %}
{{ question }}

answer_evaluation_nq_hotpot.j2

You will be given a question, a list of gold answers to this question, and a predicted answer. Any one answer or multiple answers from the gold answer list can correctly answer the question. Your task is to judge whether the predicted answer can answer the question correctly.
Note that the predicted answer does not have to exactly match one or multiple gold answers. It can answer the question correctly as long as its meaning entails one or multiple gold answers and there is no additional incorrect information.

Question:
{{ question }}

Gold Answers:
{{ gold_answer }}

Predicted Answer:
{{ pred_answer }}

Is the predicted answer a correct answer to the question?

IMPORTANT: Please strictly follow the following format in your response:
[Start answer]
<Your answer. Choose from: Yes, No>
[End answer]

answer_evaluation_asqa.j2

You will be given a question, gold answers to this question, and a predicted answer. Gold answers are composed of multiple groups. Your task is to judge whether the predicted answer covers each group of the gold answers. Within one gold answer group, there can be multiple alternative answers. As long as one of the alternative answers is covered, the group is covered. Note that "cover" means "entail"; in other words, you need to judge whether the predicted answer entails any answer within each group.

Question:
{{ question }}

Gold Answers:
{% for group in short_answer %}Group {{ loop.index }}: {{ group }}
{% endfor %}
Predicted Answer:
{{ pred_answer }}

Does the predicted answer cover each group of the gold answers?

IMPORTANT: Please strictly follow the following format in your response:
[Start answer]
{% for group in short_answer %}Group {{ loop.index }}: <Your answer. Choose from: Yes, No>
{% endfor %}[End answer]
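The sketch below shows how templates like these can be rendered with jinja2 and how a judge response in the [Start answer] ... [End answer] format can be parsed. The prompts/ directory layout and the toy inputs are assumptions, not part of the released materials.

import re
from jinja2 import Environment, FileSystemLoader

# Assume the .j2 files above live in a local "prompts" directory.
env = Environment(loader=FileSystemLoader("prompts"))

# Render rag_qa.j2 with a question and its retrieved contexts.
prompt = env.get_template("rag_qa.j2").render(
    question="Question: What city is Kowloon a part of?",
    sources=["Kowloon is an urban area in Hong Kong ..."],
)

def parse_judgment(response: str) -> str:
    """Extract the judge verdict between [Start answer] and [End answer]."""
    match = re.search(r"\[Start answer\](.*?)\[End answer\]", response, re.DOTALL)
    return match.group(1).strip() if match else ""

assert parse_judgment("[Start answer]\nYes\n[End answer]") == "Yes"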
arXiv:2505.21873v1 [q-bio.BM] 28 May 2025

HelixDesign-Binder: A Scalable Production-Grade Platform for Binder Design Built on HelixFold3

Jie Gao, Jun Li, Jing Hu, Shanzhuo Zhang, Kunrui Zhu, Yueyang Huang, Xiaonan Zhang, Xiaomin Fang*
PaddleHelix team, Baidu Inc.
*Corresponding author. Email: fangxiaomin01@baidu.com

ABSTRACT

Protein binder design is central to therapeutics, diagnostics, and synthetic biology, yet practical deployment remains challenging due to fragmented workflows, high computational costs, and complex tool integration. We present HelixDesign-Binder, a production-grade, high-throughput platform built on HelixFold3 that automates the full binder design pipeline, from backbone generation and sequence design to structural evaluation and multi-dimensional scoring. By unifying these stages into a scalable and user-friendly system, HelixDesign-Binder enables efficient exploration of binder candidates with favorable structural, energetic, and physicochemical properties. The platform leverages Baidu AI Cloud's high-performance infrastructure to support large-scale design and incorporates advanced scoring metrics, including ipTM, predicted binding free energy, and interface hydrophobicity. Benchmarking across six protein targets demonstrates that HelixDesign-Binder reliably produces diverse and high-quality binders, some of which match or exceed validated designs in predicted binding affinity. HelixDesign-Binder is accessible via an interactive web interface in the PaddleHelix platform, supporting both academic research and industrial applications in antibody and protein binder development.

Keywords: Binder design · High-throughput design and screening · HelixFold3 · Production-grade

1 Introduction

Protein binder design is a foundational technique in therapeutic development, molecular diagnostics, and synthetic biology. By engineering proteins that selectively bind to target biomolecules, researchers can modulate biological pathways, facilitate targeted delivery, and construct molecular sensors. Structure-based binder design aims to improve binding specificity and stability by generating candidate sequences that adopt desirable conformations and interact favorably with a given target.

Recent breakthroughs in structure prediction, most notably AlphaFold-Multimer [1], AlphaFold3 [2], and HelixFold3 [3], have substantially advanced our ability to model protein–protein complexes with high accuracy. Recent work such as AlphaProteo [4] and BindCraft [5] uses these structure prediction models for in silico screening of designed sequences for favorable binding conformations, making it feasible to incorporate structural evaluation early in the binder design process.

In a typical pipeline, the 3D structure of a target protein is fixed as the design scaffold. Tools like RFDiffusion [6,7] are first used to generate binder backbone structures that are spatially compatible with the target surface. Then, inverse folding models such as ESM-IF [8] and ProteinMPNN [9] are applied to design sequences expected to fold into these backbones and maintain favorable interactions with the target. Structural prediction tools assess whether the designed binders adopt realistic conformations and correctly engage the target interface. Downstream filters, such as clash detection, binding pose analysis, and interaction scoring with FoldX [10] or PRODIGY [11], help prioritize high-quality candidates for experimental validation. However, deploying such workflows in practice is hindered by several factors.
First, integrating diverse tools, each with its own interface, input requirements, and parameter conventions, leads to fragmented pipelines that are challenging to configure and maintain, particularly for researchers without extensive programming expertise. Second, tuning parameters across multiple modules is non-trivial due to the lack of standardized guidelines. Finally, high-accuracy structural prediction
demands substantial computational resources, and limited sampling can constrain sequence diversity, reducing the likelihood of identifying high-affinity binders. Empirical evidence [12, 8, 9, 13] indicates that broader sequence exploration significantly enhances the probability of discovering stable, high-affinity candidates.

To address these challenges, we present HelixDesign-Binder, a scalable, production-grade binder design platform built upon HelixFold3. Unlike traditional workflows, where backbone generation, sequence design, and structural evaluation are typically handled by disparate modules requiring extensive manual configuration and resource coordination, HelixDesign-Binder offers a fully integrated and automated pipeline. It unifies backbone construction, structure-based sequence design, structure prediction, and multi-dimensional scoring into a cohesive framework that scales efficiently for large-scale binder design. Engineered to support both academic research and industrial deployment, the platform enables comprehensive evaluation of candidate binders across several critical dimensions, including sequence plausibility, structural quality, and physicochemical compatibility. This tight integration significantly reduces system complexity and operational overhead, while enhancing the overall efficiency, scalability, and robustness of the design process. Extensive benchmarking across a wide range of protein targets demonstrates that HelixDesign-Binder enables systematic and high-fidelity exploration of the sequence space, consistently yielding structurally plausible and functionally promising binder candidates. The platform is accessible via the PaddleHelix platform, allowing researchers to easily initiate binder design tasks and interact with results through a user-friendly interface.

Key features of HelixDesign-Binder are summarized as follows:

• Precise Structural Evaluation via HelixFold3: Powered by HelixFold3, the platform delivers atomic-level predictions of protein–binder interactions with accuracy on par with AlphaFold3, surpassing it in some cases. Incorporating prior knowledge, such as reference structures and interaction constraints, further improves prediction precision, enabling early identification of promising binders and accelerating design.

• High-Throughput Design and Evaluation: HelixDesign-Binder leverages the computational power of Baidu AI Cloud's high-performance computing (HPC) platform to enable rapid design and evaluation of thousands of binder candidates. This capability supports comprehensive exploration of the sequence space, thereby increasing the probability of identifying candidates with favorable binding affinities and interface qualities.

• Multi-Dimensional Scoring and Filtering: HelixDesign-Binder integrates multiple evaluation dimensions, including (i) sequence plausibility based on learned models, (ii) structural metrics such as interface RMSD and pLDDT confidence, and (iii) physicochemical properties like charge complementarity and interface hydrophobicity. This comprehensive analysis allows for more nuanced prioritization, identifying candidates with both strong binding affinity and structural stability.

• Integrated and User-Friendly Platform: The platform integrates multiple design and analysis tools into a streamlined workflow, eliminating the need for users to assemble components individually. With minimal configuration required, researchers can complete full design cycles effortlessly.
Via the PaddleHelix platform, users can easily launch binder design tasks and interact with predicted structures through an intuitive online interface.

2 Method

HelixDesign-Binder (Figure 1) is a scalable, production-grade platform designed for the efficient generation and evaluation of protein binder candidates targeting a specified sequence. Existing design approaches [6, 12, 14] typically require the generation of thousands to millions of candidate sequences. Even methods employing iterative sampling strategies [5] still necessitate hundreds to thousands of samples. These candidates must subsequently undergo structure prediction for downstream filtering, resulting in considerable computational overhead. To address this challenge, HelixDesign-Binder
is optimized for high-throughput workflows in high-performance computing (HPC) environments, enabling the rapid screening of thousands of candidate binders within a single iteration. It accommodates flexible design specifications, including both linear peptides and small proteins, making it well-suited for applications requiring high binding affinity and precise interface geometry. As shown in Figure 4, users can specify the desired sequence length range of the binder according to their design objectives. Based on these inputs, HelixDesign-Binder systematically constructs and ranks candidate sequences with strong potential for high-affinity recognition of the target, thereby enabling an efficient and focused search across a large and diverse sequence space. Figure 5 presents the design results generated by HelixDesign-Binder, along with the multi-dimensional interaction analysis output, which facilitates downstream interpretation and application by users. The HelixDesign-Binder workflow comprises the following major components:

[Figure 1: HelixDesign-Binder for structure-based binder design. HelixDesign-Binder begins with a user-defined target sequence within a specified design length range. Candidate backbone conformations are generated and filtered by a backbone generation module. A structure-based sequence design module then proposes optimized amino acid sequences for the selected backbones. These candidate sequences undergo high-throughput structure prediction using HelixFold3, optionally guided by structural or residue-level contact constraints. The predicted models are subsequently evaluated through multi-dimensional interaction analysis (e.g., binding energy, interface features). Final binder candidates are prioritized via a ranking module that integrates structural and energetic metrics.]

• Backbone Generation: Initial structural backbones of the binder are generated based on the user-specified design length range and target sequence. A backbone filter module selects diverse, plausible scaffold candidates suitable for interface formation.

• Structure-Based Sequence Design: Given the selected backbones, this module employs inverse folding techniques to design amino acid sequences compatible with the backbone geometry and the target interface.

• High-throughput Protein Folding (HelixFold3): Designed sequences are folded in complex with the target using HelixFold3, which supports the incorporation of external structural constraints. This step evaluates structural plausibility and predicts the full complex conformation.

• Multi-dimensional Interaction Analysis: Predicted complex structures are assessed across several dimensions, including sequence-based fitness, structure-based interface metrics, and physicochemical interaction scores. These scores are used to rank and prioritize candidate designs.

2.1 Backbone Generation

In the backbone generation stage, we generate a diverse ensemble of candidate binder scaffolds based on the user-provided target and specified binder length range.
Rather than constructing these backbones de novo, we leverage structural data from the Protein Data Bank (PDB) [15] by identifying complexes that are structurally similar to the input target. From these complexes, we extract interface fragments that meet the spatial and length constraints defined by the user, treating them as potential binder backbones. To ensure their structural quality and relevance, we evaluate these candidates across multiple dimensions, including sequence and geometric similarity to the target, structural plausibility as assessed by HelixFold3, and overall conformational diversity. This approach enables broad yet focused exploration of the structural landscape, providing reliable starting points for downstream sequence design.
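The sketch below illustrates, in simplified form, the length and diversity filtering just described. The Fragment type, the crude identity measure, and the 0.7 identity cutoff are illustrative assumptions; the platform's actual filter additionally scores HelixFold3 plausibility and geometric similarity to the target.

from dataclasses import dataclass

@dataclass
class Fragment:
    pdb_id: str
    sequence: str

def seq_identity(a: str, b: str) -> float:
    """Crude positional identity over the shorter length (no alignment)."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / n

def select_backbones(fragments, min_len, max_len, max_identity=0.7):
    kept = []
    for frag in fragments:
        if not (min_len <= len(frag.sequence) <= max_len):
            continue  # outside the user-specified design length range
        # Keep the fragment only if it is dissimilar to everything already
        # kept, preserving diversity in the scaffold ensemble.
        if all(seq_identity(frag.sequence, k.sequence) < max_identity for k in kept):
            kept.append(frag)
    return kept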
2.2 Structure-Based Sequence Design

At this stage, binder amino acid sequences are generated based on the target proteins and binder backbones produced in the previous step. We utilize the ESM-IF1 inverse folding model [8], which generates sequences conditioned on backbone structures by estimating the probability distribution of residues at each position, informed by both local and global structural features. Because binder design involves at least two protein chains (the target and the binder), the original ESM-IF1 model, which supports sequence generation only for single chains, was modified in its inference procedure to accommodate multi-chain protein complexes.

To ensure broad structural coverage, sequences are generated approximately evenly across all selected backbones. The design module uses backbone constraints to optimize residue identities within each scaffold. For each designed sequence, a fitness score is computed from the inverse folding model outputs to quantify its compatibility with the corresponding backbone (see the sketch below). These scores guide an initial filtering step that retains high-quality sequences while maintaining sequence diversity. For each target, at least 1,000 sequences passing this filter advance to the next stage, enabling thorough exploration of the sequence space. In future applications, users will have the flexibility to specify the number of sequences to be designed according to their experimental needs. Generating such a large number of candidate sequences is essential because binder design inherently involves a vast and complex sequence landscape; consistent with prior studies [12, 8, 9, 13], we produce thousands to millions of initial sequences before structural filtering.
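As a rough illustration of the fitness score used for pre-filtering, the sketch below averages the per-position log-probabilities that an inverse-folding model such as ESM-IF1 assigns to a designed sequence given its backbone. The log_probs array is a random stand-in for real model outputs; the platform's exact scoring is not published here.

import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def fitness_score(sequence: str, log_probs: np.ndarray) -> float:
    """Mean log P(residue | backbone); log_probs has shape (len(sequence), 20)."""
    idx = [AA.index(res) for res in sequence]
    return float(np.mean(log_probs[np.arange(len(sequence)), idx]))

# Toy example with random per-position distributions in place of model output.
rng = np.random.default_rng(0)
fake_log_probs = np.log(rng.dirichlet(np.ones(20), size=8))
print(fitness_score("ACDEFGHK", fake_log_probs))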
2.3 High-throughput Protein Folding (HelixFold3)

Designed sequences are modeled in complex with the target using HelixFold3 [3], our independently developed biomolecular structure prediction model. HelixFold3 achieves accuracy comparable to AlphaFold3 and surpasses it in certain specialized scenarios. It produces full atomic-resolution 3D structures of predicted complexes along with quantitative confidence metrics, such as the interface predicted TM-score (ipTM), which facilitate reliable assessment of interface quality. In addition, HelixFold3 supports the incorporation of reference structures and interaction constraints, allowing the integration of prior knowledge to further enhance prediction accuracy and reliability.

To meet the computational demands of high-throughput binder screening, we leverage the high-performance computing (HPC) platform provided by Baidu AI Cloud. We have conducted extensive performance optimizations targeting critical stages such as multiple sequence alignment (MSA) search and structure prediction. By fully exploiting the parallel computing capabilities of the HPC platform and implementing optimization strategies tailored to the varying resource requirements and runtime characteristics of different modules, we achieve efficient utilization of computational resources and rapid task execution. These optimizations significantly improve overall throughput, enabling the rapid exploration and screening of large sequence spaces.

2.4 Multi-dimensional Interaction Analysis

To identify high-quality binder candidates, HelixDesign-Binder employs a comprehensive multi-dimensional evaluation framework comprising sequence-based, structure-based, and energy-based assessment methodologies. Sequence-based evaluation utilizes protein language models trained on extensive natural sequence repositories to quantify the statistical likelihood and evolutionary fitness of each design. This assessment incorporates fitness scores derived from ESM-IF1 [8] calculations across the complete sequence length, providing estimates of biological plausibility based on learned evolutionary patterns. Structure-based evaluation leverages HelixFold3-derived structural metrics
to assess the physical feasibility and binding specificity of predicted protein–protein interfaces. Key metrics include the interface predicted TM-score (ipTM) for structural similarity assessment, inter-chain predicted aligned error (PAE) for confidence evaluation of inter-molecular interactions, and geometric contact maps for spatial relationship validation. Physicochemical evaluation employs the computational tool PRODIGY [11] to estimate critical physicochemical properties governing binding interactions. These scoring metrics encompass predicted binding affinity, percentage of apolar non-interface surface (NIS) residues, and comprehensive contact interface statistics, including the total number of intermolecular contacts and the frequency of charged–apolar contact pairs.

Each candidate is filtered and ranked based on these sequence, structure, and physicochemical metrics. Final selection prioritizes designs that demonstrate strong binding potential while maintaining high structural fidelity to the intended interaction interface.

3 Results

3.1 Binder Design

To rigorously evaluate the effectiveness of HelixDesign-Binder, we selected six previously characterized protein targets from [14] as a validation benchmark. Our evaluation focused on two complementary metrics: the interface predicted TM-score (ipTM) and FoldX-predicted binding free energy. These metrics have been widely used and validated in prior studies as proxies for binding efficacy [16, 17, 18, 19]. Specifically, ipTM reflects the model's structural confidence at the predicted binding interface (with higher values indicating greater reliability), while FoldX estimates the binding free energy of the protein complex (with lower values indicating stronger predicted binding affinity). Together, these two measures provide a robust framework for assessing both the structural plausibility and energetic favorability of designed binders.

The six targets are Interleukin-7 Receptor-α (IL-7Rα) [20], Virulence factor B8 (VirB8) [21], Tropomyosin Receptor Kinase A (TrkA) [22], Insulin Receptor (InsR) [23], Fibroblast Growth Factor Receptor 2 (FGFR2) [24], and Platelet-Derived Growth Factor Receptor (PDGFR) [25]. Across all six, our computational approach demonstrated clear performance advantages. As positive controls, we used experimentally validated binders with confirmed binding affinity from the original study, hereafter referred to as validated binders. For targets with more than ten validated binders, we randomly selected a subset of ten for comparison.

[Figure 2: Results of HelixDesign-Binder on six protein targets. Panels: (a) interface predicted TM-score (ipTM); (b) predicted binding free energy; (c) sequence identity analysis of designed binders; (d) improvement in binding free energy as a function of sampling size; (e) structural visualization of designed and validated binders, with per-target predicted ΔG_bind values.]
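To make the filter-and-rank step of Sec. 2.4 concrete, here is an illustrative two-metric sketch over quantities reported in this section: candidates are filtered by ipTM and ranked by predicted binding free energy, with apolar contacts as a tie-breaker. The 0.8 threshold mirrors the ipTM level reported for the designed binders but, like the simple lexicographic ranking itself, is an assumption rather than the platform's actual rule.

from typing import NamedTuple

class Candidate(NamedTuple):
    seq: str
    iptm: float       # interface predicted TM-score (higher = more confident)
    dg: float         # predicted binding free energy, kcal/mol (lower = better)
    apolar_contacts: int

def rank_candidates(cands, min_iptm=0.8):
    passed = [c for c in cands if c.iptm >= min_iptm]
    # Sort by predicted binding free energy, breaking ties with apolar contacts.
    return sorted(passed, key=lambda c: (c.dg, -c.apolar_contacts))

designs = [
    Candidate("MKT...", 0.86, -25.4, 31),
    Candidate("GLD...", 0.91, -28.9, 27),
    Candidate("AVN...", 0.74, -30.1, 40),  # filtered out: ipTM below threshold
]
for c in rank_candidates(designs):
    print(c.seq, c.iptm, c.dg)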
[Figure 3: Correlation between predicted binding free energies (kcal/mol) and ipTM score for six protein targets: (a) IL-7Rα, (b) VirB8, (c) TrkA, (d) InsR, (e) FGFR2, and (f) PDGFR. Each point represents a binder, colored by the number of apolar–apolar contacts. Dots indicate designed binders; stars represent the experimentally validated binders. Pearson correlation coefficients reveal varying relationships across targets, with values ranging from -0.353 (InsR) to -0.692 (FGFR2).]
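The per-target analysis in Figure 3 reduces to a standard Pearson coefficient between ipTM and predicted binding free energy over a set of binders; a minimal sketch with toy values follows (the arrays are illustrative, not the paper's data).

import numpy as np
from scipy.stats import pearsonr

iptm = np.array([0.82, 0.88, 0.91, 0.79, 0.85])
dg = np.array([-24.1, -27.5, -29.0, -22.3, -26.2])  # kcal/mol

# A negative r means higher interface confidence coincides with lower
# (more favorable) predicted binding free energy.
r, p_value = pearsonr(iptm, dg)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")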
As shown in Figure 2(a) and (b), the designed binders consistently achieved ipTM scores above 0.8, indicating that the structure prediction model considered these binder–target complexes to be highly reliable. Previous studies have demonstrated that ipTM scores are positively correlated with binding capability, supporting the relevance of this metric for evaluating binder quality [26, 12, 27]. For four targets (IL-7Rα, VirB8, FGFR2, and PDGFR), the designed binders achieved ipTM scores comparable to or higher than those of validated binders. For the other two targets (TrkA and InsR), the designed binders exhibited slightly lower ipTM scores than their validated counterparts.

From a thermodynamic perspective, FoldX analysis indicated that the designed binders exhibited more favorable (i.e., lower) predicted binding free energies than the validated binders across all targets except TrkA. Notably, binders designed for VirB8 demonstrated particularly favorable energetics, with mean Gibbs free energy values approaching -27.3 kcal/mol, substantially lower than those of the corresponding validated binders (-12.4 kcal/mol). While ipTM and FoldX-predicted binding energy correlate with binding activity, they are not definitive predictors: some validated binders score poorly on one or both axes. Nonetheless, binders with favorable values for both metrics show a markedly higher likelihood of being active, making them valuable for candidate prioritization.

3.2 Design Space Exploration and Functional Performance

The HelixDesign-Binder workflow demonstrates strong capabilities in generating novel sequences while maintaining diversity within each design set, two factors essential for effective binder discovery. In terms of novelty, the top panel of Figure 2(c) shows that the designed binders exhibit extremely low sequence identity (≤0.1) to experimentally validated binders from prior studies [14], indicating that the workflow actively explores unexplored regions of the sequence space rather than replicating known solutions. In terms of diversity, the bottom panel reveals broad pairwise sequence identity distributions within each target-specific design set, suggesting substantial intra-group variability. For all targets, a significant fraction of binder pairs share less than 20% sequence identity, reflecting the framework's capacity to generate structurally viable yet sequence-diverse candidates.

We visualized representative structures from the top-ranking region alongside experimentally validated binders and reported their predicted binding free energies (Figure 2e). The designed binders exhibit diverse secondary structure topologies, including alpha-helical, beta-sheet, and mixed conformations, particularly for targets such as TrkA and FGFR2, highlighting structural as well as sequence-level diversity. For three targets where the binding sites of designed and validated binders are spatially aligned, the predicted binding energies of the designed binders outperform those of the validated counterparts, suggesting that HelixDesign-Binder can generate candidates with potentially improved binding properties.

3.3 Computational Scale Enables Binder Optimization

Most successful protein design studies to date rely on massive sampling strategies, often generating thousands to millions of candidate sequences before applying downstream filtering [12, 8, 9, 13].
However, this approach poses a significant practical barrier: many prospective users of these design tools lack access to such computational scale and can afford to generate only a small number of designs, often resulting in suboptimal performance. To address this limitation, HelixDesign-Binder is built to fully leverage high-performance computing (HPC) resources, enabling efficient large-scale sampling across diverse protein targets. This
allows for comprehensive exploration of the sequence space, improving the chances of identifying candidates with strong biophysical and structural properties even for challenging targets.

This extensive sampling is not only computationally feasible within our framework; it also yields clear performance benefits. As shown in Figure 2(d), we observe a consistent scaling relationship between the number of sampled sequences and predicted binding quality. Specifically, as sample size increases logarithmically, the average predicted binding free energy improves across all six targets, indicating enhanced thermodynamic favorability. At the same time, the mean ipTM scores of the top 100 binders also rise, reflecting greater structural confidence and interface consistency. Together, these results reveal a quantitative scaling law: larger sampling directly contributes to the discovery of more optimized binders, both in terms of binding energetics and structural reliability.

3.4 Multi-dimensional Interaction Analysis

To comprehensively evaluate binder quality, we adopt a multi-dimensional assessment framework, as different metrics capture distinct yet complementary aspects of binding performance. Specifically, we selected: (i) the interface predicted TM-score (ipTM) to assess structural confidence at the binding interface; (ii) predicted binding free energy calculated using FoldX (note that the public online version of HelixDesign-Binder uses PRODIGY for binding free energy estimation due to its higher efficiency in web deployment); and (iii) the number of hydrophobic contacts to quantify interface compactness and physicochemical complementarity [28]. Together, these metrics reflect geometric plausibility, energetic favorability, and chemical compatibility, providing a comprehensive characterization of designed binders.

Across six protein targets, we observed a strong inverse correlation between ipTM and predicted binding free energy (Pearson r ranging from -0.35 to -0.69), indicating that designs with higher structural confidence at the interface tend to exhibit more favorable energetics. The imperfect correlation highlights their complementarity: ipTM emphasizes geometric consistency, while the predicted binding free energy evaluates atomistic physical interactions. Hydrophobic contacts further refine this assessment by identifying well-packed interfaces that may not be fully captured by geometry or energy alone. In the two-dimensional space defined by ipTM and binding free energy, designed binders are broadly distributed, with a notable subset enriched in hydrophobic contacts (indicated in blue) clustering in the upper-right quadrant, where both structural and energetic metrics are optimized. This region overlaps with experimentally validated binders (e.g., FGFR2 and IL-7Rα), suggesting that HelixDesign-Binder reliably identifies designs with favorable and balanced interaction profiles.

4 Conclusion

While the fundamental workflow for protein design, encompassing backbone generation, sequence design, and filtering, has become well established, its practical application remains resource-intensive and complex for many users due to the multitude of tools, parameters, and high computational requirements involved. To lower these barriers and broaden accessibility, we developed HelixDesign-Binder, a scalable, production-grade platform for protein binder design that features an intuitive, user-friendly visual interface. The platform's effectiveness has been preliminarily validated across six diverse protein targets.
We invite researchers from both academia and industry to explore and evaluate HelixDesign-Binder via the PaddleHelix platform. We encourage feedback to identify challenges and guide ongoing improvements. Furthermore, we welcome collaboration opportunities to collectively advance the field of protein binder design. For inquiries and partnership discussions, please contact us at baidubio_cooperate@baidu.com.
References

[1] Richard Evans, Michael O'Neill, Alexander Pritzel, Natasha Antropova, Andrew Senior, Tim Green, Augustin Žídek, Russ Bates, Sam Blackwell, Jason Yim, et al. Protein complex prediction with AlphaFold-Multimer. bioRxiv, pages 2021–10, 2021.

[2] Josh Abramson, Jonas Adler, Jack Dunger, Richard Evans, Tim Green, Alexander Pritzel, Olaf Ronneberger, Lindsay Willmore, Andrew J Ballard, Joshua Bambrick, et al. Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature, 630(8016):493–500, 2024.

[3] Lihang Liu, Shanzhuo Zhang, Yang Xue, Xianbin Ye, Kunrui Zhu, Yuxin Li, Yang Liu, Jie Gao, Wenlai Zhao, Hongkun Yu, et al. Technical report of HelixFold3 for biomolecular structure prediction. arXiv preprint arXiv:2408.16975, 2024.

[4] Vinicius Zambaldi, David La, Alexander E Chu, Harshnira Patani, Amy E Danson, Tristan OC Kwan, Thomas Frerix, Rosalia G Schneider, David Saxton, Ashok Thillaisundaram, et al. De novo design of high-affinity protein binders with AlphaProteo. arXiv preprint arXiv:2409.08022, 2024.

[5] Martin Pacesa, Lennart Nickel, Christian Schellhaas, Joseph Schmidt, Ekaterina Pyatova, Lucas Kissling, Patrick Barendse, Jagrity Choudhury, Srajan Kapoor, Ana Alcaraz-Serna, et al. BindCraft: one-shot design of functional protein binders. bioRxiv, pages 2024–09, 2024.

[6] Joseph L Watson, David Juergens, Nathaniel R Bennett, Brian L Trippe, Jason Yim, Helen E Eisenach, Woody Ahern, Andrew J Borst, Robert J Ragotte, Lukas F Milles, et al. De novo design of protein structure and function with RFdiffusion. Nature, 620(7976):1089–1100, 2023.

[7] Woody Ahern, Jason Yim, Doug Tischer, Saman Salike, Seth Woodbury, Donghyo Kim, Indrek Kalvet, Yakov Kipnis, Brian Coventry, Han Altae-Tran, et al. Atom level enzyme active site scaffolding using RFdiffusion2. bioRxiv, pages 2025–04, 2025.

[8] Chloe Hsu, Robert Verkuil, Jason Liu, Zeming Lin, Brian Hie, Tom Sercu, Adam Lerer, and Alexander Rives. Learning inverse folding from millions of predicted structures. In International Conference on Machine Learning, pages 8946–8970. PMLR, 2022.

[9] Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J Ragotte, Lukas F Milles, Basile IM Wicky, Alexis Courbet, Rob J de Haas, Neville Bethel, et al. Robust deep learning–based protein sequence design using ProteinMPNN. Science, 378(6615):49–56, 2022.

[10] Joost Schymkowitz, Jesper Borg, Francois Stricher, Robby Nys, Frederic Rousseau, and Luis Serrano. The FoldX web server: an online force field. Nucleic Acids Research, 33(suppl_2):W382–W388, 2005.

[11] Li C Xue, João PGLM Rodrigues, Panagiotis L Kastritis, Alexandre MJJ Bonvin, and Anna Vangone. PRODIGY: a web server for predicting the binding affinity of protein–protein complexes. Bioinformatics, 32(23):3676–3678, 2016.

[12] Nathaniel R Bennett, Brian Coventry, Inna Goreshnik, Buwei Huang, Aza Allen, Dionne Vafeados, Ying Po Peng, Justas Dauparas, Minkyung Baek, Lance Stewart, et al. Improving de novo protein binder design with deep learning. Nature Communications, 14(1):2625, 2023.

[13] Thomas Hayes, Roshan Rao, Halil Akin, Nicholas J Sofroniew, Deniz Oktay, Zeming Lin, Robert Verkuil, Vincent Q Tran, Jonathan Deaton, Marius Wiggert, et al. Simulating 500 million years of evolution with a language model. Science, page eads0018, 2025.

[14] Longxing Cao, Brian Coventry, Inna Goreshnik, Buwei Huang, William Sheffler, Joon Sung Park, Kevin M Jude, Iva Marković, Rameshwar U Kadam, Koen HG Verschueren, et al.
Design of protein-binding proteins from the target structure alone. Nature, 605(7910):551–560, 2022.
[15] Stephen K Burley, Helen M Berman, Gerard J Kleywegt, John L Markley, Haruki Nakamura, and Sameer Velankar. Protein Data Bank (PDB): the single global macromolecular structure archive. Protein Crystallography: Methods and Protocols, pages 627–641, 2017.

[16] JunJie Wee and Guo-Wei Wei. Evaluation of AlphaFold 3's protein–protein complexes for predicting binding free energy changes upon mutation. Journal of Chemical Information and Modeling, 64(16):6676–6683, 2024.

[17] Dharmeshkumar Patel, Jagdish Suresh Patel, and F Marty Ytreberg. Implementing and assessing an alchemical method for calculating protein–protein binding free energy. Journal of Chemical Theory and Computation, 17(4):2457–2464, 2021.

[18] Tawny R Gonzalez, Kyle P Martin, Jonathan E Barnes, Jagdish Suresh Patel, and F Marty Ytreberg. Assessment of software methods for estimating protein-protein relative binding affinities. PLoS One, 15(12):e0240573, 2020.

[19] Sarah Sirin, James R Apgar, Eric M Bennett, and Amy E Keating. AB-Bind: antibody binding mutational database for computational affinity predictions. Protein Science, 25(2):393–409, 2016.

[20] Craig A McElroy, Julie A Dohm, and Scott TR Walsh. Structural and biophysical studies of the human IL-7/IL-7Rα complex. Structure, 17(1):54–65, 2009.

[21] Joseph J Gillespie, Isabelle QH Phan, Holger Scheib, Sandhya Subramanian, Thomas E Edwards, Stephanie S Lehman, Hanna Piitulainen, M Sayeedur Rahman, Kristen E Rennoll-Bankert, Bart L Staker, et al. Structural insight into how bacteria prevent interference between multiple divergent type IV secretion systems. mBio, 6(6):10–1128, 2015.

[22] Christian Wiesmann, Mark H Ultsch, Steven H Bass, and Abraham M de Vos. Crystal structure of nerve growth factor in complex with the ligand-binding domain of the TrkA receptor. Nature, 401(6749):184–188, 1999.

[23] Tristan I Croll, Brian J Smith, Mai B Margetts, Jonathan Whittaker, Michael A Weiss, Colin W Ward, and Michael C Lawrence. Higher-resolution structure of the human insulin receptor ectodomain: multi-modal inclusion of the insert domain. Structure, 24(3):469–476, 2016.

[24] Alexander N Plotnikov, Stevan R Hubbard, Joseph Schlessinger, and Moosa Mohammadi. Crystal structures of two FGF–FGFR complexes reveal the determinants of ligand–receptor specificity. Cell, 101(4):413–424, 2000.

[25] Ann Hye-Ryong Shim, Heli Liu, Pamela J Focia, Xiaoyan Chen, P Charles Lin, and Xiaolin He. Structures of a platelet-derived growth factor/propeptide complex and a platelet-derived growth factor/receptor complex. Proceedings of the National Academy of Sciences, 107(25):11307–11312, 2010.

[26] Patrick Bryant, Gabriele Pozzati, and Arne Elofsson. Improved prediction of protein-protein interactions using AlphaFold2. Nature Communications, 13(1):1265, 2022.

[27] Ah-Ram Kim, Yanhui Hu, Aram Comjean, Jonathan Rodiger, Stephanie E Mohr, and Norbert Perrimon. Enhanced protein-protein interaction discovery via AlphaFold-Multimer. bioRxiv, pages 2024–02, 2024.

[28] Gerhard Klebe. Protein modeling and structure-based drug design. In Drug Design: From Structure and Mode-of-Action to Rational Design Concepts, pages 309–321. Springer, 2025.

[Figure 4: Example input interface of the HelixDesign-Binder Server.]

[Figure 5: Example output display page of the HelixDesign-Binder Server.]