Table 3: Imitation learning accuracy on the RoboMimic [15] environment. Our method (in green) compared against baselines (in red) and ablations (in blue). See text for details. ... diffusion policy that uses 10 DDIM [17] steps, Row 3: conventional flow matching-based policy [5], and Row 4: Streaming Diffusion Policy [14], ...
https://arxiv.org/abs/2505.21851v1
parallelize action generation with robot execution. In practice, this avoids delays and jerky robot movements. Diffusion policy can be sped up by running fewer diffusion steps via DDIM [17], and flow-matching policies are likewise faster than diffusion policies. However, this speed appears to come at the cost of sometimes si...
https://arxiv.org/abs/2505.21851v1
noise levels. The rolling-buffer approach has been applied to video prediction [33] and character motion generation [34]. Our method is more economical, since it computes only as many actions as are streamed to the robot, without requiring a buffer. We evaluate our method against Streaming Diffusion Policy in Sec. 6. C...
https://arxiv.org/abs/2505.21851v1
Figure 3: A toy example illustrating how streaming flow policy matches the marginal distribution of actions in the trajectory at all time steps, but not necessarily their joint distribution. The x-axis represents a 1-D action space, and the y-axis represents both trajectory time and flow time. (a) The bi-modal training set ...
https://arxiv.org/abs/2505.21851v1
that $a \notin Q \Rightarrow p_\xi(a \mid t) < \epsilon$, for some small $\epsilon > 0$ and for all $\xi, t$. Then we have from Eq. 5 that $p_\xi(a \mid t) < \epsilon \Rightarrow p^*(a \mid t, h) < \epsilon$ for all $t \in [0, 1]$. Therefore, the probability of sampling an action $a$ that violates the constraint $Q$ is extremely low. 9.4 SFPs can learn convex velocity constraints. Theorem 2 of Lipman et al. [3] implies tha...
https://arxiv.org/abs/2505.21851v1
diffusion probabilistic models. Advances in Neural Information Processing Systems (NeurIPS) , 33:6840–6851, 2020. [10] A. Block, A. Jadbabaie, D. Pfrommer, M. Simchowitz, and R. Tedrake. Provable guarantees for generative behavior cloning: Bridging low-level stability and high-level behavior. In Proceedings of Advances...
https://arxiv.org/abs/2505.21851v1
rectified flow transformers for high-resolution image synthesis. In Proceedings of the 41st International Conference on Machine Learning (ICML) , 2024. [25] N. Ma, M. Goldstein, M. S. Albergo, N. M. Boffi, E. Vanden-Eijnden, and S. Xie. SiT: Exploring flow and diffusion-based generative models with scalable interpolant...
https://arxiv.org/abs/2505.21851v1
in an extended state space by introducing a latent variable $z \in \mathcal{A}$. The latent variable $z$ decouples stochasticity from the flow trajectory, allowing us to sample multiple modes of the trajectory distribution at test time while deterministically starting the sampling process from the most recently generated action. We now ...
https://arxiv.org/abs/2505.21851v1
1). Intuitively, the variable $a$ follows the shape of the action trajectory $\xi(t)$ with an error starting from $a_0 - \xi(0)$ and decreasing with an exponential factor due to the stabilizing gain. However, it uses the sampled noise variable $z_0 \sim \mathcal{N}(0, I)$ to increase the standard deviation from $\sigma_0$ around $\xi(0)$ to $\sigma_1$ around $\xi(1)$. This is...
https://arxiv.org/abs/2505.21851v1
learned velocity field. (c, d) Show the $a$- and $z$-projections, respectively, of trajectories sampled from the learned velocity field. By construction, $a(0)$ deterministically starts from the most recently generated action, whereas $z(0)$ is sampled from $\mathcal{N}(0, 1)$. Trajectories starting with $z(0) < 0$ are shown in blue, and those w...
https://arxiv.org/abs/2505.21851v1
arXiv:2505.21852v1 [cs.LG] 28 May 2025. A Provable Approach for End-to-End Safe Reinforcement Learning. Akifumi Wachi∗, Kohei Miyaguchi∗, Takumi Tanabe∗, Rei Sato∗, Youhei Akimoto†‡. ∗LY Corporation, †University of Tsukuba, ‡RIKEN AIP. {akifumi.wachi, kmiyaguc, takumi.tanabe, sato.rei}@lycorp.co.jp, akimoto@cs.tsukuba.ac.jp. Abstract: A l...
https://arxiv.org/abs/2505.21852v1
through safe online policy evaluation via Gaussian processes. A key advantage of PLS is that safety is guaranteed, at least with high probability, throughout the entire process. ... safe policy in a real environment due to distribution-mismatch issues between the offline data and the actual environment, even though training can proc...
https://arxiv.org/abs/2505.21852v1
cumulative reward while satisfying pre-specified safety constraints, all from a static dataset. Because the policy is never deployed during training, offline safe RL is especially appealing for safety-critical domains. Le et al. [30] pioneered this direction with an algorithm that optimizes return under safety const...
https://arxiv.org/abs/2505.21852v1
context to an action distribution, subsequently identifying a joint probability distribution $P^\pi$ on $\Xi$ such that $a_t \sim \pi(x_t)$ and $(r_t, g_t, s_{t+1}) \sim P_T(s_t, a_t)$ for all $t \in [H]$. Given a trajectory $\tau = (\xi_1, \xi_2, \ldots, \xi_H)$, returns are given by $\hat{R}(\tau) := \sum_{t=1}^{H} r(s_t, a_t)$ for reward and $\hat{G}(\tau) := \sum_{t=1}^{H} g(s_t, a_t)$ for safety cost, respectively. We ...
https://arxiv.org/abs/2505.21852v1
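The per-trajectory returns above are plain sums over the horizon. A minimal sketch, assuming a trajectory is represented as a list of (reward, safety-cost) pairs (the data layout is an illustrative assumption, not the paper's code):

```python
def returns(trajectory):
    """Compute reward return R_hat(tau) = sum_t r_t and safety-cost return
    G_hat(tau) = sum_t g_t from a trajectory of (reward, safety_cost) pairs."""
    R_hat = sum(r for r, _ in trajectory)
    G_hat = sum(g for _, g in trajectory)
    return R_hat, G_hat

# A length-3 trajectory: rewards (1.0, 0.5, 2.0), safety costs (0.0, 0.2, 0.1)
R, G = returns([(1.0, 0.0), (0.5, 0.2), (2.0, 0.1)])
```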
probability densities, and $\Phi(\theta) \ge 0$ is a penalty term representing inductive biases in parameter optimization. The output of DT is then given by $\pi_{\hat\theta, R}$, where $\pi_{\theta, R}$ denotes the policy associated with $p_\theta(\cdot \mid \cdot, R)$. 4.3 Constrained Decision Transformer. Constrained decision transformer (CDT, [37]) is a promising paradigm that ...
https://arxiv.org/abs/2505.21852v1
discrepancies between the target and actual returns, there seem to be some relations that can be captured using data. Our goal here is to theoretically understand when and how closely the CDT policy $\pi_{\hat\theta, \mathbf{z}}$ achieves the target returns $\mathbf{z}$. Unfortunately, however, given the architecture and learning complexity of CDTs, it i...
https://arxiv.org/abs/2505.21852v1
is decomposed into an unbiased Gaussian-process term $H^2 F(\mathbf{z})/\sqrt{n}$, a small bias term $\varepsilon(\mathbf{z})$, and an asymptotically negligible term $o_P(1/\sqrt{n})$. Remark 1 (Smoothness). Examining the explicit form of the covariance function $k(\cdot, \cdot)$ reveals that $F(\cdot)$ is smooth (under suitable conditions). Specifically, the smoothness of $F(\cdot)$ is known...
https://arxiv.org/abs/2505.21852v1
$= k_\diamond(\mathbf{z}, \mathbf{z}) - \mathbf{k}_{\diamond,N}(\mathbf{z})^\top (K_{\diamond,N} + \nu_\diamond^2 I_N)^{-1}\, \mathbf{k}_{\diamond,N}(\mathbf{z})$, where $\mathbf{k}_{\diamond,N}(\mathbf{z}) = [k_\diamond(\mathbf{z}_1, \mathbf{z}), \ldots, k_\diamond(\mathbf{z}_N, \mathbf{z})]^\top$, $K_{\diamond,N}$ is the positive definite kernel matrix $[k_\diamond(\mathbf{z}, \tilde{\mathbf{z}})]_{\mathbf{z}, \tilde{\mathbf{z}} \in Z_N}$, and $I_N \in \mathbb{R}^{N \times N}$ is the identity matrix. Finally, we assume that $J_g(\pi_{\mathbf{z}})$ is $L$-Lipschitz continuous with respect to some distance metric $d(\cdot, \cdot)$ in $\mathcal{Z}$. This assumption is rather mil...
https://arxiv.org/abs/2505.21852v1
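The posterior-variance expression above is the standard Gaussian-process formula. A minimal NumPy sketch, assuming an RBF kernel and scalar noise variance for illustration (kernel choice and parameter values are assumptions, not the paper's configuration):

```python
import numpy as np

def rbf(x, y, lengthscale=1.0):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * l^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * lengthscale ** 2))

def posterior_variance(z, Z_N, noise_var=0.1, k=rbf):
    """sigma^2(z) = k(z, z) - k_N(z)^T (K_N + nu^2 I_N)^{-1} k_N(z)."""
    k_N = np.array([k(zi, z) for zi in Z_N])                   # k_N(z)
    K_N = np.array([[k(zi, zj) for zj in Z_N] for zi in Z_N])  # kernel matrix
    N = len(Z_N)
    return k(z, z) - k_N @ np.linalg.solve(K_N + noise_var * np.eye(N), k_N)
```

The variance shrinks toward zero near observed points and approaches the prior variance far away from them, which is what drives the safe-exploration strategy.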
$R$ and $G$. See Appendix I for the full proofs. Assumption 7 (Initial safe set). There exists a singleton seed set $\mathcal{Z}_0$ that is known to satisfy the safety constraint; that is, for all $\mathbf{z} \in \mathcal{Z}_0$, $J_g(\pi_{\mathbf{z}}) \le b$ holds. Theorem 2 (Safety guarantee). At every iteration $j$, suppose that $\alpha_{g,j}$ is set as in (5) and the target returns $\mathbf{z}_j$ are chosen...
https://arxiv.org/abs/2505.21852v1
(DICE) family, is specifically designed for offline safe RL and directly estimates the stationary distribution correction of the optimal policy in terms of reward returns under safety constraints. CDT [37] is a DT-based algorithm that learns a policy conditioned on the target returns, as discussed in Section 2 as a pr...
https://arxiv.org/abs/2505.21852v1
basis-function kernels to optimize the target returns for maximizing the reward under the safety constraint. Main results. Table 1 summarizes our experimental results under a safety cost threshold of b = 20. Additional results, including Table 6 for b = 40, are provided in Appendix J. Notably, PLS is the only method th...
https://arxiv.org/abs/2505.21852v1
P. Christiano, J. Schulman, and D. Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565 , 2016. [5]F. Berkenkamp, M. Turchetta, A. Schoellig, and A. Krause. Safe model-based reinforcement learning with stability guarantees. In Advances in Neural Information Processing Systems (NeurIPS) , 2017. [6]S. Bh...
https://arxiv.org/abs/2505.21852v1
Wang, and W. Li. Constraint-conditioned actor-critic for offline safe reinforcement learning. In International Conference on Learning Representations (ICLR) , 2025. [23] B. Hambly, R. Xu, and H. Yang. Recent advances in reinforcement learning in finance. Mathematical Finance , 33(3):437–503, 2023. [24] K.-C. Hsu, A. Z....
https://arxiv.org/abs/2505.21852v1
2022. [39] S. Paternain, M. Calvo-Fullana, L. F. Chamon, and A. Ribeiro. Safe policies for reinforcement learning via primal-dual methods. arXiv preprint arXiv:1911.09101 , 2019. [40] R. F. Prudencio, M. R. Maximo, and E. L. Colombini. A survey on offline reinforcement learning: Taxonomy, review, and open problems. IEE...
https://arxiv.org/abs/2505.21852v1
Yu, T. Zhang, and D. Zhao. Constraint-conditioned policy optimization for versatile safe reinforcement learning. Advances in Neural Information Processing Systems (NeurIPS) , 36:12555–12568, 2023. [59] C. Yu, J. Liu, S. Nemati, and G. Yin. Reinforcement learning in healthcare: A survey. ACM Computing Surveys (CSUR) , 5...
https://arxiv.org/abs/2505.21852v1
small constants $\epsilon_r, \epsilon_s, \delta \ge 0$ such that, if $(r, s') \sim P_T(s, a)$: 1. the reward density $p_r(r' \mid s, a) := \frac{d}{dr'} P_T\{r \le r' \mid s, a\}$ is well-defined and bounded by $\epsilon_r$ outside the $\delta$-neighborhood of $\hat{r}(s, a)$, i.e., $\sup_{r : \|r - \hat{r}(s, a)\|_\infty > \delta} p_r(r \mid s, a) \le \epsilon_r$; and 2. the successor state $s'$ coincides with $\hat{s}'(s, a)$ with probability at least $1 - \epsilon_s$, for al...
https://arxiv.org/abs/2505.21852v1
Note that while the original DT is for a single-dimensional reward function, we present (12) by extending it to multi-dimensional settings. We introduce some notation and conditions on the probabilistic model $P$ and the penalty $\Phi$. Let us define a regularized risk of $\theta$ relative to $\beta_R$ by $R_\Phi(\theta) := \mathbb{E}_\beta\, \mathbb{E}_{t \sim \mathrm{Unif}[H]}\big[ D_{\mathrm{KL}}(\beta_{\hat{R}}(x_t) \,\|\, \pi_{\theta, \hat{R}}(x...
https://arxiv.org/abs/2505.21852v1
$f(R \mid s_1)$ is bounded away from $0$ and $\infty$, while Theorem 1 of [9] is not. Conversely, Theorem 1 of [9] is applicable when there is a nonzero probability of exactly $R = \hat{R}$, while our result is not, since then $f(R \mid s_1) = \infty$. Remark 7. Our result also extends Theorem 1 in Brandfonbrener et al. [9] in allowing the transition kernel $P_T$...
https://arxiv.org/abs/2505.21852v1
on $\phi(x_1)$ since, by Assumption 11, $\|J(\beta_R) - R\|_\infty = \frac{\phi(x_1)}{f(R \mid x_1)} = \frac{\phi(x_1)}{\eta_R}$. (28) To this end, we will make use of $\hat{P}_T : \mathcal{S} \times \mathcal{A} \to \Delta(\mathbb{R}^m \times \mathcal{S})$, the near-deterministic component of $P_T$ such that $d\hat{P}_T(r, s' \mid s, a) = \frac{\mathbb{I}\{(r, s') \in \hat{T}(s, a)\}}{P_T(\hat{T}(s, a) \mid s, a)}\, dP_T(r, s' \mid s, a)$, (29) where $\hat{T}(s, a) = B_\infty(\hat{r}(s, a), \delta) \times \{\hat{s}'(s, a)\} \subset \mathbb{R}^m \times \mathcal{S}$ is the image o...
https://arxiv.org/abs/2505.21852v1
[51] associated with the criterion function $M_\theta(a \mid x, R) := \ln \frac{p_\theta(a \mid x, R)}{p_{\theta^*}(a \mid x, R)} - \Phi(\theta) + \Phi(\theta^*)$. (38) Also note that $M_\theta$ is locally bounded in the sense that, for every $\ell_2$-ball $U$ in $\Theta$ with a sufficiently small radius $\rho > 0$, $\mathbb{E}_\beta\, \mathbb{E}_{t \sim \mathrm{Unif}[H]}\big[ \sup_{\theta \in U} M_\theta(a_t \mid x_t, \hat{r}) \big]$ (39) $\le \mathbb{E}_\beta\, \mathbb{E}_{t \sim \mathrm{Unif}[H]}\big[ M_{\theta_0}(a_t \mid x_t, \hat{r}_k) + \rho \sup_{\theta \in U} \|\psi_\theta(a_t \mid x_t, \hat{r})\|_2 \big]$ (4...
https://arxiv.org/abs/2505.21852v1
$\cdots\, \dot{\ell}_\theta(a_t \mid x_t)\big]$. (53) Proof. Let $\omega > 0$ and fix $\lambda \in \mathbb{R}^d$ arbitrarily. Set $\pi = \pi_{\theta + \omega\lambda}$ and $\pi' = \pi_\theta$, and let $\nu$ be the base measure on $\mathcal{A}$ relative to which $p_\theta(a \mid s)$ is defined. Now, divide both sides of (47) by $\omega$, and take the limit $\omega \to 0$ to obtain $\lambda^\top \nabla_\theta J(\pi_\theta) = \sum_{t=1}^{H} \lim_{\omega \to 0} \mathbb{E}_{\pi_\theta} \int Q^{\pi_\theta}(x_t, a)\, \frac{p_{\theta + \omega\lambda}(a \mid x_t) - p_\theta(a \mid x_t)}{\omega}\, d\nu(a)$ (54) $= \sum_{t=1}^{H} \mathbb{E}_{\pi_\theta} \int Q^{\pi_\theta}(x_t, a)\,$...
https://arxiv.org/abs/2505.21852v1
regret. Thus, there exists $\hat{\mathbf{z}} \in \mathcal{Z}$ in the samples such that $J_r(\pi_{\hat{\mathbf{z}}}) \ge J_r(\pi_{\mathbf{z}^\star}) - E$. J Experiment Details and Additional Results. J.1 Computational Resources. Our experiments were conducted on a workstation with Intel(R) Xeon(R) Silver 4316 CPUs @ 2.30GHz and one NVIDIA A100-SXM4-80GB GPU. J.2 Hyperparameters. We use the OSRL libra...
https://arxiv.org/abs/2505.21852v1
regarding the initial safe set of target returns.

Table 4: Safe target return range (Z0) for PLS (Bullet-Safety-Gym).

Parameter  Ant-Circle  Ant-Run     Car-Circle  Drone-Circle  Drone-Run
Reward     [250, 300]  [700, 750]  [400, 475]  [700, 720]    [400, 450]
Safety     [0, 5]      [0, 5]      [0, 5]      [0, 5]        [0, 5]

Table 5: Safe target return range (...
https://arxiv.org/abs/2505.21852v1
-Velocity  Safety cost ↓  14.10±3.46  6.34±5.46  0.10±0.11  0.00±0.00  0.05±0.11  1.22±0.09  0.01±0.00
Hopper     Reward ↑       0.85±0.19   0.40±0.21  0.23±0.00  0.05±0.07  0.67±0.03  0.60±0.17  0.84±0.00
-Velocity  Safety cost ↓  5.30±3.85   6.08±3.09  2.75±0.04  0.46±0.17  0.56±0.56  0.60±0.63  0.20±0.03

[Plot: metric curves over GP iterations 0–10.] ...
https://arxiv.org/abs/2505.21852v1
arXiv:2505.21854v1 [cs.CV] 28 May 2025. Rethinking Gradient-based Adversarial Attacks on Point Cloud Classification. Jun Chen1,3∗, Xinke Li2∗, Mingyue Xu4, Tianrui Li1,3, Chongshou Li1,3†. 1School of Computing and Artificial Intelligence, Southwest Jiaotong University; 2College of Computing, City University of Hong Kong; 3Engineerin...
https://arxiv.org/abs/2505.21854v1
of Figure 1. Recently, the HiT-Adv [ 16] method embeds perturbations into less perceptible regions through deformations, achieving an improved balance between attack strength and invisibility. Nonetheless, structural deformations may still appear unnatural under certain viewing conditions. The root cause lies in the fa...
https://arxiv.org/abs/2505.21854v1
developing graph neural networks [28, 29, 30, 31, 32] to further enhance performance. Adversarial Attack on Point Cloud Classification. Although advanced point cloud classification models achieve strong performance, recent studies [7, 8, 9, 10] indicate their vulnerability to adversarial attacks. Xiang et al. [7] pion...
https://arxiv.org/abs/2505.21854v1
the result of the $i$-th point at the $t$-th iteration, $\eta$ is the step size, and $\vec{n}$ is a unit vector indicating only the update direction. Through multiple iterations, the perturbation of all points is gradually optimized until the maximum number of iterations is reached, at which point the adversarial point cloud $P'$ is genera...
https://arxiv.org/abs/2505.21854v1
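The iterative update described above, $p_i^{t+1} = p_i^t + \eta\,\vec{n}_i$, can be sketched as follows. The function name, `grad_fn` interface, and default values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def iterative_attack(P, grad_fn, eta=0.01, num_iters=50):
    """Gradient-based point cloud attack sketch.

    Each point p_i is moved by eta along a unit direction n_i derived from
    the gradient of the attack loss, mirroring p_i^{t+1} = p_i^t + eta * n_i.
    P: (n, 3) point cloud; grad_fn(P) -> (n, 3) gradient of the attack loss.
    """
    P_adv = P.copy()
    for _ in range(num_iters):
        g = grad_fn(P_adv)                      # per-point loss gradient
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        n = g / np.maximum(norms, 1e-12)        # unit direction per point
        P_adv = P_adv + eta * n
    return P_adv
```

Normalizing the gradient to a unit vector means the step size alone controls the per-iteration displacement, which is what makes the adaptive step-size search in the next excerpt meaningful.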
denoted as $\eta_{\mathrm{best}}$. Otherwise, we proceed with a modified binary search to find the optimal step size within a predefined range. During this process, we aim to identify the smallest step size that still ensures a successful attack and maintains imperceptibility, which is then recorded as $\eta_{\mathrm{best}}$. This approach enables us t...
https://arxiv.org/abs/2505.21854v1
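The modified binary search for the smallest successful step size can be sketched as below; the predicate interface, range, and iteration count are assumptions for illustration, and the sketch assumes success is monotone in the step size over the searched range:

```python
def find_best_step_size(attack_succeeds, lo=1e-4, hi=0.05, iters=20):
    """Binary search for the smallest step size that still yields a
    successful attack (sketch).

    attack_succeeds(eta) -> bool runs the attack at step size eta and
    reports success. Returns None if no eta in [lo, hi] succeeds.
    """
    if not attack_succeeds(hi):
        return None
    eta_best = hi
    for _ in range(iters):
        mid = (lo + hi) / 2
        if attack_succeeds(mid):
            eta_best, hi = mid, mid   # success: try an even smaller step
        else:
            lo = mid                  # failure: need a larger step
    return eta_best
```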
the size of the search space and computational efficiency. We then perform attacks only on the key sub-point clouds whose predictions match the original label. To achieve selective updating, we define a binary mask $M = [m_1, m_2, \ldots, m_n]$, where $n$ is the total number of points in the original point cloud. Each element ...
https://arxiv.org/abs/2505.21854v1
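The binary mask M makes the selective update a simple element-wise product; a minimal sketch, assuming an (n, 3) point cloud and a per-point {0, 1} mask:

```python
import numpy as np

def masked_update(P, delta, M):
    """Apply perturbation delta only to points selected by binary mask M.

    P: (n, 3) point cloud; delta: (n, 3) perturbation; M: (n,) with entries
    in {0, 1}. Points with m_i = 0 are left untouched.
    """
    M = np.asarray(M).reshape(-1, 1)   # broadcast mask over the xyz axes
    return P + M * delta
```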
transformation attack: HiT-Adv [16]. These seven methods were used as benchmark approaches for comparison. Evaluation Metrics. We use five common metrics: Attack Success Rate (ASR), Chamfer distance [52], Hausdorff distance [53], L2 norm distance, and average time cost (A.T). 5.2 Performance Evaluation. Imperceptibilit...
https://arxiv.org/abs/2505.21854v1
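The Chamfer and Hausdorff distances used as imperceptibility metrics can be computed directly for small point clouds. A minimal NumPy sketch using brute-force O(nm) pairwise distances (an illustration, not an optimized or paper-exact implementation; Chamfer conventions vary, and this one sums the two directed means):

```python
import numpy as np

def pairwise_dists(A, B):
    """(n, m) Euclidean distances between point sets A (n, 3) and B (m, 3)."""
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def chamfer(A, B):
    """Sum of mean nearest-neighbor distances in both directions."""
    D = pairwise_dists(A, B)
    return D.min(axis=1).mean() + D.min(axis=0).mean()

def hausdorff(A, B):
    """Symmetric Hausdorff distance: worst-case nearest-neighbor distance."""
    D = pairwise_dists(A, B)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```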
mechanism, and sub-point cloud-based optimization, leading to improved imperceptibility without noticeable structural distortions. Attack Robustness Against Defenses. To evaluate the robustness of our attack method against various defense mechanisms, we selected three commonly used defense techniques: SOR, SRS, and DUP...
https://arxiv.org/abs/2505.21854v1
and Division. To investigate the impact of the weight factor and sub-point-cloud division on attack performance, we removed certain operations and observed the changes in performance. The results are shown in Table 3. The table shows that the weight term has the greatest impact on performance...
https://arxiv.org/abs/2505.21854v1
and Tian Xia. Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition , pages 1907–1915, 2017. [4]Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. Target-driven visual navigation in i...
https://arxiv.org/abs/2505.21854v1
learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 652–660, 2017. [19] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advanc...
https://arxiv.org/abs/2505.21854v1
Ali Thabet, and Bernard Ghanem. Advpc: Transferable adversarial perturbations on 3d point clouds. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII 16 , pages 241–257. Springer, 2020. [34] Binbin Liu, Jinlai Zhang, and Jihong Zhu. Boosting 3d adversarial atta...
https://arxiv.org/abs/2505.21854v1
Intelligence , volume 37, pages 2420–2428, 2023. [48] Jinlai Zhang, Lyujie Chen, Binbin Liu, Bo Ouyang, Qizhi Xie, Jihong Zhu, Weiming Li, and Yanmei Meng. 3d adversarial attacks beyond point cloud. Information Sciences , 633:491–503, 2023. [49] Nicholas Carlini and David Wagner. Towards evaluating the robustness of ne...
https://arxiv.org/abs/2505.21854v1
As shown in Table 5, the strategy consistently boosts ASR to 100% across all victim models (PointNet, PointNet++, DGCNN, CurveNet), while maintaining low attack time. Moreover, Table 6 presents the performance under various defense mechanisms using PointNet. Without the adaptive step size, ASR drops significantly; howe...
https://arxiv.org/abs/2505.21854v1
Extracting Research Instruments from Educational Literature Using LLMs Jiseung Yoo1, Curran Mahowald2, Meiyu Li3, and Wei Ai3 1College of Education, University of Maryland, College Park, USA 2Annenberg Institute, Brown University, USA curran_mahowald@brown.edu 3College of Information, University of Maryland, College Pa...
https://arxiv.org/abs/2505.21855v1
and classifies entities such as names and domain-specific terms, while RE detects and categorizes relationships between them. Although foundational LLMs and commercial platforms exhibit strong extraction and reasoning capabilities, they often struggle with hallucinations and domain specificity [12]. To mitigate these...
https://arxiv.org/abs/2505.21855v1
2.2 Prompt Design. This study uses iterative prompts and targeted follow-up questions to enhance context understanding and filter more accurate responses [13,14]. The NER process for instrument extraction employs a three-step prompting approach. First, an extraction prompt instructs the model to retrieve ...
https://arxiv.org/abs/2505.21855v1
Method Excerpt / Method Section Excerpt w/ Extraction + Summary + Decision

Model          Accuracy  Precision  Recall  F1
GPT-4o-mini    0.472     0.508      0.901   0.619
GPT-4o         0.491     0.514      0.943   0.665
GPT-o1         0.641     0.696      0.904   0.786
Claude-sonnet  0.615     0.644      0.929   0.761
Llama 3.3 70B  0.396     0.608      0.639   0.623

Table 2 highlights the system's streng...
https://arxiv.org/abs/2505.21855v1
Feldstein, D., Houston, T.K., Kaatz, S., Whelan, C., Green, M.: Instruments for Evaluating Education in Evidence-Based Practice: A Systematic Review. JAMA. 296, 1116 (2006). https://doi.org/10.1001/jama.296.9.1116 6. Cox, J., Foster, B., Bamat, D.: A review of instruments for measuring social and emotional learnin...
https://arxiv.org/abs/2505.21855v1
arXiv:2505.21866v1 [eess.SP] 28 May 2025. CSI-Bench: A Large-Scale In-the-Wild Dataset for Multi-task WiFi Sensing. Guozhen Zhu, Yuqian Hu, Weihang Gao, Wei-Hsiang Wang, Beibei Wang, K. J. Ray Liu. Origin Research. Abstract: WiFi sensing has emerged as a compelling contactless modality for human activity monitoring by captur...
https://arxiv.org/abs/2505.21866v1
of human-centric sensing tasks, enabling robust model development across diverse hardware setups and real-world scenarios. CSI-Bench captures real-world signal variability across diverse environments, including apartments, multi-room houses, offices, and public indoor spaces. Data is recorded continuously from a broad ...
https://arxiv.org/abs/2505.21866v1
including human actions (jumping, running, walking, hand waving, falling, breathing), non-human motions (pet movement, iRobot, fan), and empty environments. In each sample, the x-axis represents time, and the y-axis represents the subcarrier index. XRF55 [ 36] introduces a large corpus of RF-based activity data for act...
https://arxiv.org/abs/2505.21866v1
the data, which allows us to later align the streams in software. Routers handle CSI extraction, buffering, and data upload to cloud servers, running either Linux or FreeRTOS depending on their chipset. CSI format. Due to hardware diversity, the CSI data in CSI-Bench varies in subcarrier granularity, antenna configurat...
https://arxiv.org/abs/2505.21866v1
...                         6.7k   17  6   2   70/15/15, easy/med/hard
Breath Detection            2  ST  100k   3   3   6   70/15/15, easy/med/hard
Motion Source Recognition   4  ST  60.9k  35  10  1   70/15/15, easy/med/hard
Room-level Localization     6  ST  7.1k   8   6   8   70/15/15, easy/med/hard
Proximity Recognition       4  MT  20.3k  6   6   11  70/15/15, user/env/device
Human Activity Recognition  5...
https://arxiv.org/abs/2505.21866v1
[Figure: CSI samples at sampling rates of 47.62 Hz, 74.72 Hz, and 98.23 Hz, showing packet-interval traces, per-subcarrier amplitude profiles, and time–subcarrier heatmaps.]
https://arxiv.org/abs/2505.21866v1
in the frequency dimension as needed. This ensures all samples have consistent input shapes across the dataset. 5 Benchmark Design. 5.1 Task Suite and Metrics. CSI-Bench supports a suite of supervised classification tasks for WiFi sensing, covering key applications in health monitoring and ambient intelligence. Each ta...
https://arxiv.org/abs/2505.21866v1
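Padding along the frequency dimension to a fixed subcarrier count can be sketched as below; the (time, subcarriers) layout, zero-padding, and truncation behavior are assumptions for illustration, not necessarily the dataset's exact preprocessing:

```python
import numpy as np

def pad_freq(csi, target_subcarriers):
    """Zero-pad (or truncate) the frequency axis so every CSI sample has a
    consistent shape. csi: (time, subcarriers) array."""
    t, f = csi.shape
    if f >= target_subcarriers:
        return csi[:, :target_subcarriers]       # truncate extra subcarriers
    pad = target_subcarriers - f
    return np.pad(csi, ((0, 0), (0, pad)))       # zero-pad to target width
```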
mean ± std (%) over three runs.

Model           Fall Detection           Breathing Detection      Room-Level Localization  Motion Source Recognition
                Acc         F1           Acc         F1           Acc         F1           Acc         F1
MLP [32]        92.16±0.91  92.17±0.92   97.59±0.08  97.59±0.08   87.14±0.80  86.90±0.83   98.86±0.07  98.86±0.07
ResNet-18 [16]  94.88±0.26  94.89±0.26   98.58±0.17  98...
https://arxiv.org/abs/2505.21866v1
performance. Moreover, because all tasks are trained jointly in a single pass, the wall-clock training time is reduced by nearly 3× compared to training separate models for each task. These gains in model size and training efficiency make our approach especially suitable for deployment on resource-constrained edge devic...
https://arxiv.org/abs/2505.21866v1
time, performance drops under domain shifts highlight the need for future research on adaptive and generalizable sensing models. CSI-Bench provides a comprehensive testbed to support this work and offers a scalable, practical resource for advancing WiFi sensing systems in healthcare and beyond. We release the full data...
https://arxiv.org/abs/2505.21866v1
arXiv preprint arXiv:2106.09685 , 2021. [19] Pengli Hu, Chengpei Tang, Kang Yin, and Xie Zhang. Wigr: A practical wi-fi-based gesture recognition system with a lightweight few-shot network. Applied Sciences , 11(8):3329, 2021. [20] Yuqian Hu, Guozhen Zhu, Wei-Hsiang Wang, Beibei Wang, and K. J. Ray Liu. What you need i...
https://arxiv.org/abs/2505.21866v1
Fei Wang, Yizhe Lv, Mengdie Zhu, Han Ding, and Jinsong Han. Xrf55: A radio frequency dataset for human indoor action analysis. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies , 8(1), 2024. [37] Wei Wang, Alex X. Liu, Muhammad Shahzad, Kang Ling, and Sanglu Lu. Understanding and model...
https://arxiv.org/abs/2505.21866v1
A.2 Environments Data is collected across 26 distinct real-world environments (E01–E26), including studio apartments, multi-bedroom apartments, townhouses, and multi-floor single-family houses. These environments vary in layout complexity, room geometry, wall materials, and furniture density, introducing rich multipath...
https://arxiv.org/abs/2505.21866v1
[Table: per-environment participant demographics and devices. E06: 2M/4F, ages 26–41, heights 163–173, weights 45–90, MSR, P01–P20, PM01, environments E11–E13, E15–E20; users U03–U04, U08–U10, U13, U18, U20, U25–U27, U29–U35, UM06; E11–E20: 15M/3F, ages 23–35, heights 155–185, weights 50–90, F01–F04, E11–E13. Devices: Amazon Plug, Govee Plug, Wyze Plug, Eightree Plug, Echo Plus, Google Nest, Apple HomePod, Echo Spot, Echo Show 8, Echo d...]
https://arxiv.org/abs/2505.21866v1
2.4/5 GHz at 30 or 100 Hz. Bandwidths vary from 20–80 MHz. Data Collection Protocol. Users annotate their room presence and co-occupancy manually. Sessions reflect natural daily activities. Scale and Composition: 3,805 single-user samples; 3,257 multi-user samples; 6 diverse environments; 8 device types. Difficulty-L...
https://arxiv.org/abs/2505.21866v1
Appendix C. B Model Architectures and Training Details To support rigorous evaluation across in-distribution, cross-domain, and few-shot generalization, we implement and benchmark a suite of neural network models representative of contemporary time-series and vision-inspired architectures. All models are implemented in...
https://arxiv.org/abs/2505.21866v1
to 100 epochs, with early stopping based on validation loss (patience = 15). We use categorical cross-entropy as the loss function. The hyperparameters are tuned based on the models' accuracy on the validation set. Data is loaded from HDF5 using standardized splits as discussed in Section 5.2 and label mappings. Our experim...
https://arxiv.org/abs/2505.21866v1
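The early-stopping rule (patience = 15 on validation loss) can be sketched framework-agnostically; the `step_fn` interface is an assumption for illustration:

```python
def train_with_early_stopping(step_fn, max_epochs=100, patience=15):
    """Early stopping on validation loss (sketch).

    step_fn(epoch) -> val_loss runs one training epoch and returns the
    validation loss. Training stops after `patience` consecutive epochs
    without a new best validation loss.
    """
    best, best_epoch = float("inf"), -1
    for epoch in range(max_epochs):
        val_loss = step_fn(epoch)
        if val_loss < best:
            best, best_epoch = val_loss, epoch   # new best: reset patience
        elif epoch - best_epoch >= patience:
            break                                # no improvement: stop
    return best, best_epoch
```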
± 1.06 80.03 ± 1.19 PatchTST [29] 100.00 ± 0.00 100.00 ± 0.00 99.90 ± 0.19 99.95 ± 0.10 99.86 ± 0.17 99.86 ± 0.18 ResNet18 [16] 100.00 ± 0.00 100.00 ± 0.00 100.00 ± 0.00 100.00 ± 0.00 100.00 ± 0.00 100.00 ± 0.00 TimeSformer-1D [8] 100.00 ± 0.00 100.00 ± 0.00 100.00 ± 0.00 100.00 ± 0.00 100.00 ± 0.00 100.00 ± 0.00 Trans...
https://arxiv.org/abs/2505.21866v1
0.62 98.80 ± 0.55 98.63 ± 0.17 98.63 ± 0.17 98.08 ± 0.55 98.08 ± 0.55 ViT [11] 98.38 ± 0.87 98.41 ± 0.81 99.27 ± 0.32 99.27 ± 0.32 98.10 ± 0.45 98.10 ± 0.45 Table 12: Human Activity Recognition cross-domain performance. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs. ModelCross-Device Cross-...
https://arxiv.org/abs/2505.21866v1
1.61 Table 14: Proximity Recognition cross-domain performance. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs. ModelCross-Device Cross-Env Cross-User Acc F1 Acc F1 Acc F1 LSTM [17] 24.89 ± 2.97 24.29 ± 3.02 28.64 ± 1.08 26.76 ± 1.02 29.20 ± 0.55 23.83 ± 1.34 MLP [32] 28.73 ± 0.89 27.31 ± 0.8...
https://arxiv.org/abs/2505.21866v1
Evaluating the Retrieval Robustness of Large Language Models Shuyang Cao♠*, Karthik Radhakrishnan♡, David Rosenberg♡, Steven Lu♡, Pengxiang Cheng♡, Lu Wang♠, Shiyue Zhang♡ Bloomberg♡University of Michigan♠ {kradhakris1, drosenberg44, slu126, pcheng134, szhang1061}@bloomberg.net {caoshuy, wangluxy}@umich.edu Abstract Re...
https://arxiv.org/abs/2505.21870v1
are usually drawn from credible corpora like Wikipedia and trusted news outlets. To bridge this gap, this work benchmarks LLMs' robustness under realistic RAG setups. We consider an LLM retrieval robust if (1) its RAG performance is equal to or better than its non-RAG performance; (2) adding more retrieved docume...
https://arxiv.org/abs/2505.21870v1
retrieved documents from Wikipedia obtained by widely-used and strong retrievers. • We conduct a comprehensive empirical study of 11 modern LLMs with 3 different prompting strategies, revealing the generally good robustness of LLMs in more realistic settings and highlighting the consequences of their imperfect ro...
https://arxiv.org/abs/2505.21870v1
a completely irrelevant set of documents and observed non-trivial hallucination rates. Yoran et al. (2024) introduced the concept of retrieval robustness, which for retrieval-robust LLMs states that: (a) when relevant, the retrieved context should improve model performance; (b) when irrelevant, the retrieved context should n...
https://arxiv.org/abs/2505.21870v1
the effect of NDR. Results for various $k$s are then aggregated across all task samples, formally defined as:

$\mathrm{RSR}(q, k_i, o) = \mathbb{1}\left[\bigwedge_{j<i} f(q, k_i, o) \ge f(q, k_j, o)\right]$

$\mathrm{RSR} = \frac{1}{Z} \sum_{q \in Q} \sum_{k_i \in K,\, i > 1} \sum_{o \in O} \mathrm{RSR}(q, k_i, o) \quad (2)$

where $Z = |Q| \cdot (|K| - 1) \cdot |O|$. A high RSR indicates that performance rarely degrades when adding more retrieved document...
https://arxiv.org/abs/2505.21870v1
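Eq. (2) above averages, over all (query, retrieval size, order) triples, an indicator that performance at size k_i is no worse than at every smaller size. A minimal sketch; the nested-dict layout `scores[q][o] = [f(q, k_1, o), f(q, k_2, o), ...]` ordered by increasing k is an assumption for illustration:

```python
def rsr(scores):
    """Retrieval size robustness, following Eq. (2).

    scores[q][o] is a list of task scores f(q, k_i, o) ordered by increasing
    retrieval size k_1 < k_2 < ... Averages the indicator that f at k_i is
    >= f at all smaller k_j, over all q, o, and i > 1.
    """
    hits, total = 0, 0
    for per_order in scores.values():
        for f in per_order.values():
            for i in range(1, len(f)):
                hits += all(f[i] >= f[j] for j in range(i))
                total += 1
    return hits / total
```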
articles. We split each article into chunks by double newlines, resulting in 20 million chunks. Each chunk is treated as an independent "document" for retrieval. 4.2 LLM Systems. Backbone LLMs. 11 LLMs from three open-source families and two proprietary families are tested, including Llama-3 Instruct (3.1-8B, 3.1-70...
https://arxiv.org/abs/2505.21870v1
follow prior work and judge that the concatenated retrieved documents contain the gold answer if any substring is an exact match of any form of the gold answer (substring exact match) (Mallen et al., 2023). For reference, we also report the best model performance without RAG (Non-RAG Perf) to highlight the potential imp...
https://arxiv.org/abs/2505.21870v1
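The substring-exact-match check can be sketched as below. The normalization recipe (lowercasing, stripping punctuation and articles) is a common QA-evaluation convention and an assumption here, not necessarily the paper's exact implementation:

```python
import re
import string

def normalize(text):
    """Lowercase, strip punctuation, articles, and extra whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def substring_exact_match(documents_text, gold_answers):
    """True if any normalized gold answer appears as a substring of the
    normalized concatenation of the retrieved documents."""
    doc = normalize(documents_text)
    return any(normalize(ans) in doc for ans in gold_answers)
```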
to knowledge-rich LLMs (usually models of larger sizes), we need to be cautious about whether it will lead to performance degradation compared to non-RAG. Here, we use one example to show how low robustness reduces RAG efficacy. In Figure 5, solid lines illustrate the actual performance of Mistral-Large and o3-mi...
https://arxiv.org/abs/2505.21870v1
enlarging the gap between the two setups. This implies that models are constantly sacrificing some samples while enhancing others with larger retrieval sizes. We think that the increasing number of retrieved documents challenges models' ability to identify helpful documents and handle longer inputs, and thus leads ...
https://arxiv.org/abs/2505.21870v1
(see the prompt s2a.j2 in Appendix C) fails to enhance retrieval robustness in our evaluations. We conjecture that, compared to synthetic noisy contexts, realistic retrievers provide models with harder negative contexts that are more challenging for the model to identify. As we look into the maximum task performance ...
https://arxiv.org/abs/2505.21870v1
Hannaneh Hajishirzi, and Wen-tau Yih. 2024. Reliable, adaptable, and attributable language models with retrieval. arXiv preprint arXiv:2403.03187. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Ai...
https://arxiv.org/abs/2505.21870v1
Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianm...
https://arxiv.org/abs/2505.21870v1
Conger, Charlotte Barette, Chelsea Voss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Claudia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, Dane Sherb...
https://arxiv.org/abs/2505.21870v1
Phil Tillet, Philip Pronin, Philippe Tillet, Prafulla Dhariwal, Qiming Yuan, Rachel Dias, Rachel Lim, Rahul Arora, Rajan Troll, Randall Lin, Rapha Gontijo Lopes, Raul Puri, Reah Miyara, Reimar Leike, Renaud Gaubert, Reza Zamani, Ricky Wang, Rob Donnelly, Rob Honsby, Rocky Smith, Rohan Sahai, Rohit Ramchandani, Roma...
https://arxiv.org/abs/2505.21870v1
Liu, Tei-Wei Kuo, Nan Guan, et al. 2024a. Retrieval-augmented generation for natural language processing: A survey. arXiv preprint arXiv:2407.13193 . Siye Wu, Jian Xie, Jiangjie Chen, Tinghui Zhu, Kai Zhang, and Yanghua Xiao. 2024b. How easily do irrelevant inputs skew the responses of large language models? In First C...
https://arxiv.org/abs/2505.21870v1
[Figure 11 plots retrieval size robustness, retrieval order robustness, and performance robustness across the Llama, Command, Mistral, GPT, and Claude model families.] Figure 11: The three retrieval robustness metrics and task performance of experimented LLMs using vanilla prompting on HotpotQA. ...
https://arxiv.org/abs/2505.21870v1
[Legend: Llama 3.1 8B, Llama 3.1 70B, Command R, Command R+, Mistral Nemo, Mistral Large, GPT-4o, o3-mini, Claude 3.5 Sonnet.] Figure 19: Performance on ASQA with different retrievers and document orders.

non_rag_qa.j2

Answer the following question in a concise manner without explanation. Indicate your answer with "Answer:" and only include...
https://arxiv.org/abs/2505.21870v1
}}

Is the predicted answer a correct answer to the question?

IMPORTANT: Please strictly follow the following format in your response:
[Start answer]
<Your answer. Choose from: Yes, No>
[End answer]

answer_evaluation_asqa.j2

You will be given a question, gold answers to this question, and a...
https://arxiv.org/abs/2505.21870v1
arXiv:2505.21873v1 [q-bio.BM] 28 May 2025. HelixDesign-Binder: A Scalable Production-Grade Platform for Binder Design Built on HelixFold3. Jie Gao, Jun Li, Jing Hu, Shanzhuo Zhang, Kunrui Zhu, Yueyang Huang, Xiaonan Zhang, Xiaomin Fang∗. PaddleHelix team, Baidu Inc. ABSTRACT: Protein binder design is central to therap...
https://arxiv.org/abs/2505.21873v1
demands substantial computational resources, and limited sampling can constrain sequence diversity, reducing the likelihood of identifying high-affinity binders. Empirical evidence [12, 8, 9, 13] indicates that broader sequence exploration significantly enhances the probability of discovering stable, high-affinity candid...
https://arxiv.org/abs/2505.21873v1
is optimized for high-throughput workflows in high-performance computing (HPC) environments, enabling the rapid screening of thousands of candidate binders within a single iteration. It accommodates flexible design specifications, including both linear peptides and small proteins, making it well-suited for applications...
https://arxiv.org/abs/2505.21873v1
points for downstream sequence design. 2.2 Structure-Based Sequence Design. At this stage, binder amino acid sequences are generated based on the target proteins and binder backbones produced in the previous step. We utilize the ESM-IF1 inverse folding model [8], which generates sequences conditioned on backbone struct...
https://arxiv.org/abs/2505.21873v1
to assess the physical feasibility and binding specificity of predicted protein-protein interfaces. Key metrics include the interface predicted TM-score (ipTM) for structural similarity assessment, inter-chain predicted aligned error (PAE) for confidence evaluation of inter-molecular interactions, and geometric contac...
https://arxiv.org/abs/2505.21873v1
shown in Figure 2(a) and (b), the designed binders consistently achieved ipTM scores above 0.8, indicating that the structure prediction model considered these binder–target complexes to be highly reliable. Previous studies have demonstrated that ipTM scores are positively correlated with binding capability, supporting...
https://arxiv.org/abs/2505.21873v1
allows for comprehensive exploration of sequence space, improving the chances of identifying candidates with strong biophysical and structural properties even for challenging targets. This extensive sampling is not only computationally feasible within our framework; it also yields clear performance benefits. As shown in...
https://arxiv.org/abs/2505.21873v1
HelixDesign-Binder References [1] Richard Evans, Michael O'Neill, Alexander Pritzel, Natasha Antropova, Andrew Senior, Tim Green, Augustin Žídek, Russ Bates, Sam Blackwell, Jason Yim, et al. Protein complex prediction with alphafold-multimer. bioRxiv, 2021. [2] Josh Abramson, Jonas Adler, Jack Dunger, Ric...
https://arxiv.org/abs/2505.21873v1
[15] Stephen K Burley, Helen M Berman, Gerard J Kleywegt, John L Markley, Haruki Nakamura, and Sameer Velankar. Protein data bank (pdb): the single global macromolecular structure archive. Protein crystallography: methods and protocols , pages 627–641, 2017. [16] JunJie Wee and Guo-Wei Wei. Evaluation of alphafold 3’s ...
https://arxiv.org/abs/2505.21873v1