text | source |
|---|---|
ρ = 0.99999. The high correlation ρ = 0.99999 ensures that the analytic score function ∇_x log p_MoG(x) remains well-defined, despite the near-singular covariance. The physical constraint is defined as: F(x) = |x_0 − x_1|^2 = 0. (17) Baselines. DPS and ECI both integrate the analytical score using 1000-step Euler discretization over... | https://arxiv.org/abs/2505.22391v1 |
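The excerpt above refers to an analytic mixture-of-Gaussians score with a near-singular covariance. Below is a minimal sketch of such a score function; the two component means, equal weights, and 2-D setup are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative 2-D two-component Gaussian mixture with highly correlated
# (near-singular) covariance, as in the excerpt; means/weights are assumptions.
rho = 0.99999
cov = np.array([[1.0, rho], [rho, 1.0]])           # near-singular covariance
cov_inv = np.linalg.inv(cov)
means = [np.array([-1.0, -1.0]), np.array([1.0, 1.0])]
weights = [0.5, 0.5]

def mog_score(x):
    """Analytic score ∇_x log p_MoG(x) via the mixture's posterior weights."""
    dens = np.array([w * multivariate_normal.pdf(x, m, cov)
                     for w, m in zip(weights, means)])
    post = dens / dens.sum()                       # responsibility of each component
    # Each Gaussian component contributes -cov_inv @ (x - mean).
    grads = np.stack([-cov_inv @ (x - m) for m in means])
    return post @ grads

print(mog_score(np.array([0.3, 0.29])))            # finite even though rho ≈ 1
```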
arXiv:2505.22411v1 [cs.LG] 28 May 2025. Mitigating Overthinking in Large Reasoning Models via Manifold Steering. Yao Huang1, Huanran Chen2, Shouwei Ruan1, Yichi Zhang2, Xingxing Wei1∗, Yinpeng Dong2∗. 1Institute of Artificial Intelligence, Beihang University, Beijing 100191, China; 2College of AI, Tsinghua University, Beiji... | https://arxiv.org/abs/2505.22411v1 |
in repetitive verification loops or unproductive reasoning paths [8, 14, 36]. To mitigate such overthinking in LRMs, several approaches [3, 7, 14, 19] have recently been proposed. They often utilize external mechanisms to regulate reasoning and prevent overthinking, which can incur additional computational overhead for pr... | https://arxiv.org/abs/2505.22411v1 |
et al. [6] proposed Bi-directional Preference Optimization (BiPO), leveraging steering vectors derived from contrasting human preference pairs to customize attributes like truthfulness and hallucination. These approaches highlight the versatility of steering directions in manipulating models’ behaviors. Our work exten... | https://arxiv.org/abs/2505.22411v1 |
content following <think>\n comprises the model’s reasoning process and final answer, separated by </think>. Despite the excellent reasoning capabilities of these models, they often exhibit the overthinking phenomenon [8, 11] during the reasoning process, characterized by repetitive validation or redundant deliberatio... | https://arxiv.org/abs/2505.22411v1 |
positions i. The parameter α allows adapting the extent of overthinking mitigation, balancing the reduction of redundant reasoning against problem-solving accuracy. 4 Manifold Steering for Robust Intervention. Following our mechanistic analysis in Sec. 3, which identifies a single direction capturing overthinking in the... | https://arxiv.org/abs/2505.22411v1 |
into this manifold. To verify this, we employ a simple linear method, Principal Component Analysis (PCA), on the activations from the complete reasoning dataset D_reasoning = D_redundant ∪ D_concise at layer l. Let A^(l) = [h^(l)(x_1), . . . , h^(l)(x_N)] ∈ R^{d×N} denote the matrix of activation vectors h^(l)(x_i) ∈ R^d for inputs x_i ∈ D_reas... | https://arxiv.org/abs/2505.22411v1 |
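A minimal sketch of the PCA step described above: stack layer-l activations from the combined reasoning data and extract the top-k principal directions spanning the low-dimensional manifold M. The dimensions, k, and random stand-in activations are illustrative assumptions.

```python
import numpy as np

d, N, k = 512, 200, 10                 # toy sizes; the paper's d, N are larger
A = np.random.randn(d, N)              # stand-in for A^(l) = [h^(l)(x_1), ..., h^(l)(x_N)]

A_centered = A - A.mean(axis=1, keepdims=True)
# SVD of the centered activation matrix is equivalent to eigendecomposing
# the (d x d) sample covariance C^(l).
U, S, _ = np.linalg.svd(A_centered, full_matrices=False)
V_k = U[:, :k]                         # top-k principal directions (d x k)
P_M = V_k @ V_k.T                      # projector onto the manifold M
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by top-{k} PCs: {explained:.2%}")
```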
4.3 Manifold Steering. The interference direction r_other, quantified in Theorem 4.1, causes activation shifts that amplify through transformer layers and disrupt reasoning (Theorem 4.2). To eliminate this interference, a simple but effective approach is to set I − P_M in Eq. (6) to 0. Based on this insight, we propose Manif... | https://arxiv.org/abs/2505.22411v1 |
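A hedged sketch of the intervention the excerpt describes: project the raw steering direction onto the manifold M (equivalently, zero out the interference component r_other in the I − P_M subspace) before subtracting it from the activations. The function name, α, and calling convention are assumptions; P_M is the projector from the PCA sketch above.

```python
import numpy as np

def manifold_steer(h, r, P_M, alpha=1.0):
    """h: (d,) activation; r: (d,) raw overthinking direction; P_M: (d, d) projector."""
    r_manifold = P_M @ r                       # keep only the in-manifold component
    r_manifold /= np.linalg.norm(r_manifold)   # unit-normalize the steering direction
    # Remove the activation's projection onto the (cleaned) direction, scaled by alpha.
    return h - alpha * (h @ r_manifold) * r_manifold
```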
generation) and GPQA-Diamond (disciplinary knowledge). 5 Experiments 5.1 Experimental Setups We begin by briefly outlining the baseline methods, target LRMs, evaluation datasets, and metrics. For more detailed descriptions of the experimental settings, please refer to Appendix A. Baseline Methods. We compare our manifo... | https://arxiv.org/abs/2505.22411v1 |
Hmm, arctan(3/0). Wait, division by zero is undefined. Hmm, that might be a problem... Wait, is that correct? Let me double-check... Wait, another way to think about it: if I were to draw a line... But just to be extra sure, let me recall the conversion formulas... Wait, just to make sure I didn't mix up anything, l... | https://arxiv.org/abs/2505.22411v1 |
reasonable, as complex problems inherently require larger token budgets and may exceed the models’ internal capabilities, thereby constraining mitigation effectiveness. 5.3 Cross-Domain Transferability for Overthinking Mitigation To further investigate the transferability of manifold steering for overthinking mitigatio... | https://arxiv.org/abs/2505.22411v1 |
to assess its cross-task transferability. Prior studies [2, 40] demonstrate that while steering directions can suppress refusal features in models, some instances persist unless intervention strength is increased, which risks model collapse. Here, we apply our manifold steering method using the Qwen2.5-7B-Instruct ... | https://arxiv.org/abs/2505.22411v1 |
Zhuosheng Zhang, et al. Do not think that much for 2+3=? On the overthinking of o1-like LLMs. arXiv preprint arXiv:2412.21187, 2024. [9] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Sch... | https://arxiv.org/abs/2505.22411v1 |
Hubinger, and Alexander Matt Turner. Steering Llama 2 via contrastive activation addition. arXiv preprint arXiv:2312.06681, 2023. [27] Xiao Pu, Michael Saxon, Wenyue Hua, and William Yang Wang. ThoughtTerminator: Benchmarking, calibrating, and mitigating overthinking in reasoning models. arXiv preprint arXiv:2504.13... | https://arxiv.org/abs/2505.22411v1 |
Xiao Yang, Ranjie Duan, Dong Yan, Yinpeng Dong, and Jun Zhu. STAIR: Improving safety alignment with introspective reasoning. arXiv preprint arXiv:2502.02384, 2025. [43] Yuxiang Zheng, Dayuan Fu, Xiangkun Hu, Xiaojie Cai, Lyumanshan Ye, Pengrui Lu, and Pengfei Liu. DeepResearcher: Scaling deep research via reinforcemen... | https://arxiv.org/abs/2505.22411v1 |
and SEAL [7], for their ability to preserve the original accuracy in reasoning tasks. Below, we detail the specific settings for them: General Setting. All large reasoning models adopt the official recommended settings with a temperature of 0.6, top-p of 0.95, and a maximum length of 16k tokens. Dynasor. We adopt the ... | https://arxiv.org/abs/2505.22411v1 |
d ≫ k. With k = 10, I − P_M projects onto d − k ≈ d dimensions. The trace tr((I − P_M) C^(l)) = Σ_{i=k+1}^{d} λ_i^(l), scaled by 1/|D_redundant| + 1/|D_concise|, sums the eigenvalues in M⊥. Since 30% of the variance remains in the d − k dimensions, and d ≫ N, the trace is large, indicating significant noise in r_other. B.2 Proof of Theorem 4.2. Proof. We deri... | https://arxiv.org/abs/2505.22411v1 |
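A quick numerical check of the identity used above: for a covariance C with eigenvalues λ_i (descending) and P_M the projector onto the top-k eigenspace, tr((I − P_M) C) equals the sum of the trailing eigenvalues. The dimensions and synthetic covariance are toy assumptions.

```python
import numpy as np

d, k = 100, 10
X = np.random.randn(d, d)
C = X @ X.T / d                          # a synthetic covariance C^(l)
lam, V = np.linalg.eigh(C)               # eigh returns ascending eigenvalues
lam, V = lam[::-1], V[:, ::-1]           # sort descending
P_M = V[:, :k] @ V[:, :k].T              # projector onto top-k eigenspace
lhs = np.trace((np.eye(d) - P_M) @ C)
rhs = lam[k:].sum()                      # sum of eigenvalues in M-perp
print(np.isclose(lhs, rhs))              # True
```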
[Figure residue: plots of accuracy (%) and token counts versus steering strength α ∈ {0, 0.1, 0.3, 0.5, 0.7, 0.9} for R1-1.5B, R1-7B, R1-8B, and R1-14B; token counts fall steadily with α while accuracy stays near its baseline until large α.] | https://arxiv.org/abs/2505.22411v1 |
Scaling Reasoning without Attention. Xueliang Zhao♠⋆∗, Wei Wu⋆†, Lingpeng Kong♠†. ♠The University of Hong Kong, ⋆Ant Group. {xlzhao,lpk}@cs.hku.hk, wuwei19850318@gmail.com. Abstract. Large language models (LLMs) have made significant advances in complex reasoning tasks, yet they remain bottlenecked by two core challenges: archit... | https://arxiv.org/abs/2505.22425v1 |
and Gu, 2024], replacing traditional self-attention with state space dual (SSD) layers. This architectural choice delivers constant-time inference and fixed memory consumption while maintaining strong reasoning capabilities. We further enhance the model’s problem-solving abilities through a carefully designed two-phase... | https://arxiv.org/abs/2505.22425v1 |
e_t ∈ R^d denote its corresponding embedding, where d is the embedding dimension. A shared projection generates all time-dependent parameters: [a_t, b_t, c_t, u_t] = Linear(e_t), where a_t ∈ [0, 1] is a scalar decay factor, b_t, c_t ∈ R^N are the input and output kernel vectors, and u_t ∈ R^P is a projected feature vector. The core recurrence consis... | https://arxiv.org/abs/2505.22425v1 |
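The excerpt truncates before the recurrence itself. Below is a standard SSD-style state update consistent with the shapes it defines (an assumption, not necessarily the paper's exact form): the state S_t ∈ R^{N×P} decays by a_t and accumulates the outer product b_t u_tᵀ, and the output reads S_t with c_t, giving constant-time, fixed-memory inference per token.

```python
import numpy as np

N, P, T = 16, 64, 8                    # toy state/feature sizes and sequence length
rng = np.random.default_rng(0)

S = np.zeros((N, P))
for t in range(T):
    a_t = rng.uniform(0, 1)            # scalar decay factor a_t in [0, 1]
    b_t = rng.standard_normal(N)       # input kernel vector
    c_t = rng.standard_normal(N)       # output kernel vector
    u_t = rng.standard_normal(P)       # projected feature vector
    S = a_t * S + np.outer(b_t, u_t)   # fixed-memory recurrent state update
    y_t = c_t @ S                      # output in R^P
print(y_t.shape)                       # (64,)
```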
examples. The majority of training instances are generated using automatic synthesis pipelines based on NUMINA MATH [Li et al., 2024] and MAMMOTH [Yue et al., 2023, 2024], which produce high-quality examples spanning diverse reasoning domains. In the advanced phase, the core of our training strategy centers on data syn... | https://arxiv.org/abs/2505.22425v1 |
provided in the benchmark. For AIME 24 and 25, and LiveCodeBench, we report avg@k accuracy by sampling k = 16 and k = 8 generations respectively and averaging over the per-problem correctness. This protocol accounts for the inherent sampling variance in open-ended generation and aligns with practices from prior literature... | https://arxiv.org/abs/2505.22425v1 |
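A hedged sketch of the avg@k protocol described above: sample k generations per problem, score each for correctness, and average the per-problem accuracies. `generate` and `is_correct` are hypothetical stand-ins for the actual evaluation harness.

```python
def avg_at_k(problems, generate, is_correct, k=16):
    """avg@k: mean over problems of the fraction of k samples that are correct."""
    per_problem = []
    for prob in problems:
        samples = [generate(prob) for _ in range(k)]       # k independent samples
        per_problem.append(sum(is_correct(prob, s) for s in samples) / k)
    return sum(per_problem) / len(per_problem)             # mean over problems
```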
8.6. Gemma3-12B: 83.8, 22.9, 19.2, 49.9, 81.1, 73.2, 22.2; Gemma3-27B: 89.0, 32.6, 24.0, 54.2, 86.0, 78.0, 26.9; Nemotron-H-8B: 77.6, –, –, –, 79.3, 74.4, –; M1-3B: 81.7, 23.0, 22.0, 43.6, –, –, –; PromptCoT-Mamba-7B: 84.6, 35.2, 24.6, 50.7, 81.7, 75.0, 29.9. Table 2: Ablation results on AIME 24, AIME 25, and LiveCodeBench-v5. “- PromptCoT” removes the cur... | https://arxiv.org/abs/2505.22425v1 |
35.2, 24.6, 50.7, 81.7, 75.0, 29.9, and (3) removal of both PromptCoT and OpenCodeReasoning (“- PromptCoT & OCR”), using only OpenThoughts. The results show that the full PromptCoT-Mamba-7B achieves the best performance across all benchmarks, underscoring the importance of curriculum-driven synthesis for high-complexity t... | https://arxiv.org/abs/2505.22425v1 |
high-throughput inference under both memory-constrained and long-context conditions. The combination of architectural efficiency and curriculum-driven training positions it as a compelling alternative to attention-based models for scalable deployment in real-world scenarios. 5 Related Work 5.1 Reasoning with Large Lang... | https://arxiv.org/abs/2505.22425v1 |
Sean Narenthiran, Somshubra Majumdar, Aleksander Ficek, Siddhartha Jain, Jocelyn Huang, Vahid Noroozi, and Boris Ginsburg. OpenCodeReasoning: Advancing data distillation for competitive coding. arXiv preprint arXiv:2504.01943, 2025. AIME-2024. https://huggingface.co/datasets/ai-mo/aimo-validation-aime. Aaron Blakeman... | https://arxiv.org/abs/2505.22425v1 |
, 13:9, 2024. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint arXiv:2305.20050, 2023. Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by chatgpt... | https://arxiv.org/abs/2505.22425v1 |
Urmish Thakker, and Lingpeng Kong. Subgoalxl: Subgoal-based expert learning for theorem proving. arXiv preprint arXiv:2408.11172, 2024. Xueliang Zhao, Wei Wu, Jian Guan, and Lingpeng Kong. Promptcot: Synthesizing olympiad-level problems for mathematical reasoning in large language models. arXiv preprint arXiv:2503.0232... | https://arxiv.org/abs/2505.22425v1 |
arXiv:2505.22438v1 [cs.IT] 28 May 2025. Synonymous Variational Inference for Perceptual Image Compression. Zijian Liang1, Kai Niu1 2, Changshuo Wang1, Jin Xu1, Ping Zhang3. Abstract. Recent contributions of semantic information theory reveal the set-element relationship between semantic and syntactic information, represented as s... | https://arxiv.org/abs/2505.22438v1 |
2020; He et al., 2022b; Theis et al., 2022; Agustsson et al., 2023; Muckley et al., 2023; Hoogeboom et al., 2023; Xu et al., 2023; Careil et al., 2024) demonstrate the effectiveness of this new optimization direction and suggest that perceptual image compression (PIC) with high perceptual quality at low bitrates can ... | https://arxiv.org/abs/2505.22438v1 |
existing perceptual image compression schemes. 2. We establish Synonymous Image Compression (SIC), a new image compression scheme that corresponds to the analytical process of SVI. By solely encoding the latent synonymous representation partially, SIC interprets this information as an equivalent quantized latent sy... | https://arxiv.org/abs/2505.22438v1 |
model using the following loss function form: L_RDP = λ_r · I(X; X̂) + λ_d · E_{x,x̂∼p(x,x̂)}[d(x, x̂)] + λ_p · d_p(p_x, p_x̂). (4) 2.3. Semantic Information Theory. As the optimization towards perceptual quality is more inclined to the accuracy of conveying meaning (the semantic problem) instead of the symbol-level accuracy that classi-... | https://arxiv.org/abs/2505.22438v1 |
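A hedged sketch of the rate-distortion-perception objective in Eq. (4): a rate term stands in for λ_r·I(X; X̂), MSE stands in for the distortion d(x, x̂), and `perceptual` stands in for λ_p·d_p(p_x, p_x̂) (e.g., an adversarial or LPIPS-style divergence proxy). The weights and proxies are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def rdp_loss(rate_bits, x, x_hat, perceptual, lam_r=1.0, lam_d=1.0, lam_p=0.1):
    """Weighted sum of rate, distortion, and perception terms, as in Eq. (4)."""
    distortion = np.mean((x - x_hat) ** 2)            # d(x, x_hat) as MSE
    return lam_r * rate_bits + lam_d * distortion + lam_p * perceptual

x, x_hat = np.ones((8, 8)), np.ones((8, 8)) * 0.9
print(rdp_loss(rate_bits=0.3, x=x, x_hat=x_hat, perceptual=0.05))
```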
the original image can be ¹Referred to as Ũ in Niu and Zhang’s paper (2024). The ring hat symbol “˚” is applied to distinguish it from the tilde symbol “˜” commonly used in variational inference. ²The relationship between the partial semantic KL divergence and the standard KL divergence satisfies D_KL,s[q||p_s] ≤ D_KL[q... | https://arxiv.org/abs/2505.22438v1 |
density q(ỹ|x) works at the syntactic level, while the true posterior p_{ỹs|X}(ỹs|X) operates at the semantic level, represented in the form of synsets. However, since ỹ can be decomposed into a combination or concatenation of a synonymous representation ỹs and a detailed representation ỹϵ, it is possible to effective... | https://arxiv.org/abs/2505.22438v1 |
a determined inference and generative model, the coding rate of the synonymous representation E_{x∼p(x)} E_{ỹ∼q}[−log p_{ỹs}(ỹs)] is equal to I(X; ˆ˚X), as stated in Appendix A.2. By Lemma 3.2, the minimization of the second term is equivalent to minimizing a weighted expected distortion E_{x∼p(x)} E_{ỹ∼q} E_{x̃i∈X̃|ỹs}[d(x, x̃i)] plus... | https://arxiv.org/abs/2505.22438v1 |
model can be optimized for approaching different ideal synsets X(l). After training, synonymous representations at each level can be encoded progressively and fed to the decoder, making SIC a progressive image codec that can produce images at diverse synonymous levels (corresponding to varying coding rates) using a s... | https://arxiv.org/abs/2505.22438v1 |
mechanism. We set the number of latent representation channels to C = 512 and the number of equally partitioned synonymous levels to L = 16, giving each synonymous level a channel dimension of 32. This allows a single progressive SIC codec to support 16 coding rates and their corresponding image quality levels. Mode... | https://arxiv.org/abs/2505.22438v1 |
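A minimal sketch of the progressive channel partition described above: C = 512 latent channels split evenly into L = 16 synonymous levels of 32 channels each, so coding at level l transmits the first l channel groups. The helper name and indexing convention are assumptions.

```python
C, L = 512, 16
group = C // L                                        # 32 channels per level

def channels_at_level(l):
    """Channel indices transmitted when coding at synonymous level l (1..L)."""
    return list(range(l * group))

print(len(channels_at_level(7)))                      # 224 of 512 channels
```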
the comparison No-GAN schemes, and the LPIPS quality remains very similar, thus verifying a comparable rate-distortion-perception performance, shown as Figure 11 in Appendix C.2. This surpassing reflects the advantage of incorporating the concept of synset in our proposed SVI. However, this advantage is modest com... | https://arxiv.org/abs/2505.22438v1 |
[Figure residue: DISTS (↓) versus bits per pixel (BPP) on DIV2K validation and Kodak, comparing HiFiC, MS-ILLM (with GAN and No-GAN), and Progressive SIC (with GAN and No-GAN, M = 1 and M = 5).] | https://arxiv.org/abs/2505.22438v1 |
results demonstrate full-rate rate-distortion-perception performance and notable advantages on DISTS, thereby verifying the effectiveness of our proposed analysis method. Software and Data. We will upload code for reproducing our results to the repository at https://github.com/ZJLiang6412/SynonymousImageCompressi... | https://arxiv.org/abs/2505.22438v1 |
Blau, Y. and Michaeli, T. Rethinking lossy compression: The rate-distortion-perception tradeoff. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning (ICML), pp. 675–685. PMLR, 2019. Careil, M., Muckley, M. J., Verbeek, J., and Lathuilière, S. Towards i... | https://arxiv.org/abs/2505.22438v1 |
image compression with score-based generative models. arXiv preprint arXiv:2305.18231, 2023. Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013. Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Kolesn... | https://arxiv.org/abs/2505.22438v1 |
A coding theorem for the rate-distortion-perception function. In Neural Compression: From Information Theory to Applications – Workshop @ ICLR 2021, 2021. Theis, L., Salimans, T., Hoffman, M. D., and Mentzer, F. Lossy compression with Gaussian diffusion. arXiv preprint arXiv:2206.08889, 2022. Thomas, M. and Joy, A. ... | https://arxiv.org/abs/2505.22438v1 |
probability or the integral of the density of each sample within the set. Herein, we consider the integral form because image samples within an ideal synset can typically be transformed into one another through continuous changes. Based on the above factors, we can expand the expression on the left side of (17) as foll... | https://arxiv.org/abs/2505.22438v1 |
the dimension of the original image x), this term will be equivalent to E_{x∼p(x)} E_{ỹ∼q} E_{x̃j∈X̃|ỹs}[−log p_{x|x̃j}(x|x̃j)] = (1/(2σ²)) · E_{x∼p(x)} E_{ỹ∼q} E_{x̃j∈X̃|ỹs}[||x − x̃j||²] + (d/2) log(2πσ²), (22) in which σ² is the variance term of the set Gaussian distribution, i.e., the power of the quantization noise. In this case, the term can be c... | https://arxiv.org/abs/2505.22438v1 |
can be further derived as E_{x∼p(x)} E_{ỹ∼q} E_{x̃j∈X̃|ỹs}[log(p_x(x)/p_{x̃j}(x̃j))] = E_{ỹ∼q} E_{x̃j∈X̃|ỹs}[E_{x∼p(x)} log(p_x(x)/p_{x̃j}(x̃j))] = E_{ỹ∼q} E_{x̃j∈X̃|ỹs}[D_KL(p_x || p_{x̃j})], (26) i.e., an Expected KL Divergence (E-KLD) between the distribution p_x of the original image and the distribution p_{x̃j} of the reconstructed sample, averaged over the re... | https://arxiv.org/abs/2505.22438v1 |
semantic variable ˆ˚X corresponds to the reconstructed synset X̂. Proof. The key to proving this claim is to consider an ideal scenario in which there are multiple image samples x_i at the source with similar perceptual similarities to the original image x. In this scenario, each sample can be assumed to be potentia... | https://arxiv.org/abs/2505.22438v1 |
of the ideal synset X for the original image x, it is a constant that cannot be optimized. To summarize, the minimization of (32) is equivalent to the following optimization directions: L_X = λ_d · E_{x∼p(x)} E_{ỹ∼q} E_{x̃i∈X̃|ỹs}[d(x, x̃i)] + λ_p · E_{ỹ∼q} E_{x̃i∈X̃|ỹs}[D_KL(p_x || p_{x̃i})] + E_{x∼p(x)} E_{ỹ∼q}[−log p_{ỹs}(ỹs)]. (36) At the convergence point, ... | https://arxiv.org/abs/2505.22438v1 |
mutual information, this can be expressed as I(X; ˆ˚X) ≤ I(X; X̂), in which the condition for the inequality to hold as equality is that the reconstruction of the synonym set is restricted to producing only one sample, meaning the decoder is not allowed to sample ŷϵ,j, nor is it permitted to use ŷϵ,j as input to the g... | https://arxiv.org/abs/2505.22438v1 |
DISTS, instead of the KL divergence calculation, or using adversarial losses in the GAN training process, such as Wasserstein loss, as a substitute for KL divergence. B. Relevant Thoughts on Semantic Information Theory In this appendix section, we will briefly provide relevant thoughts on semantic information theory ba... | https://arxiv.org/abs/2505.22438v1 |
SIC model should also serve as an upper bound for the lower semantic mutual information, expressed as: E_{x∼p(x)}[−log p(ŷs)] (a)= H_s(ˆ˚Y) (b)= H_s(ˆ˚X) (c)= H_s(ˆ˚X) − H_s(ˆ˚X|X) (d)= H(X) + H_s(ˆ˚X) − H_s(X, ˆ˚X) (e)≥ H_s(˚X) + H_s(ˆ˚X) − H(X, X̂) = I_s(˚X; ˆ˚X), (43) in which the established conditions of (a)∼(c) are the same as the condit... | https://arxiv.org/abs/2505.22438v1 |
of the progressive SIC model and supplementary results for Section 5. C.1. Implementation details The auto-encoder architecture, including an analysis transform as the encoder and a synthesis transform as the decoder, is implemented based on the Swin Transformer (Liu et al., 2021). The implementation details are shown ... | https://arxiv.org/abs/2505.22438v1 |
are employed to integrate the inputs (µ_s^h, σ_s^h) and (µ_s^c, σ_s^c) into an accurate output estimate (µ_s, σ_s). And for the detailed representation ŷϵ, a uniform sampling based on the following equation is utilized: ŷϵ,j = Q(µ_ϵ^h + U(−2, 2)), (46) in which the uniform distribution U(−2, 2) is set empirically. We realize that this sampli... | https://arxiv.org/abs/2505.22438v1 |
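A hedged sketch of the sampling rule in Eq. (46): perturb the hyper-decoded mean of the detailed representation with the empirically chosen uniform noise U(−2, 2), then pass the result through the quantizer Q (rounding here is an assumption about Q).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_detail(mu_eps_h):
    """y_eps_j = Q(mu_eps_h + U(-2, 2)), with Q assumed to be rounding."""
    noise = rng.uniform(-2.0, 2.0, size=mu_eps_h.shape)   # U(-2, 2)
    return np.round(mu_eps_h + noise)                     # quantizer Q

print(sample_detail(np.zeros(5)))
```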
1664, 1792, 1920, 2048; λ_d^(l): 2^{31/8}, 2^{30/8}, 2^{29/8}, 2^{28/8}, 2^{27/8}, 2^{26/8}, 2^{25/8}, 2^{24/8}; λ_p^(l): 2^{21/8}, 2^{18/8}, 2^{15/8}, 2^{12/8}, 2^{9/8}, 2^{6/8}, 2^{3/8}, 2^{0/8}. For l = 1, 2, ···, L, each synonymous level l needs to learn different levels of information during model training. However, due to the limitations of computing resources during training, it is not possible to cover all... | https://arxiv.org/abs/2505.22438v1 |
results for Figure 4, which are quality assessment measures corresponding to the distortion and perceptual terms in the training loss function. As shown in the figure, for the distortion evaluation measure PSNR, as the coding rate increases, PSNR progressively approaches the performance of No-GAN MS-ILLM and even that ... | https://arxiv.org/abs/2505.22438v1 |
0.1021; Rate = 0.3281 bpp / DISTS = 0.0935. Figure 17. Visualization comparison of reconstructed images at synonymous level l = 7 using progressive SIC with M = 1 and M = 5. Image from the Kodak dataset. [Architecture-diagram residue: Conv/LReLU layer stack of the network.] ... | https://arxiv.org/abs/2505.22438v1 |
all synonymous levels. D.2. Supplementary Results. Figure 19 shows the performance of the fine-tuned model (labeled as “with GAN”) using PSNR, LPIPS, DISTS, and FID, compared with the original model (labeled as “no-GAN”) and other comparison schemes ac... | https://arxiv.org/abs/2505.22438v1 |
arXiv:2505.22441v1 [cs.CV] 28 May 2025. Can NeRFs See without Cameras? Chaitanya Amballa1, Sattwik Basu1∗, Yu-Lin Wei1∗, Zhijian Yang2, Mehmet Ergezer2, Romit Roy Choudhury1,2. 1University of Illinois Urbana-Champaign, 2Amazon. Abstract. Neural Radiance Fields (NeRFs) have been remarkably successful at synthesizing novel views of 3D s... | https://arxiv.org/abs/2505.22441v1 |
As measurements, we use the received signal power. Thus, the input to our EchoNeRF model is the transmitter (Tx) location, a sequence of known receiver (Rx) locations, and the signal power measured at each Rx location. The output of EchoNeRF is an (implicitly learnt) floorplan of the indoor space. We expect to visual... | https://arxiv.org/abs/2505.22441v1 |
predict the wireless channel impulse response (CIR) [35] at unknown locations inside a room. Drawing a parallel to optical NeRFs, a voxel’s color in optics becomes a voxel’s transmit power in wireless. The voxel’s density in optics remains the same in wireless, modeling how that voxel attenuates signals passing throug... | https://arxiv.org/abs/2505.22441v1 |
up from all possible directions in the environment. Hence, EchoNeRF must solve a many-component decomposition problem by leveraging the physics of multipath signal propagation. 3 EchoNeRF. Model Setup and Overview. At a Rx location, we model the received signal power ψ as ψ = ψ_LoS + ψ_ref1 + ··· + ψ_refn, where ψ_LoS is the power from t... | https://arxiv.org/abs/2505.22441v1 |
the received power at Rx due to the signal that reflected off voxel v_j: ψ_ref(v_j) = δ_j f(θ, β) [∏_{k∈{Rx:v_j}} (1−δ_k)] [∏_{l∈{v_j:Tx}} (1−δ_l)] / (d_{Tx:v_j} + d_{v_j:Rx})². (2) Let us explain this equation briefly. The leading δ_j ensures that voxel v_j is not a reflector when δ_j = 0. The f(θ, β) term models the wave-surface interactions, i.e., how signal... | https://arxiv.org/abs/2505.22441v1 |
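A hedged sketch of Eq. (2): the power received after reflecting off voxel v_j is its reflectance δ_j times a surface-interaction factor f(θ, β), attenuated by the transparencies (1 − δ) of voxels on both path segments and by the squared total path length. All argument values below are illustrative.

```python
import numpy as np

def reflected_power(delta_j, f_theta_beta, deltas_rx_path, deltas_tx_path,
                    d_tx_vj, d_vj_rx):
    """psi_ref(v_j) per Eq. (2): reflectance x surface term x transmittance / distance^2."""
    transmittance = (np.prod(1 - np.asarray(deltas_rx_path))
                     * np.prod(1 - np.asarray(deltas_tx_path)))
    return delta_j * f_theta_beta * transmittance / (d_tx_vj + d_vj_rx) ** 2

print(reflected_power(0.8, 0.5, [0.1, 0.0], [0.05], d_tx_vj=3.0, d_vj_rx=4.0))
```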
model in Stage 2, which refines the learned voxel densities and orientation. much smaller compared to LoS. We formalize this explanation below by considering the LoS and reflection losses individually⁴: L_LoS = ||ψ̃_LoS − ψ*_LoS||²₂, L_ref = ||ψ̃_ref − ψ*_ref||²₂. Consider the gradient of L_LoS w.r.t. the density of v_i: ∇_{δ_i} L_LoS = 2(ψ̃_LoS − ... | https://arxiv.org/abs/2505.22441v1 |
same as in Stage 1. ■ Regularization: Floorplans demonstrate significant local similarity in orientation, hence we penalize differences in orientation among neighbors, using a regularization (Eq. 8) similar to Total Variation [29]. This can be achieved without additional computational cost to the neural network by dir... | https://arxiv.org/abs/2505.22441v1 |
(C) RSSI Prediction Error (RPE): We split all Rx locations into a training and test set. RPE reports the average median RSSI error over all the test locations across floorplans. 4.1 Overall Summarized Results. Table 1 reports comparative results between EchoNeRF and baselines, averaged over 20 different experiments, usin... | https://arxiv.org/abs/2505.22441v1 |
to construct the bottom of the left wall in this floorplan. Finally, note that areas outside the floorplan (e.g., the regions on the right of the 6th floorplan) cannot be estimated correctly since no measurements are available from those regions (hence, those voxels do not influence the gradients). ■ RSSI prediction. Figure ... | https://arxiv.org/abs/2505.22441v1 |
Follow-ups and Conclusion. Follow-ups. (1) The ability to model 2nd-order reflections will boost EchoNeRF’s accuracy, allowing it to sharpen the scene and decode smaller objects. For short-range applications, such as non-intrusive medical imaging, 2nd- and 3rd-order reflections would be crucial. This remains an important di... | https://arxiv.org/abs/2505.22441v1 |
3d scene reconstruction with the manhattan-world assumption. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 5511–5520, June 2022. [12] Yuan-Chen Guo, Di Kang, Linchao Bao, Yu He, and Song-Hai Zhang. Nerfren: Neural radiance fields with reflections. In Proceedings of ... | https://arxiv.org/abs/2505.22441v1 |
I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena , 60(1):259–268, 1992. [30] Viktor Rudnev, Mohamed Elgharib, William Smith, Lingjie Liu, Vladislav Golyanik, and Christian Theobalt. Nerf for outdoor scene relighting. In European Conferenc... | https://arxiv.org/abs/2505.22441v1 |
and delayed impulses as shown in Eqn. 9: h(t) = Σ_{i=1}^{N} a_i e^{jϕ_i} δ(t − τ_i), (9) where N is the number of multipath components, a_i denotes the amplitude (attenuation factor) of the i-th path, ϕ_i represents the phase shift of the i-th path, and τ_i is the delay of the i-th path. For an input signal x(t) transmitted through the channel ... | https://arxiv.org/abs/2505.22441v1 |
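A hedged sketch of the multipath CIR in Eq. (9): the channel output is the input convolved with a sum of scaled, phase-shifted, delayed impulses. The sample rate and path parameters below are illustrative assumptions.

```python
import numpy as np

fs = 1e6                                         # sample rate (Hz), assumed
amps = np.array([1.0, 0.5, 0.2])                 # a_i
phases = np.array([0.0, np.pi / 4, np.pi / 2])   # phi_i
delays = np.array([0.0, 2e-6, 5e-6])             # tau_i (seconds)

# Build a discrete-time CIR: each path contributes a_i e^{j phi_i} at delay tau_i.
n_taps = int(delays.max() * fs) + 1
h = np.zeros(n_taps, dtype=complex)
for a, phi, tau in zip(amps, phases, delays):
    h[int(round(tau * fs))] += a * np.exp(1j * phi)

x = np.random.randn(100)                         # sampled input signal x(t)
y = np.convolve(x, h)                            # received signal y = x * h
print(y.shape)                                   # (100 + n_taps - 1,)
```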
more spread out, ψ_LoS + ψ_ref1 accounts for 95% of the total power, with a reduced spread. Moreover, secondary reflections ψ_ref2 contribute less than 3% of the total power. Hence, EchoNeRF models the first-order reflections along with the line-of-sight. ⁵We use * to denote complex conjugate. Figure 9: Histograms il... | https://arxiv.org/abs/2505.22441v1 |
been collected from approximately 2000 Rxs positioned in the floorplan, with data gathered from five Txs. E.1 Linear-Scale RSSI Loss: For the training of EchoNeRF, we optimize on the linear-scale RSSI values. Linear loss ensures that the receivers that capture stronger signals are given more importance during training... | https://arxiv.org/abs/2505.22441v1 |
k=10) while training the reflection model. We use the Adam optimizer [17] with a 10⁻⁴ learning rate. We train our models on NVIDIA A100 GPUs. Figure 13: Comparison of ground-truth Tx locations indicated in red in the first column with the estimated Tx locations shown in blue starting from column two. The Rx positio... | https://arxiv.org/abs/2505.22441v1 |
[Figure column labels: Heatmap Seg., NeRF2, EchoNeRF_LoS, EchoNeRF, EchoNeRF Signal Prediction.] Figure 15: Additional qualitative comparisons of ground-truth floorplans against those inferred by baselines Heatmap Segmentation and NeRF2. The 4th and 5th rows show floorplans by our proposed models EchoNeRF_LoS and EchoNeRF with clearly identified walls an... | https://arxiv.org/abs/2505.22441v1 |
arXiv:2505.22442v1 [cs.LG] 28 May 2025. SOReL and TOReL: Two Methods for Fully Offline Reinforcement Learning. Mattie Fellows∗,1, Clarisse Wibault∗,1,2, Uljad Berdica1, Johannes Forkel1, Jakob N. Foerster1, Michael A. Osborne2. 1Foerster Lab for AI Research (FLAIR), 2Machine Learning Research Group, Department of Engineering Science, U... | https://arxiv.org/abs/2505.22442v1 |
[Figure residue: diagram contrasting (a) existing model-based offline RL with (b) the proposed fully offline pipeline: offline data bank → Bayesian world model → posterior over environment dynamics → approximate predictive regret; if the regret is low enough, deploy the agent, otherwise retune/change the model or use more data, all offline.] | https://arxiv.org/abs/2505.22442v1 |
that UCB typically requires about a dataset’s worth of online samples to match TOReL’s performance. We summarise our key contributions: I. In Section 4, we develop a Bayesian framework for model-based offline RL; II. In Section 5 we carry out a regret analysis for our framework, demonstrating regret is controlled by the P... | https://arxiv.org/abs/2505.22442v1 |
a priori. Once deployed, the agent faces the exploration/exploitation dilemma: it must balance exploring to learn about the unknown environment dynamics with exploiting. In offline RL [41, 44, 49], an agent has access to a dataset of histories of various lengths collected from the true environment. The po... | https://arxiv.org/abs/2505.22442v1 |
data. Finally, understanding of offline RL from a Bayesian perspective is limited. To the authors’ knowledge, only Chen et al. [15] have framed solving offline model-based RL as solving a BAMDP; however, no regret analysis of the Bayes-optimal policy is carried out, and a continuous BAMCP [28] approximation is used to lear... | https://arxiv.org/abs/2505.22442v1 |
that a simple RL² [20]-style algorithm can be applied: J^π_Bayes(P̂_Θ(D_N)) := E_{θ∼P̂_Θ(D_N)}[E_{h∞∼P_∞,π(θ)}[Σ_{i=0}^{∞} γ^i r_i]]. (2) Solving Eq. (2) is known as solving a Bayes-adaptive MDP (BAMDP) [21]. We optimise the objective in Eq. (2) by sampling a hypothesis environment from the approximate posterior θ ∼ P̂_Θ(D_N), then rolling out t... | https://arxiv.org/abs/2505.22442v1 |
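A hedged sketch of the Monte Carlo estimate of Eq. (2): draw environment hypotheses θ from the approximate posterior, roll the policy out in each sampled model, and average the discounted returns. `sample_posterior`, `step`, and `policy` are hypothetical stand-ins for the actual world model and agent, and the rollout is truncated at a finite horizon.

```python
import numpy as np

def j_bayes(sample_posterior, policy, step, s0, gamma=0.99, n_models=8, horizon=200):
    """Monte Carlo estimate of J_Bayes by posterior sampling and model rollouts."""
    returns = []
    for _ in range(n_models):
        theta = sample_posterior()               # theta ~ P_Theta(D_N)
        s, ret, disc = s0, 0.0, 1.0
        for _ in range(horizon):                 # truncated rollout in the model
            a = policy(s)
            s, r = step(theta, s, a)             # hypothesis-environment dynamics
            ret += disc * r
            disc *= gamma
        returns.append(ret)
    return np.mean(returns)
```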
how much errors in the model influence the regret at each state. Regions of state-action space that require more timesteps to reach from initial states are weighted significantly less than those that are encountered earlier and more frequently, as state errors encountered early accumulate in each prediction from that t... | https://arxiv.org/abs/2505.22442v1 |
¹Theorem 2 applies to the Gaussian world model introduced in Section 5.3 with neural network mean functions with C²-continuous activations (tanh, identity, sigmoid, softplus, SiLU, SELU, GELU, ...) using a Gaussian or uniform prior truncated to a compact parameter space, and similarly well-behaved parametric models. The O... | https://arxiv.org/abs/2505.22442v1 |
the predictive variance as: V(D_N) := E_{(s,a)∼ρ*_π}[E_{θ∼P_Θ(D_N)}[ ||r(s, a, D_N) − r_θ(s, a)||²₂ / (2σ²_r(s, a)) + ||s′(s, a, D_N) − s′_θ(s, a)||²₂ / (2σ²_s(s, a)) ]]. (8) We now re-write the PIL for the Gaussian world model using these two terms: Proposition 1. Using the Gaussian world model in Eq. (6), it follows: I^π_N = E(D_N, M*) + V(D_N). (9) Eq. (9)... | https://arxiv.org/abs/2505.22442v1 |
in the model, especially as γ → 1, which is an artifact of model errors accumulating over all future timesteps in the regret analysis. Instead, we approximate the regret using the posterior predictive median: Regret(M*, D_N) ≈ R̂_max − M̂_{θ∼P_Θ(D_N), h∞∼P^π_∞(θ)}[R(h∞)], (10) where M̂_{θ∼P_Θ(D_N), h∞∼P^π_∞(θ)}[R(h∞)] denotes the median predic... | https://arxiv.org/abs/2505.22442v1 |
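A hedged sketch of the regret proxy in Eq. (10): estimate the posterior predictive return distribution by rolling the policy out in sampled models, then subtract the *median* return from R̂_max (the median being more robust than worst-case statistics that grow overly conservative as γ → 1). `sample_posterior` and `rollout_return` are hypothetical stand-ins.

```python
import numpy as np

def approx_regret(sample_posterior, rollout_return, r_max_hat, n_samples=64):
    """Regret(M*, D_N) ≈ R_max_hat - median of sampled returns, per Eq. (10)."""
    returns = np.array([rollout_return(sample_posterior())
                        for _ in range(n_samples)])        # samples of R(h_inf)
    return r_max_hat - np.median(returns)
```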
approximation method to more general methods in TOReL will not yield an accurate estimate of the regret in terms of its absolute value . Instead, we treat the approximate regret in Eq. (10) as a regret metric that is positively correlated with true regret, and use this to tune ORL parameters ϕIII. Our empirical evaluat... | https://arxiv.org/abs/2505.22442v1 |
prior to deployment. We also highlight the generalisability of our algorithm: while the policy used to collect the halfcheetah dataset achieves an expected episodic return of around 1800 (Fig. 11 in Appendix E), SOReL’s policy (learned on a subset of the offline dataset) achieves a normalised regret of around 0.28 in t... | https://arxiv.org/abs/2505.22442v1 |
Task (Algo. = ReBRAC), columns: Oracle, TOReL, Oracle Mean, TOReL Mean, True. brax-halfcheetah-full-replay: 0.089, 0.089, 0.262, 0.264, 0.417; brax-hopper-full-replay: 0.070, 0.070, 0.193, 0.209, 0.554; brax-walker-full-replay: 0.000, 0.000, 0.241, 0.317, 0.425; d4rl-halfcheetah-medium-expert-v2: 0.000, 0.036, 0.176, 0.268, 0.336; d4rl-hopp... | https://arxiv.org/abs/2505.22442v1 |
Jul 2021. URL https://proceedings.mlr.press/v139/ball21a.html. 1, 3 [6] Andrew R. Barron. Information-theoretic characterization of Bayes performance and the choice of priors in parametric and nonparametric problems. In Bayesian Statistics 6: Proceedings of the Sixth Valencia International Meeting, June 6–10, 1998. Oxfo... | https://arxiv.org/abs/2505.22442v1 |
and Shixiang (Shane) Gu. A minimalist approach to offline reinforcement learning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 20132–20145. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc... | https://arxiv.org/abs/2505.22442v1 |
valuation problems on the Witwatersrand. Journal of the Chemical, Metallurgical and Mining Society of South Africa, 52:119–139, 1951. 4.1, B [40] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. F.... | https://arxiv.org/abs/2505.22442v1 |
[56] Gareth O. Roberts and Jeffrey S. Rosenthal. General state space Markov chains and MCMC algorithms. Probability Surveys, 1:20–71, 2004. doi: 10.1214/154957804100000024. URL https://doi.org/10.1214/154957804100000024. D.3 [57] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. P... | https://arxiv.org/abs/2505.22442v1 |
RL with linear function approximation?, 2020. URL https://arxiv.org/abs/2010.11895. 3 [72] Norbert Wiener. Differential-space. Journal of Mathematics and Physics, 2(1–4):131–174, 1923. doi: https://doi.org/10.1002/sapm192321131. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/sapm192321131. 4.1, B [73] Yuhong ... | https://arxiv.org/abs/2505.22442v1 |
can easily be generalised to non-parametric methods like Gaussian process regression [55, 72, 39]. A prior distribution over the parameter space P_Θ is specified, which represents the initial a priori belief in the true value of P*_{R,S}(s, a) before the agent has observed any transitions. Priors are a powerful aspect of Bay... | https://arxiv.org/abs/2505.22442v1 |
{0, 1, . . . , N−1}. Our final line means equality up to a constant, as we can ignore the (D/2) log(2π) term for optimisation because it is independent of θ. We use RP ensembles for our approximate posterior [51, 16]; here an ensemble of M separate model weights {θ_0, θ_1, . . . , θ_{M−1}} are randomly initialised and are optimised i... | https://arxiv.org/abs/2505.22442v1 |
PIL, the space spanned by the approximate posterior via the model ensemble is approximately large enough, relative to the model error. Below we order different ensemble statistics from least to most conservative: Regret(M*, D_N) ≈ R̂_max − R̂_{θ∼P_Θ(D_N), h∞∼P^π_∞(θ)}[R(h∞)] ≤ R̂_max − Ê_{θ∼P_Θ(D_N), h∞∼P^π_∞(θ)}[R(h∞)] ≈ R̂_max − M̂_{θ∼P_Θ(D_N), h∞∼P^π_∞(... | https://arxiv.org/abs/2505.22442v1 |
from J^{π*}_Bayes(P_Φ(D_N)) ≤ J^{π*_Bayes}_Bayes(P_Φ(D_N)) by definition. Now our goal is to bound |J^π(M*) − J^π_Bayes(P_Θ(D_N))|: |J^π(M*) − J^π_Bayes(P_Θ(D_N))| = |E_{h∞∼P*_∞,π}[Σ_{i=0}^{∞} γ^i r_i] − E_{h∞∼P^π_∞(D_N)}[Σ_{i=0}^{∞} γ^i r_i]| = |Σ_{i=0}^{∞} γ^i E_{h_{i+1}∼P*_{i+1},π}[r_i] − Σ_{i=0}^{∞} γ^i E_{h_{i+1}∼P^π_{i+1}(D_N)}[r_i]| = |Σ_{i=0}^{∞} γ^i (E_{h_{i+1}∼P*_{i+1},π}[r_i] − E_{h_{i+1}∼P^π_{i+1}(D_N)}[r_i])| ≤ Σ_{i=0}^{∞} γ^i |E_{h_{i+1}∼P*_... | https://arxiv.org/abs/2505.22442v1 |
the TV distance terms using the KL divergence: Regret(M*, D_N) ≤ 2R_max · sup_π E_{i∼G(γ)}[TV(P*_{i+1,π} ∥ P^π_{i+1}(D_N))] ≤ 2R_max · sup_π E_{i∼G(γ)}[√(1 − exp(−KL(P*_{i+1,π} ∥ P^π_{i+1}(D_N))))]. (17) We make two observations. Firstly, as the KL divergence is convex in its second argument and P_{i+1,π}(D_N) = E_{θ∼P_Θ(D_N)}[P^π_{i+1}(θ)], we can bound eac... | https://arxiv.org/abs/2505.22442v1 |
(||r_θ(s, a) − r||²₂ − ||r*(s, a) − r||²₂) / (2σ²_r) + (||s′_θ(s, a) − s′||²₂ − ||s*′(s, a) − s′||²₂) / (2σ²_s)] = E_{r,s′∼P*_{R,S}(s,a)}[(r_θ(s, a)² − 2r·r_θ(s, a) − r*(s, a)² + 2r·r*(s, a)) / (2σ²_r) + (||s′_θ(s, a)||²₂ − 2s′ᵀs′_θ(s, a) − ||s*′(s, a)||²₂ + 2s′ᵀs*′(s, a)) / (2σ²_s)] = (r_θ(s, a)² − 2r*(s, a)r_θ(s, a) − r*(s, a)² + 2r*(s, a)²) / (2σ²_r) + (||s′_θ(s, a)||²₂ − 2s*′(s, a)ᵀs′_θ(s, a) − ... | https://arxiv.org/abs/2505.22442v1 |
a, θ*_i)ᵀ with ||Σ^g_i|| < ∞. Our assumptions are mild. Assumption 1(i) is our strictest assumption; however, our theory should not deviate from practice if the model space is slightly misspecified. Moreover, model capacity can always be increased and tuned if misspecification is affecting convergence. In addition to introduc... | https://arxiv.org/abs/2505.22442v1 |