Upload folder using huggingface_hub
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/appendix_chunks.jsonl +208 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/appendix_text_v3.txt +623 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/assets.json +24 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/assets/_page_1_Figure_1.jpeg +3 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/assets/_page_4_Figure_8.jpeg +3 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/chunks_v3_anonymized.jsonl +0 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/dataset_meta.json +61 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/main_body_chunks.jsonl +101 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/marker_meta.json +2512 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/model_text_v3.txt +302 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/paper.blocks.json +0 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/paper.md +0 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/parse_report.json +76 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/reference_chunks.jsonl +4 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/reference_text_v3.txt +11 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/sanitization_report.json +59 -0
- icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/sanitized_v3.txt +658 -0
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/appendix_chunks.jsonl
ADDED
@@ -0,0 +1,208 @@
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0107", "section": "A. BSZO Basic Algorithm", "page_start": 10, "page_end": 10, "type": "Text", "text": "We provide a basic version of BSZO without caching optimization, which supports arbitrary sampling directions when m > k.", "source": "marker_v2", "marker_block_id": "/page/9/Text/2"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0108", "section": "Algorithm 2 Bayesian Subspace Zeroth-Order Optimization (Basic Version)", "page_start": 10, "page_end": 10, "type": "Code", "text": "Input: parameters \\theta, learning rate \\eta, perturbation scale \\varepsilon, subspace dimension k, sampling steps m, prior variance \\sigma_p^2, noise variance \\sigma_e^2, smoothing factor \\alpha, max step T for t = 1 to T do Sample k random seeds \\{s_i\\}_{i=1}^k Initialize \\mu \\leftarrow \\mathbf{0}_k, \\Sigma \\leftarrow \\sigma_p^2 I_k, f_0 \\leftarrow \\mathcal{L}(\\theta) for \\tau = 1 to m do d \\leftarrow d_{\\tau} \\text{ if } \\tau \\leq k, \\text{ else } d \\leftarrow \\arg \\max_{\\|v\\|=1} v^{\\top} \\Sigma v for i = 1 to k do \\theta \\leftarrow \\theta + \\varepsilon \\cdot d_i \\cdot \\text{RANDN}(n, s_i) \\text{ if } d_i > 10^{-10} end for y \\leftarrow (\\mathcal{L}(\\theta) - f_0)/\\varepsilon for i = 1 to k do \\theta \\leftarrow \\theta - \\varepsilon \\cdot d_i \\cdot \\text{RANDN}(n, s_i) \\text{ if } d_i > 10^{-10} \\begin{aligned} r &\\leftarrow (y - d^{\\top} \\mu) / \\|d\\|, \\quad \\sigma_e^2 \\leftarrow (1 - \\alpha) \\sigma_e^2 + \\alpha r^2 \\\\ K &\\leftarrow \\Sigma d / (d^{\\top} \\Sigma d + \\sigma_e^2) \\end{aligned} \\mu \\leftarrow \\mu + K(y - d^{\\mathsf{T}}\\mu), \\quad \\Sigma \\leftarrow \\Sigma - Kd^{\\mathsf{T}}\\Sigma end for for i = 1 to k do \\theta \\leftarrow \\theta - \\eta \\cdot \\mu_i \\cdot \\text{RANDN}(n, s_i) end for end for return \\theta RANDN(n, s): returns n-dim Gaussian vector seeded by s", "source": "marker_v2", "marker_block_id": "/page/9/Code/4"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0109", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Text", "text": "Lemma B.1 (Expectation of Direction Derivative). Under Assumption 3.1, the one-sided difference satisfies:", "source": "marker_v2", "marker_block_id": "/page/9/Text/7"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0110", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Equation", "text": "\\mathbb{E}[\\hat{y}(d)] = d^{\\mathsf{T}}B^{\\mathsf{T}}g + O(\\varepsilon L) \\tag{17}", "source": "marker_v2", "marker_block_id": "/page/9/Equation/8"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0111", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Text", "text": "Proof. By \\mathbb{E}[\\mathcal{L}(\\theta;\\xi)] = \\mathcal{L}(\\theta) :", "source": "marker_v2", "marker_block_id": "/page/9/Text/9"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0112", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Equation", "text": "\\mathbb{E}[\\hat{y}(d)] = \\frac{\\mathcal{L}(\\theta_0 + \\varepsilon B d) - \\mathcal{L}(\\theta_0)}{\\varepsilon} (18)", "source": "marker_v2", "marker_block_id": "/page/9/Equation/10"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0113", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Text", "text": "By Taylor expansion at \\theta_0 :", "source": "marker_v2", "marker_block_id": "/page/9/Text/11"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0114", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Equation", "text": "\\mathcal{L}(\\theta_0 + \\varepsilon Bd) = \\mathcal{L}(\\theta_0) + \\varepsilon \\langle \\nabla \\mathcal{L}(\\theta_0), Bd \\rangle + \\frac{\\varepsilon^2}{2} (Bd)^\\top H(Bd) + O(\\varepsilon^3) (19)", "source": "marker_v2", "marker_block_id": "/page/9/Equation/12"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0115", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Text", "text": "where H = \\nabla^2 \\mathcal{L}(\\theta_0) is the Hessian.", "source": "marker_v2", "marker_block_id": "/page/9/Text/13"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0116", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Text", "text": "Substituting:", "source": "marker_v2", "marker_block_id": "/page/9/Text/14"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0117", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Equation", "text": "\\mathbb{E}[\\hat{y}(d)] = \\frac{\\varepsilon d^{\\top} B^{\\top} g + \\frac{\\varepsilon^2}{2} d^{\\top} B^{\\top} H B d + O(\\varepsilon^3)}{\\varepsilon} = d^{\\top} B^{\\top} g + \\frac{\\varepsilon}{2} d^{\\top} B^{\\top} H B d + O(\\varepsilon^2) \\tag{20}", "source": "marker_v2", "marker_block_id": "/page/9/Equation/15"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0118", "section": "B.1. Auxiliary Lemmas", "page_start": 10, "page_end": 10, "type": "Text", "text": "By Assumption 3.1, ||H|| \\le L , so |d^{\\top}B^{\\top}HBd| \\le L||Bd||^2 . Thus the bias is O(\\varepsilon L) .", "source": "marker_v2", "marker_block_id": "/page/9/Text/16"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0119", "section": "B.1. Auxiliary Lemmas", "page_start": 11, "page_end": 11, "type": "ListGroup", "text": "Lemma B.2 (Variance of Direction Derivative). Let \\Sigma = Cov(\\zeta) where \\zeta = \\nabla \\mathcal{L}(\\theta; \\xi) \\nabla \\mathcal{L}(\\theta) . When using the same mini-batch \\xi for both function evaluations: (a) Conditional variance: Var(\\hat{y}(d)|B) = (Bd)^{\\top}\\Sigma(Bd) + O(\\varepsilon^4) (b) Unconditional variance: \\mathbb{E}_B[Var(\\hat{y}(d)|B)] = tr(\\Sigma) + O(\\varepsilon^4) Proof. Key insight: Using the same mini-batch \\xi for both evaluations causes the noise to be correlated, not independent.", "source": "marker_v2", "marker_block_id": "/page/10/ListGroup/318"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0120", "section": "(a) Conditional variance derivation", "page_start": 11, "page_end": 11, "type": "Text", "text": "For fixed \\xi , Taylor expand the random loss \\mathcal{L}(\\theta; \\xi) at \\theta_0 :", "source": "marker_v2", "marker_block_id": "/page/10/Text/6"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0121", "section": "(a) Conditional variance derivation", "page_start": 11, "page_end": 11, "type": "Equation", "text": "\\mathcal{L}(\\theta_0 + \\varepsilon Bd; \\xi) = \\mathcal{L}(\\theta_0; \\xi) + \\varepsilon \\langle \\nabla \\mathcal{L}(\\theta_0; \\xi), Bd \\rangle + O(\\varepsilon^2) (21)", "source": "marker_v2", "marker_block_id": "/page/10/Equation/7"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0122", "section": "(a) Conditional variance derivation", "page_start": 11, "page_end": 11, "type": "Text", "text": "Since both evaluations use the same \\xi , the base term \\mathcal{L}(\\theta_0; \\xi) cancels:", "source": "marker_v2", "marker_block_id": "/page/10/Text/8"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0123", "section": "(a) Conditional variance derivation", "page_start": 11, "page_end": 11, "type": "Equation", "text": "\\hat{y}(d) = \\frac{\\mathcal{L}(\\theta_0 + \\varepsilon Bd; \\xi) - \\mathcal{L}(\\theta_0; \\xi)}{\\varepsilon} = \\langle \\nabla \\mathcal{L}(\\theta_0; \\xi), Bd \\rangle + O(\\varepsilon) (22)", "source": "marker_v2", "marker_block_id": "/page/10/Equation/9"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0124", "section": "(a) Conditional variance derivation", "page_start": 11, "page_end": 11, "type": "Text", "text": "Let \\nabla \\mathcal{L}(\\theta_0; \\xi) = \\nabla \\mathcal{L}(\\theta_0) + \\zeta where \\zeta is zero-mean noise with Cov(\\zeta) = \\Sigma . Given B:", "source": "marker_v2", "marker_block_id": "/page/10/Text/10"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0125", "section": "(a) Conditional variance derivation", "page_start": 11, "page_end": 11, "type": "Equation", "text": "\\operatorname{Var}(\\hat{y}(d)|B) = \\operatorname{Var}(\\langle \\zeta, Bd \\rangle | B) = (Bd)^{\\top} \\operatorname{Cov}(\\zeta)(Bd) = (Bd)^{\\top} \\Sigma(Bd) + O(\\varepsilon^{2}) (23)", "source": "marker_v2", "marker_block_id": "/page/10/Equation/11"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0126", "section": "(b) Unconditional variance derivation", "page_start": 11, "page_end": 11, "type": "Text", "text": "For coordinate-axis sampling d = e_i , we have Bd = z_i \\sim \\mathcal{N}(0, I_n) .", "source": "marker_v2", "marker_block_id": "/page/10/Text/13"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0127", "section": "(b) Unconditional variance derivation", "page_start": 11, "page_end": 11, "type": "Text", "text": "Taking expectation over B:", "source": "marker_v2", "marker_block_id": "/page/10/Text/14"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0128", "section": "(b) Unconditional variance derivation", "page_start": 11, "page_end": 11, "type": "Equation", "text": "\\mathbb{E}_B[(Bd)^{\\top}\\Sigma(Bd)] = \\mathbb{E}[z_i^{\\top}\\Sigma z_i] (24)", "source": "marker_v2", "marker_block_id": "/page/10/Equation/15"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0129", "section": "(b) Unconditional variance derivation", "page_start": 11, "page_end": 11, "type": "Text", "text": "By the trace trick:", "source": "marker_v2", "marker_block_id": "/page/10/Text/16"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0130", "section": "(b) Unconditional variance derivation", "page_start": 11, "page_end": 11, "type": "Equation", "text": "\\mathbb{E}[z_i^{\\top} \\Sigma z_i] = \\mathbb{E}[\\operatorname{tr}(\\Sigma z_i z_i^{\\top})] = \\operatorname{tr}(\\Sigma \\cdot \\mathbb{E}[z_i z_i^{\\top}]) = \\operatorname{tr}(\\Sigma \\cdot I_n) = \\operatorname{tr}(\\Sigma) (25)", "source": "marker_v2", "marker_block_id": "/page/10/Equation/17"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0131", "section": "(b) Unconditional variance derivation", "page_start": 11, "page_end": 11, "type": "Equation", "text": "By Assumption 3.2, \\operatorname{tr}(\\Sigma) = \\mathbb{E}[\\|\\zeta\\|^2] \\le \\sigma_q^2 .", "source": "marker_v2", "marker_block_id": "/page/10/Equation/18"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0132", "section": "(b) Unconditional variance derivation", "page_start": 11, "page_end": 11, "type": "Text", "text": "Lemma B.3 (High-Dimensional Approximate Orthogonality). Let z_1, \\ldots, z_k \\stackrel{iid}{\\sim} \\mathcal{N}(0, I_n) . When n \\gg k^2 :", "source": "marker_v2", "marker_block_id": "/page/10/Text/19"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0133", "section": "(b) Unconditional variance derivation", "page_start": 11, "page_end": 11, "type": "ListGroup", "text": "(a) ||z_i||^2 = n \\pm O(\\sqrt{n}) (b) For i \\neq j : \\frac{z_i^\\top z_j}{\\|z_i\\| \\|z_j\\|} = O(1/\\sqrt{n})", "source": "marker_v2", "marker_block_id": "/page/10/ListGroup/319"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0134", "section": "Proof. (a) Norm concentration", "page_start": 11, "page_end": 11, "type": "Text", "text": "Since ||z_i||^2 = \\sum_{j=1}^n z_{ij}^2 \\sim \\chi^2(n) , we have:", "source": "marker_v2", "marker_block_id": "/page/10/Text/23"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0135", "section": "Proof. (a) Norm concentration", "page_start": 11, "page_end": 11, "type": "Equation", "text": "\\mathbb{E}[\\|z_i\\|^2] = n, \\quad \\text{Var}(\\|z_i\\|^2) = 2n \\tag{26}", "source": "marker_v2", "marker_block_id": "/page/10/Equation/24"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0136", "section": "Proof. (a) Norm concentration", "page_start": 11, "page_end": 11, "type": "Text", "text": "By Chebyshev inequality or sub-Gaussian concentration:", "source": "marker_v2", "marker_block_id": "/page/10/Text/25"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0137", "section": "Proof. (a) Norm concentration", "page_start": 11, "page_end": 11, "type": "Equation", "text": "\\mathbb{P}\\left(\\left|\\|z_i\\|^2 - n\\right| > t\\sqrt{n}\\right) \\le 2e^{-ct^2} \\tag{27}", "source": "marker_v2", "marker_block_id": "/page/10/Equation/26"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0138", "section": "Proof. (a) Norm concentration", "page_start": 11, "page_end": 11, "type": "Text", "text": "Thus ||z_i||^2 = n \\pm O(\\sqrt{n}) with high probability.", "source": "marker_v2", "marker_block_id": "/page/10/Text/27"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0139", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "605 For independent z i , z j ∼ N (0, In), the inner product z ⊤ i z j = P n l =1 zilzjl is a sum of n independent random variables with:", "source": "marker_v2", "marker_block_id": "/page/11/Text/1"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0140", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "\\mathbb{E}[z_i^\\top z_j] = 0, \\quad \\operatorname{Var}(z_i^\\top z_j) = \\sum_{l=1}^n \\operatorname{Var}(z_{il} z_{jl}) = n (28)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/2"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0141", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "Thus z ⊤ i z j = O( √ n) with high probability. Since ∥zi∥∥zj∥ = O(n):", "source": "marker_v2", "marker_block_id": "/page/11/Text/3"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0142", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "\\cos \\theta_{ij} = \\frac{z_i^\\top z_j}{\\|z_i\\| \\|z_j\\|} = O\\left(\\frac{\\sqrt{n}}{n}\\right) = O\\left(\\frac{1}{\\sqrt{n}}\\right) \\to 0 (29)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/4"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0143", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "This shows that random Gaussian vectors are approximately orthogonal in high dimensions.", "source": "marker_v2", "marker_block_id": "/page/11/Text/5"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0144", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "Lemma B.4 (Isserlis' Theorem Application). For z ∼ N (0, In) and symmetric matrices A, B :", "source": "marker_v2", "marker_block_id": "/page/11/Text/6"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0145", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "\\mathbb{E}[(z^{\\top}Az)(z^{\\top}Bz)] = tr(A)tr(B) + 2tr(AB) (30)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/7"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0146", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "In particular, for A = I n and B = Σ :", "source": "marker_v2", "marker_block_id": "/page/11/Text/8"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0147", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "\\mathbb{E}[\\|z\\|^2 \\cdot z^{\\mathsf{T}} \\Sigma z] = (n+2) tr(\\Sigma) \\tag{31}", "source": "marker_v2", "marker_block_id": "/page/11/Equation/9"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0148", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "Proof. By Isserlis' theorem (Wick's theorem), for z ∼ N (0, In):", "source": "marker_v2", "marker_block_id": "/page/11/Text/10"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0149", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "\\mathbb{E}[z_i z_j z_k z_l] = \\delta_{ij} \\delta_{kl} + \\delta_{ik} \\delta_{jl} + \\delta_{il} \\delta_{jk} (32)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/11"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0150", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "Expanding the quadratic forms:", "source": "marker_v2", "marker_block_id": "/page/11/Text/12"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0151", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "(z^{\\top}Az)(z^{\\top}Bz) = \\sum_{i,j,k,l} A_{ij}B_{kl}z_iz_jz_kz_l (33)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/13"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0152", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "Taking expectation:", "source": "marker_v2", "marker_block_id": "/page/11/Text/14"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0153", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "\\mathbb{E}[(z^{\\top}Az)(z^{\\top}Bz)] = \\sum_{i,j,k,l} A_{ij}B_{kl}(\\delta_{ij}\\delta_{kl} + \\delta_{ik}\\delta_{jl} + \\delta_{il}\\delta_{jk}) (34)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/15"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0154", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "= \\sum_{i,k} A_{ii} B_{kk} + \\sum_{i,j} A_{ij} B_{ij} + \\sum_{i,j} A_{ij} B_{ji} (35)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/16"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0155", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "= \\operatorname{tr}(A)\\operatorname{tr}(B) + \\operatorname{tr}(AB) + \\operatorname{tr}(AB^{\\top}) (36)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/17"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0156", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "For symmetric A, B: tr(AB ⊤ ) = tr(AB), thus:", "source": "marker_v2", "marker_block_id": "/page/11/Text/18"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0157", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "\\mathbb{E}[(z^{\\top}Az)(z^{\\top}Bz)] = \\operatorname{tr}(A)\\operatorname{tr}(B) + 2\\operatorname{tr}(AB) (37)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/19"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0158", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Text", "text": "Setting A = In, B = Σ:", "source": "marker_v2", "marker_block_id": "/page/11/Text/20"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0159", "section": "(b) Approximate orthogonality", "page_start": 12, "page_end": 12, "type": "Equation", "text": "\\mathbb{E}[\\|z\\|^2 \\cdot z^{\\top} \\Sigma z] = n \\cdot \\operatorname{tr}(\\Sigma) + 2\\operatorname{tr}(\\Sigma) = (n+2)\\operatorname{tr}(\\Sigma) (38)", "source": "marker_v2", "marker_block_id": "/page/11/Equation/21"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0160", "section": "B.2. Noise Variance Justification", "page_start": 12, "page_end": 12, "type": "Text", "text": "The observation model yˆ(d) = d ⊤ g˜ + ν with ν ∼ N (0, σ 2 e ∥d∥ 2 ) is justified as follows.", "source": "marker_v2", "marker_block_id": "/page/11/Text/23"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0161", "section": "B.2. Noise Variance Justification", "page_start": 12, "page_end": 12, "type": "Text", "text": "Lemma B.5 (Effective Noise Decomposition). The effective noise variance decomposes as:", "source": "marker_v2", "marker_block_id": "/page/11/Text/24"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0162", "section": "B.2. Noise Variance Justification", "page_start": 12, "page_end": 12, "type": "Equation", "text": "\\sigma_e^2 = \\sigma_\\varepsilon^2 + tr(\\Sigma) \\tag{39}", "source": "marker_v2", "marker_block_id": "/page/11/Equation/25"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0163", "section": "B.2. Noise Variance Justification", "page_start": 12, "page_end": 12, "type": "Text", "text": "where σ 2 ε is the finite-difference approximation error and tr (Σ) is the gradient noise variance.", "source": "marker_v2", "marker_block_id": "/page/11/Text/26"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0164", "section": "Proof. Step 1: Decomposition of the observation.", "page_start": 13, "page_end": 13, "type": "Text", "text": "For coordinate-axis sampling with d = e_i , the direction in parameter space is z_i = Bd = Be_i (the i -th column of B ), where z_i \\sim \\mathcal{N}(0, I_n) .", "source": "marker_v2", "marker_block_id": "/page/12/Text/2"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0165", "section": "Proof. Step 1: Decomposition of the observation.", "page_start": 13, "page_end": 13, "type": "Text", "text": "The observation can be decomposed as:", "source": "marker_v2", "marker_block_id": "/page/12/Text/3"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0166", "section": "Proof. Step 1: Decomposition of the observation.", "page_start": 13, "page_end": 13, "type": "Equation", "text": "y_i = \\underbrace{z_i^{\\mathsf{T}} g}_{\\text{true signal gradient noise}} + \\underbrace{z_i^{\\mathsf{T}} \\zeta}_{\\text{finite-diff error}} + \\underbrace{\\epsilon_i}_{\\text{finite-diff error}} \\tag{40}", "source": "marker_v2", "marker_block_id": "/page/12/Equation/4"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0167", "section": "Proof. Step 1: Decomposition of the observation.", "page_start": 13, "page_end": 13, "type": "Text", "text": "where:", "source": "marker_v2", "marker_block_id": "/page/12/Text/5"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0168", "section": "Proof. Step 1: Decomposition of the observation.", "page_start": 13, "page_end": 13, "type": "ListGroup", "text": "q = \\nabla \\mathcal{L}(\\theta) is the true gradient \\zeta = \\nabla \\mathcal{L}(\\theta; \\xi) \\nabla \\mathcal{L}(\\theta) is the stochastic gradient noise with \\mathbb{E}[\\zeta] = 0 and \\text{Cov}(\\zeta) = \\Sigma \\epsilon_i \\sim \\mathcal{N}(0, \\sigma_{\\varepsilon}^2) is the finite-difference truncation error, independent of \\zeta and z_i", "source": "marker_v2", "marker_block_id": "/page/12/ListGroup/350"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0169", "section": "Step 2: Identifying the noise term.", "page_start": 13, "page_end": 13, "type": "Text", "text": "The observation noise is defined as \\nu_i := y_i - z_i^{\\top} g = z_i^{\\top} \\zeta + \\epsilon_i .", "source": "marker_v2", "marker_block_id": "/page/12/Text/10"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0170", "section": "Step 2: Identifying the noise term.", "page_start": 13, "page_end": 13, "type": "Text", "text": "Since \\mathbb{E}[\\zeta] = 0 and \\mathbb{E}[\\epsilon_i] = 0 , we have \\mathbb{E}[\\nu_i | z_i] = 0 .", "source": "marker_v2", "marker_block_id": "/page/12/Text/11"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0171", "section": "Step 3: Conditional variance (given z_i ).", "page_start": 13, "page_end": 13, "type": "Text", "text": "Since \\zeta and \\epsilon_i are independent:", "source": "marker_v2", "marker_block_id": "/page/12/Text/13"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0172", "section": "Step 3: Conditional variance (given z_i ).", "page_start": 13, "page_end": 13, "type": "Equation", "text": "Var(\\nu_i|z_i) = Var(z_i^{\\top} \\zeta|z_i) + Var(\\epsilon_i) = z_i^{\\top} \\Sigma z_i + \\sigma_{\\varepsilon}^2 (41)", "source": "marker_v2", "marker_block_id": "/page/12/Equation/14"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0173", "section": "Step 4: Unconditional variance (taking expectation over z_i ).", "page_start": 13, "page_end": 13, "type": "Text", "text": "Using the trace trick from Lemma B.2(b):", "source": "marker_v2", "marker_block_id": "/page/12/Text/16"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0174", "section": "Step 4: Unconditional variance (taking expectation over z_i ).", "page_start": 13, "page_end": 13, "type": "Equation", "text": "\\mathbb{E}_{z_i}[z_i^{\\top} \\Sigma z_i] = \\mathbb{E}[\\operatorname{tr}(\\Sigma z_i z_i^{\\top})] = \\operatorname{tr}(\\Sigma \\cdot \\mathbb{E}[z_i z_i^{\\top}]) = \\operatorname{tr}(\\Sigma \\cdot I_n) = \\operatorname{tr}(\\Sigma) (42)", "source": "marker_v2", "marker_block_id": "/page/12/Equation/17"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0175", "section": "Step 4: Unconditional variance (taking expectation over z_i ).", "page_start": 13, "page_end": 13, "type": "Text", "text": "Therefore, the effective noise variance is:", "source": "marker_v2", "marker_block_id": "/page/12/Text/18"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0176", "section": "Step 4: Unconditional variance (taking expectation over z_i ).", "page_start": 13, "page_end": 13, "type": "Equation", "text": "\\sigma_{\\varepsilon}^{2} := \\mathbb{E}_{z_{i}}[\\operatorname{Var}(\\nu_{i}|z_{i})] = \\operatorname{tr}(\\Sigma) + \\sigma_{\\varepsilon}^{2} \\tag{43}", "source": "marker_v2", "marker_block_id": "/page/12/Equation/19"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0177", "section": "Step 4: Unconditional variance (taking expectation over z_i ).", "page_start": 13, "page_end": 13, "type": "Equation", "text": "By Assumption 3.2, \\operatorname{tr}(\\Sigma) = \\mathbb{E}[\\|\\zeta\\|^2] \\leq \\sigma_g^2 , so \\sigma_e^2 \\leq \\sigma_g^2 + \\sigma_\\varepsilon^2 .", "source": "marker_v2", "marker_block_id": "/page/12/Equation/20"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0178", "section": "B.3. Proof of Main Convergence Theorem", "page_start": 13, "page_end": 13, "type": "Text", "text": "Proof of Theorem 4.2. Step 1: Single-step descent.", "source": "marker_v2", "marker_block_id": "/page/12/Text/22"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0179", "section": "B.3. Proof of Main Convergence Theorem", "page_start": 13, "page_end": 13, "type": "Text", "text": "By Assumption 3.1 ( L -smoothness):", "source": "marker_v2", "marker_block_id": "/page/12/Text/23"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0180", "section": "B.3. Proof of Main Convergence Theorem", "page_start": 13, "page_end": 13, "type": "Equation", "text": "\\mathcal{L}(\\theta_{t+1}) \\le \\mathcal{L}(\\theta_t) + \\langle g_t, \\Delta \\theta_t \\rangle + \\frac{L}{2} ||\\Delta \\theta_t||^2 (44)", "source": "marker_v2", "marker_block_id": "/page/12/Equation/24"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0181", "section": "B.3. Proof of Main Convergence Theorem", "page_start": 13, "page_end": 13, "type": "Text", "text": "where g_t = \\nabla \\mathcal{L}(\\theta_t) and \\Delta \\theta_t = -\\eta B_t \\mu_t^{(k)} .", "source": "marker_v2", "marker_block_id": "/page/12/Text/25"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0182", "section": "Step 2: Inner product term.", "page_start": 13, "page_end": 13, "type": "Text", "text": "By Lemma B.2, the observation model is y_i = z_i^\\top g_t + z_i^\\top \\zeta + \\epsilon_i , where \\zeta is gradient noise with \\text{Cov}(\\zeta) = \\Sigma , and \\epsilon_i \\sim \\mathcal{N}(0, \\sigma_{\\varepsilon}^2) is finite-difference error.", "source": "marker_v2", "marker_block_id": "/page/12/Text/27"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0183", "section": "Step 2: Inner product term.", "page_start": 13, "page_end": 13, "type": "Text", "text": "Let \\tilde{g} = g_t + \\zeta and \\epsilon = [\\epsilon_1, \\dots, \\epsilon_k]^{\\top} . Then:", "source": "marker_v2", "marker_block_id": "/page/12/Text/28"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0184", "section": "Step 2: Inner product term.", "page_start": 13, "page_end": 13, "type": "Equation", "text": "Y = B_t^{\\top} \\tilde{g} + \\epsilon, \\quad \\mu_t^{(k)} = \\gamma Y = \\gamma (B_t^{\\top} \\tilde{g} + \\epsilon) (45)", "source": "marker_v2", "marker_block_id": "/page/12/Equation/29"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0185", "section": "Step 2: Inner product term.", "page_start": 14, "page_end": 14, "type": "Text", "text": "The parameter update becomes:", "source": "marker_v2", "marker_block_id": "/page/13/Text/1"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0186", "section": "Step 2: Inner product term.", "page_start": 14, "page_end": 14, "type": "Equation", "text": "B_t \\mu_t^{(k)} = \\gamma B_t B_t^{\\top} (g_t + \\zeta) + \\gamma \\sum_{i=1}^k z_i \\epsilon_i \\tag{46}", "source": "marker_v2", "marker_block_id": "/page/13/Equation/2"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0187", "section": "Step 2: Inner product term.", "page_start": 14, "page_end": 14, "type": "Text", "text": "Computing the expectation of the inner product. Since \\mathbb{E}[\\zeta] = 0 , \\mathbb{E}[\\epsilon_i] = 0 , and \\zeta , \\epsilon_i are independent of B_t :", "source": "marker_v2", "marker_block_id": "/page/13/Text/3"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0188", "section": "Step 2: Inner product term.", "page_start": 14, "page_end": 14, "type": "Equation", "text": "\\mathbb{E}[\\langle g_t, B_t \\mu_t^{(k)} \\rangle | \\theta_t] = \\gamma \\mathbb{E}[g_t^\\top B_t B_t^\\top g_t] + \\gamma \\mathbb{E}[g_t^\\top B_t B_t^\\top \\zeta] + \\gamma \\sum_{i=1}^k \\mathbb{E}[(g_t^\\top z_i) \\epsilon_i] (47)", "source": "marker_v2", "marker_block_id": "/page/13/Equation/4"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0189", "section": "Step 2: Inner product term.", "page_start": 14, "page_end": 14, "type": "Text", "text": "The second term: \\mathbb{E}[g_t^{\\top} B_t B_t^{\\top} \\zeta] = g_t^{\\top} \\mathbb{E}[B_t B_t^{\\top}] \\mathbb{E}[\\zeta] = 0.", "source": "marker_v2", "marker_block_id": "/page/13/Text/5"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0190", "section": "Step 2: Inner product term.", "page_start": 14, "page_end": 14, "type": "Text", "text": "The third term: \\mathbb{E}[(g_t^{\\top} z_i) \\epsilon_i] = \\mathbb{E}[g_t^{\\top} z_i] \\mathbb{E}[\\epsilon_i] = 0 (independence).", "source": "marker_v2", "marker_block_id": "/page/13/Text/6"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0191", "section": "Step 2: Inner product term.", "page_start": 14, "page_end": 14, "type": "Text", "text": "The first term: \\mathbb{E}[g_t^{\\top} B_t B_t^{\\top} g_t] = \\mathbb{E}[\\|B_t^{\\top} g_t\\|^2] = \\sum_{i=1}^k \\mathbb{E}[(z_i^{\\top} g_t)^2] = k \\|g_t\\|^2 .", "source": "marker_v2", "marker_block_id": "/page/13/Text/7"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0192", "section": "Step 2: Inner product term.", "page_start": 14, "page_end": 14, "type": "Text", "text": "Therefore:", "source": "marker_v2", "marker_block_id": "/page/13/Text/8"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0193", "section": "Step 2: Inner product term.", "page_start": 14, "page_end": 14, "type": "Equation", "text": "\\mathbb{E}[\\langle g_t, \\Delta \\theta_t \\rangle | \\theta_t] = -\\eta \\gamma k \\|g_t\\|^2 \\tag{48}", "source": "marker_v2", "marker_block_id": "/page/13/Equation/9"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0194", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Text", "text": "From Step 2, B_t \\mu_t^{(k)} = \\gamma B_t B_t^{\\top} \\tilde{g} + \\gamma \\sum_i z_i \\epsilon_i . Thus:", "source": "marker_v2", "marker_block_id": "/page/13/Text/11"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0195", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Equation", "text": "||B_t \\mu_t^{(k)}||^2 = \\gamma^2 ||B_t B_t^\\top \\tilde{g}||^2 + \\gamma^2 \\left\\| \\sum_i z_i \\epsilon_i \\right\\|^2 + 2\\gamma^2 \\left\\langle B_t B_t^\\top \\tilde{g}, \\sum_i z_i \\epsilon_i \\right\\rangle (49)", "source": "marker_v2", "marker_block_id": "/page/13/Equation/12"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0196", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Text", "text": "Cross term vanishes: Since \\epsilon_i is independent of B_t and \\tilde{g} , and \\mathbb{E}[\\epsilon_i] = 0 :", "source": "marker_v2", "marker_block_id": "/page/13/Text/13"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0197", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Equation", "text": "\\mathbb{E}\\left[\\left\\langle B_t B_t^{\\top} \\tilde{g}, \\sum_i z_i \\epsilon_i \\right\\rangle\\right] = \\sum_i \\mathbb{E}[\\epsilon_i] \\cdot \\mathbb{E}[\\left\\langle B_t B_t^{\\top} \\tilde{g}, z_i \\right\\rangle] = 0 \\tag{50}", "source": "marker_v2", "marker_block_id": "/page/13/Equation/14"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0198", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Text", "text": "First term: We compute \\mathbb{E}[\\|B_tB_t^{\\top}\\tilde{g}\\|^2] by first conditioning on B_t , then taking expectation over B_t .", "source": "marker_v2", "marker_block_id": "/page/13/Text/15"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0199", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Text", "text": "(A) Given B_t , taking expectation over \\zeta (using \\mathbb{E}[\\zeta] = 0 ):", "source": "marker_v2", "marker_block_id": "/page/13/Text/16"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0200", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Equation", "text": "\\mathbb{E}[\\|B_t B_t^{\\top} \\tilde{g}\\|^2 | B_t] = \\|B_t B_t^{\\top} g_t\\|^2 + \\mathbb{E}[\\|B_t B_t^{\\top} \\zeta\\|^2 | B_t] (51)", "source": "marker_v2", "marker_block_id": "/page/13/Equation/17"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0201", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Text", "text": "where \\mathbb{E}[\\|B_t B_t^\\top \\zeta\\|^2 | B_t] = \\operatorname{tr}((B_t B_t^\\top)^2 \\Sigma).", "source": "marker_v2", "marker_block_id": "/page/13/Text/18"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0202", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Text", "text": "(B) Taking expectation over B_t . For \\mathbb{E}[\\|B_t B_t^\\top g\\|^2] :", "source": "marker_v2", "marker_block_id": "/page/13/Text/19"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0203", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Equation", "text": "||B_t B_t^{\\top} g||^2 = \\sum_{i,j=1}^k (z_i^{\\top} g) (z_j^{\\top} g) (z_i^{\\top} z_j) (52)", "source": "marker_v2", "marker_block_id": "/page/13/Equation/20"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0204", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Text", "text": "Diagonal terms (i = j): \\mathbb{E}[(z_i^\\top g)^2 ||z_i||^2] = (n+2)||g||^2 (by Lemma B.4).", "source": "marker_v2", "marker_block_id": "/page/13/Text/21"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0205", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Text", "text": "Off-diagonal terms (i \\neq j) : By independence of z_i and z_j :", "source": "marker_v2", "marker_block_id": "/page/13/Text/22"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0206", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Equation", "text": "\\mathbb{E}[(z_i^\\top g)(z_j^\\top g)(z_i^\\top z_j)] = \\sum_{a,b,c} g_a g_b \\mathbb{E}[(z_i)_a(z_i)_c] \\mathbb{E}[(z_j)_b(z_j)_c] (53)", "source": "marker_v2", "marker_block_id": "/page/13/Equation/23"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0207", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Equation", "text": "= \\sum_{a,b,c} g_a g_b \\delta_{ac} \\delta_{bc} = \\sum_c g_c^2 = ||g||^2 (54)", "source": "marker_v2", "marker_block_id": "/page/13/Equation/24"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0208", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Text", "text": "Thus:", "source": "marker_v2", "marker_block_id": "/page/13/Text/25"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0209", "section": "Step 3: Second moment (detailed computation).", "page_start": 14, "page_end": 14, "type": "Equation", "text": "\\mathbb{E}[\\|B_t B_t^\\top q\\|^2] = k(n+2)\\|q\\|^2 + k(k-1)\\|q\\|^2 = k(n+k+1)\\|q\\|^2 = k\\tilde{n}\\|q\\|^2 (55)", "source": "marker_v2", "marker_block_id": "/page/13/Equation/26"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0210", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Text", "text": "For \\mathbb{E}[\\operatorname{tr}((B_t B_t^{\\top})^2 \\Sigma)] , let P = B_t B_t^{\\top} :", "source": "marker_v2", "marker_block_id": "/page/14/Text/1"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0211", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\operatorname{tr}(P^{2}\\Sigma) = \\sum_{i,j} (z_{i}^{\\top} z_{j})(z_{i}^{\\top} \\Sigma z_{j}) \\tag{56}", "source": "marker_v2", "marker_block_id": "/page/14/Equation/2"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0212", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Text", "text": "Diagonal (i = j): By Lemma B.4, \\mathbb{E}[||z||^2 \\cdot z^{\\top} \\Sigma z] = (n + 2) \\operatorname{tr}(\\Sigma) .", "source": "marker_v2", "marker_block_id": "/page/14/Text/3"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0213", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Text", "text": "Off-diagonal (i \\neq j) : By independence of z_i and z_i :", "source": "marker_v2", "marker_block_id": "/page/14/Text/4"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0214", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\mathbb{E}[(z_i^\\top z_j)(z_i^\\top \\Sigma z_j)] = \\sum_{a,b,c} \\Sigma_{bc} \\mathbb{E}[(z_i)_a(z_i)_b] \\mathbb{E}[(z_j)_a(z_j)_c] (57)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/5"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0215", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Equation", "text": "= \\sum_{a,b,c} \\Sigma_{bc} \\delta_{ab} \\delta_{ac} = \\sum_{a} \\Sigma_{aa} = \\text{tr}(\\Sigma) (58)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/6"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0216", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Text", "text": "Thus:", "source": "marker_v2", "marker_block_id": "/page/14/Text/7"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0217", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\mathbb{E}[\\operatorname{tr}(P^2\\Sigma)] = k(n+2)\\operatorname{tr}(\\Sigma) + k(k-1)\\operatorname{tr}(\\Sigma) = k\\tilde{n} \\cdot \\operatorname{tr}(\\Sigma) (59)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/8"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0218", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Text", "text": "Second term: For the finite-difference noise:", "source": "marker_v2", "marker_block_id": "/page/14/Text/9"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0219", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\mathbb{E}\\left[\\left\\|\\sum_{i=1}^{k} z_{i} \\epsilon_{i}\\right\\|^{2}\\right] = \\sum_{i,j} \\mathbb{E}[\\epsilon_{i} \\epsilon_{j}] \\mathbb{E}[z_{i}^{\\top} z_{j}] = \\sum_{i} \\sigma_{\\varepsilon}^{2} \\cdot n = kn\\sigma_{\\varepsilon}^{2} (60)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/10"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0220", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Text", "text": "Total second moment:", "source": "marker_v2", "marker_block_id": "/page/14/Text/11"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0221", "section": "Step 3: Second moment (detailed computation).", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\mathbb{E}[\\|B_t \\mu_t^{(k)}\\|^2] = \\gamma^2 k \\left( \\tilde{n}(\\|g_t\\|^2 + \\operatorname{tr}(\\Sigma)) + n\\sigma_{\\varepsilon}^2 \\right) (61)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/12"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0222", "section": "Step 4: Combining.", "page_start": 15, "page_end": 15, "type": "Text", "text": "Substituting into the descent inequality:", "source": "marker_v2", "marker_block_id": "/page/14/Text/14"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0223", "section": "Step 4: Combining.", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\mathbb{E}[\\|\\Delta\\theta_t\\|^2] = \\eta^2 \\gamma^2 k\\tilde{n} \\|g_t\\|^2 + \\eta^2 \\gamma^2 k(\\tilde{n} \\cdot \\text{tr}(\\Sigma) + n\\sigma_{\\varepsilon}^2) (62)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/15"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0224", "section": "Step 4: Combining.", "page_start": 15, "page_end": 15, "type": "Text", "text": "Thus:", "source": "marker_v2", "marker_block_id": "/page/14/Text/16"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0225", "section": "Step 4: Combining.", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\mathbb{E}[\\mathcal{L}(\\theta_{t+1})] \\le \\mathbb{E}[\\mathcal{L}(\\theta_t)] - \\eta \\gamma k \\|g_t\\|^2 + \\frac{L\\eta^2 \\gamma^2 k}{2} \\left[ \\tilde{n} \\|g_t\\|^2 + \\tilde{n} \\cdot \\operatorname{tr}(\\Sigma) + n\\sigma_{\\varepsilon}^2 \\right] (63)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/17"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0226", "section": "Step 4: Combining.", "page_start": 15, "page_end": 15, "type": "Text", "text": "Collecting terms in ||g_t||^2 :", "source": "marker_v2", "marker_block_id": "/page/14/Text/18"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0227", "section": "Step 4: Combining.", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\mathbb{E}[\\mathcal{L}(\\theta_{t+1})] \\leq \\mathbb{E}[\\mathcal{L}(\\theta_t)] - \\eta \\gamma k \\underbrace{\\left(1 - \\frac{L\\eta \\gamma \\tilde{n}}{2}\\right)}_{:=\\beta(\\eta)} \\|g_t\\|^2 + \\frac{L\\eta^2 \\gamma^2 k(\\tilde{n} \\cdot \\operatorname{tr}(\\Sigma) + n\\sigma_{\\varepsilon}^2)}{2} (64)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/19"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0228", "section": "Step 5: Telescoping sum.", "page_start": 15, "page_end": 15, "type": "Text", "text": "When \\eta < \\frac{2}{L\\gamma\\tilde{n}} , we have \\beta(\\eta) > 0 . Rearranging:", "source": "marker_v2", "marker_block_id": "/page/14/Text/21"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0229", "section": "Step 5: Telescoping sum.", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\|g_t\\|^2 \\le \\frac{1}{\\beta(\\eta)\\eta\\gamma k} \\left( \\mathcal{L}(\\theta_t) - \\mathcal{L}(\\theta_{t+1}) \\right) + \\frac{L\\eta\\gamma(\\tilde{n} \\cdot \\operatorname{tr}(\\Sigma) + n\\sigma_{\\varepsilon}^2)}{2\\beta(\\eta)} (65)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/22"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0230", "section": "Step 5: Telescoping sum.", "page_start": 15, "page_end": 15, "type": "Text", "text": "Summing over t = 0, ..., T - 1 and dividing by T:", "source": "marker_v2", "marker_block_id": "/page/14/Text/23"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0231", "section": "Step 5: Telescoping sum.", "page_start": 15, "page_end": 15, "type": "Equation", "text": "\\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb{E}[\\|g_t\\|^2] \\le \\frac{\\mathcal{L}(\\theta_0) - \\mathcal{L}^*}{\\beta(\\eta)\\eta\\gamma kT} + \\frac{L\\eta\\gamma(\\tilde{n} \\cdot \\operatorname{tr}(\\Sigma) + n\\sigma_{\\varepsilon}^2)}{2\\beta(\\eta)} (66)", "source": "marker_v2", "marker_block_id": "/page/14/Equation/24"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0232", "section": "B.4. Proof of Expected Update Direction", "page_start": 16, "page_end": 16, "type": "Text", "text": "Proof of Theorem 3.7. Step 1: Posterior mean unbiasedness.", "source": "marker_v2", "marker_block_id": "/page/15/Text/2"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0233", "section": "B.4. Proof of Expected Update Direction", "page_start": 16, "page_end": 16, "type": "Text", "text": "By Corollary 3.6, for coordinate-axis sampling (d^{(i)} = e_i) , the posterior mean is:", "source": "marker_v2", "marker_block_id": "/page/15/Text/3"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0234", "section": "B.4. Proof of Expected Update Direction", "page_start": 16, "page_end": 16, "type": "Equation", "text": "\\mu^{(k)} = \\gamma Y = \\gamma [y^{(1)}, \\dots, y^{(k)}]^{\\top} (67)", "source": "marker_v2", "marker_block_id": "/page/15/Equation/4"}
|
| 131 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0235", "section": "B.4. Proof of Expected Update Direction", "page_start": 16, "page_end": 16, "type": "Text", "text": "where \\gamma = \\frac{\\sigma_p^2}{\\sigma_p^2 + \\sigma_e^2} .", "source": "marker_v2", "marker_block_id": "/page/15/Text/5"}
|
| 132 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0236", "section": "B.4. Proof of Expected Update Direction", "page_start": 16, "page_end": 16, "type": "Text", "text": "Each observation satisfies y^{(i)} = e_i^{\\top} \\tilde{g} + \\nu^{(i)} = \\tilde{g}_i + \\nu^{(i)} , where \\tilde{g} = B^{\\top} g is the true normalized subspace gradient and \\nu^{(i)} is zero-mean noise.", "source": "marker_v2", "marker_block_id": "/page/15/Text/6"}
|
| 133 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0237", "section": "B.4. Proof of Expected Update Direction", "page_start": 16, "page_end": 16, "type": "Text", "text": "Taking conditional expectation given B and g (so \\tilde{g}^* = B^{\\top}g is fixed):", "source": "marker_v2", "marker_block_id": "/page/15/Text/7"}
|
| 134 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0238", "section": "B.4. Proof of Expected Update Direction", "page_start": 16, "page_end": 16, "type": "Equation", "text": "\\mathbb{E}[y^{(i)}|B,g] = \\tilde{g}_i^* + \\mathbb{E}[\\nu^{(i)}] = \\tilde{g}_i^* (68)", "source": "marker_v2", "marker_block_id": "/page/15/Equation/8"}
|
| 135 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0239", "section": "B.4. Proof of Expected Update Direction", "page_start": 16, "page_end": 16, "type": "Text", "text": "Thus:", "source": "marker_v2", "marker_block_id": "/page/15/Text/9"}
|
| 136 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0240", "section": "B.4. Proof of Expected Update Direction", "page_start": 16, "page_end": 16, "type": "Equation", "text": "\\mathbb{E}[\\mu^{(k)}|B,g] = \\gamma \\mathbb{E}[Y|B,g] = \\gamma \\tilde{g}^* = \\gamma B^\\top g \\tag{69}", "source": "marker_v2", "marker_block_id": "/page/15/Equation/10"}
|
| 137 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0241", "section": "Step 2: Conditional expectation of update.", "page_start": 16, "page_end": 16, "type": "Text", "text": "The parameter update is \\Delta\\theta = -\\eta B\\mu^{(k)} . Taking conditional expectation:", "source": "marker_v2", "marker_block_id": "/page/15/Text/12"}
|
| 138 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0242", "section": "Step 2: Conditional expectation of update.", "page_start": 16, "page_end": 16, "type": "Equation", "text": "\\mathbb{E}[\\Delta \\theta | B] = -\\eta B \\mathbb{E}[\\mu^{(k)} | B] = -\\eta \\gamma B B^{\\mathsf{T}} g \\tag{70}", "source": "marker_v2", "marker_block_id": "/page/15/Equation/13"}
|
| 139 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0243", "section": "Step 3: Expectation over subspace basis.", "page_start": 16, "page_end": 16, "type": "Text", "text": "Taking expectation over B = [z_1, \\dots, z_k] where z_i \\stackrel{iid}{\\sim} \\mathcal{N}(0, I_n) :", "source": "marker_v2", "marker_block_id": "/page/15/Text/15"}
|
| 140 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0244", "section": "Step 3: Expectation over subspace basis.", "page_start": 16, "page_end": 16, "type": "Equation", "text": "\\mathbb{E}[\\Delta \\theta] = -\\eta \\gamma \\mathbb{E}[BB^{\\top}]g \\tag{71}", "source": "marker_v2", "marker_block_id": "/page/15/Equation/16"}
|
| 141 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0245", "section": "Step 3: Expectation over subspace basis.", "page_start": 16, "page_end": 16, "type": "Text", "text": "Computing \\mathbb{E}[BB^{\\top}] :", "source": "marker_v2", "marker_block_id": "/page/15/Text/17"}
|
| 142 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0246", "section": "Step 3: Expectation over subspace basis.", "page_start": 16, "page_end": 16, "type": "Equation", "text": "BB^{\\top} = \\sum_{i=1}^{k} z_i z_i^{\\top} \\tag{72}", "source": "marker_v2", "marker_block_id": "/page/15/Equation/18"}
|
| 143 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0247", "section": "Step 3: Expectation over subspace basis.", "page_start": 16, "page_end": 16, "type": "Equation", "text": "\\mathbb{E}[BB^{\\top}] = \\sum_{i=1}^{k} \\mathbb{E}[z_i z_i^{\\top}] = \\sum_{i=1}^{k} I_n = kI_n (73)", "source": "marker_v2", "marker_block_id": "/page/15/Equation/19"}
|
| 144 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0248", "section": "Step 3: Expectation over subspace basis.", "page_start": 16, "page_end": 16, "type": "Text", "text": "Therefore:", "source": "marker_v2", "marker_block_id": "/page/15/Text/20"}
|
| 145 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0249", "section": "Step 3: Expectation over subspace basis.", "page_start": 16, "page_end": 16, "type": "Equation", "text": "\\mathbb{E}[\\Delta \\theta] = -\\eta \\gamma k \\cdot I_n \\cdot q = -\\eta \\gamma k \\cdot \\nabla \\mathcal{L}(\\theta_0) \\tag{74}", "source": "marker_v2", "marker_block_id": "/page/15/Equation/21"}
|
| 146 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0250", "section": "Step 4: Higher-order bias.", "page_start": 16, "page_end": 16, "type": "Text", "text": "By Lemma B.1, the finite-difference estimator has O(\\varepsilon L) bias. After multiplication by \\varepsilon in the update, this becomes O(\\varepsilon^2 L) . Since \\varepsilon is typically small ( \\sim 10^{-3} ), we write:", "source": "marker_v2", "marker_block_id": "/page/15/Text/23"}
|
| 147 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0251", "section": "Step 4: Higher-order bias.", "page_start": 16, "page_end": 16, "type": "Equation", "text": "\\mathbb{E}[\\Delta\\theta] = -\\eta \\gamma k \\cdot \\nabla \\mathcal{L}(\\theta_0) + O(\\varepsilon^3) \\tag{75}", "source": "marker_v2", "marker_block_id": "/page/15/Equation/24"}
|
| 148 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0252", "section": "Step 4: Higher-order bias.", "page_start": 16, "page_end": 16, "type": "Text", "text": "This proves that the expected update direction aligns with the negative gradient, with effective learning rate \\eta_{eff} = \\eta \\gamma k . \\Box", "source": "marker_v2", "marker_block_id": "/page/15/Text/25"}
|
| 149 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0253", "section": "B.5. Adaptive Sampling Analysis", "page_start": 16, "page_end": 16, "type": "Text", "text": "Theorem B.6 (Conditional Unbiasedness of Posterior Mean under Adaptive Sampling). Let \\mu^{(m)} denote the posterior mean after m adaptive sampling steps. Given the subspace basis B and the true gradient g, for any adaptive sampling strategy \\pi (where d^{(j)} is \\mathcal{D}_{j-1} -measurable), we have:", "source": "marker_v2", "marker_block_id": "/page/15/Text/27"}
|
| 150 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0254", "section": "B.5. Adaptive Sampling Analysis", "page_start": 16, "page_end": 16, "type": "Equation", "text": "\\mathbb{E}[\\mu^{(m)}|B,g] = \\mathbb{E}\\left[\\Sigma^{(m)}D_m^{\\top}R_m^{-1}D_m \\mid B\\right]\\tilde{g}^* = \\mathbb{E}[\\Gamma_m|B] \\cdot \\tilde{g}^* (76)", "source": "marker_v2", "marker_block_id": "/page/15/Equation/28"}
|
| 151 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0255", "section": "B.5. Adaptive Sampling Analysis", "page_start": 17, "page_end": 17, "type": "Text", "text": "In particular, if \\Sigma^{(m)} is deterministic given B (e.g., coordinate-axis sampling or any strategy that depends only on \\mathcal{D}_{m-1} ), then:", "source": "marker_v2", "marker_block_id": "/page/16/Text/1"}
|
| 152 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0256", "section": "B.5. Adaptive Sampling Analysis", "page_start": 17, "page_end": 17, "type": "Equation", "text": "\\mathbb{E}[\\mu^{(m)}|B, g, \\mathcal{D}_m] = \\Gamma_m \\cdot \\tilde{g}^* \\tag{77}", "source": "marker_v2", "marker_block_id": "/page/16/Equation/2"}
|
| 153 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0257", "section": "B.5. Adaptive Sampling Analysis", "page_start": 17, "page_end": 17, "type": "Text", "text": "where \\Gamma_m := I_k - \\sigma_n^{-2} \\Sigma^{(m)} is the shrinkage matrix.", "source": "marker_v2", "marker_block_id": "/page/16/Text/3"}
|
| 154 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0258", "section": "Proof. Step 1: Expression for the posterior mean.", "page_start": 17, "page_end": 17, "type": "Text", "text": "By the standard Bayesian linear regression formula:", "source": "marker_v2", "marker_block_id": "/page/16/Text/5"}
|
| 155 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0259", "section": "Proof. Step 1: Expression for the posterior mean.", "page_start": 17, "page_end": 17, "type": "Equation", "text": "\\mu^{(m)} = \\Sigma^{(m)} D_m^{\\top} R_m^{-1} Y_m \\tag{78}", "source": "marker_v2", "marker_block_id": "/page/16/Equation/6"}
|
| 156 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0260", "section": "Proof. Step 1: Expression for the posterior mean.", "page_start": 17, "page_end": 17, "type": "Text", "text": "where Y_m = [y^{(1)}, \\dots, y^{(m)}]^{\\top} .", "source": "marker_v2", "marker_block_id": "/page/16/Text/7"}
|
| 157 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0261", "section": "Step 2: Computing the conditional expectation.", "page_start": 17, "page_end": 17, "type": "Text", "text": "Note that \\Sigma^{(m)} and D_m are both \\mathcal{D}_m -measurable. The key is to compute \\mathbb{E}[Y_m|B,g,\\mathcal{D}_m] .", "source": "marker_v2", "marker_block_id": "/page/16/Text/9"}
|
| 158 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0262", "section": "Step 2: Computing the conditional expectation.", "page_start": 17, "page_end": 17, "type": "Text", "text": "For each y^{(j)} :", "source": "marker_v2", "marker_block_id": "/page/16/Text/10"}
|
| 159 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0263", "section": "Step 2: Computing the conditional expectation.", "page_start": 17, "page_end": 17, "type": "Equation", "text": "\\mathbb{E}[y^{(j)}|B, g, \\mathcal{D}_m] = \\mathbb{E}[y^{(j)}|B, g, d^{(j)}] = d^{(j)\\top}\\tilde{g}^* (79)", "source": "marker_v2", "marker_block_id": "/page/16/Equation/11"}
|
| 160 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0264", "section": "Step 2: Computing the conditional expectation.", "page_start": 17, "page_end": 17, "type": "Text", "text": "The first equality holds because given d^{(j)} , y^{(j)} is conditionally independent of other d^{(i)} ( i \\neq j ).", "source": "marker_v2", "marker_block_id": "/page/16/Text/12"}
|
| 161 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0265", "section": "Step 2: Computing the conditional expectation.", "page_start": 17, "page_end": 17, "type": "Text", "text": "Therefore:", "source": "marker_v2", "marker_block_id": "/page/16/Text/13"}
|
| 162 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0266", "section": "Step 2: Computing the conditional expectation.", "page_start": 17, "page_end": 17, "type": "Equation", "text": "\\mathbb{E}[Y_m|B,g,\\mathcal{D}_m] = D_m \\tilde{g}^* \\tag{80}", "source": "marker_v2", "marker_block_id": "/page/16/Equation/14"}
|
| 163 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0267", "section": "Step 3: Substituting into the posterior mean.", "page_start": 17, "page_end": 17, "type": "Equation", "text": "\\mathbb{E}[\\mu^{(m)}|B, g, \\mathcal{D}_m] = \\Sigma^{(m)} D_m^{\\top} R_m^{-1} \\mathbb{E}[Y_m | B, g, \\mathcal{D}_m] = \\Sigma^{(m)} D_m^{\\top} R_m^{-1} D_m \\tilde{g}^* (81)", "source": "marker_v2", "marker_block_id": "/page/16/Equation/16"}
|
| 164 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0268", "section": "Step 4: Simplifying the shrinkage matrix.", "page_start": 17, "page_end": 17, "type": "Text", "text": "By the definition of \\Sigma^{(m)} :", "source": "marker_v2", "marker_block_id": "/page/16/Text/18"}
|
| 165 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0269", "section": "Step 4: Simplifying the shrinkage matrix.", "page_start": 17, "page_end": 17, "type": "Equation", "text": "(\\Sigma^{(m)})^{-1} = \\sigma_p^{-2} I_k + D_m^{\\top} R_m^{-1} D_m (82)", "source": "marker_v2", "marker_block_id": "/page/16/Equation/19"}
|
| 166 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0270", "section": "Step 4: Simplifying the shrinkage matrix.", "page_start": 17, "page_end": 17, "type": "Text", "text": "Therefore:", "source": "marker_v2", "marker_block_id": "/page/16/Text/20"}
|
| 167 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0271", "section": "Step 4: Simplifying the shrinkage matrix.", "page_start": 17, "page_end": 17, "type": "Equation", "text": "D_m^{\\top} R_m^{-1} D_m = (\\Sigma^{(m)})^{-1} - \\sigma_p^{-2} I_k (83)", "source": "marker_v2", "marker_block_id": "/page/16/Equation/21"}
|
| 168 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0272", "section": "Step 4: Simplifying the shrinkage matrix.", "page_start": 17, "page_end": 17, "type": "Text", "text": "Substituting:", "source": "marker_v2", "marker_block_id": "/page/16/Text/22"}
|
| 169 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0273", "section": "Step 4: Simplifying the shrinkage matrix.", "page_start": 17, "page_end": 17, "type": "Equation", "text": "\\mathbb{E}[\\mu^{(m)}|B,g,\\mathcal{D}_m] = \\Sigma^{(m)} \\left[ (\\Sigma^{(m)})^{-1} - \\sigma_p^{-2} I_k \\right] \\tilde{g}^* = \\left( I_k - \\sigma_p^{-2} \\Sigma^{(m)} \\right) \\tilde{g}^* (84)", "source": "marker_v2", "marker_block_id": "/page/16/Equation/23"}
|
| 170 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0274", "section": "Step 4: Simplifying the shrinkage matrix.", "page_start": 17, "page_end": 17, "type": "Text", "text": "Defining the shrinkage matrix \\Gamma_m := I_k - \\sigma_p^{-2} \\Sigma^{(m)} , we obtain:", "source": "marker_v2", "marker_block_id": "/page/16/Text/24"}
|
| 171 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0275", "section": "Step 4: Simplifying the shrinkage matrix.", "page_start": 17, "page_end": 17, "type": "Equation", "text": "\\mathbb{E}[\\mu^{(m)}|B,g,\\mathcal{D}_m] = \\Gamma_m \\tilde{g}^* \\tag{85}", "source": "marker_v2", "marker_block_id": "/page/16/Equation/25"}
|
| 172 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0276", "section": "B.6. Convergence Rate under Adaptive Sampling", "page_start": 17, "page_end": 17, "type": "Text", "text": "Theorem B.7 (Convergence Rate under Adaptive Sampling). Under Assumptions 3.1, 3.2, and isotropic noise, consider the BSZO algorithm with adaptive sampling (m samples, where the first k samples use coordinate-axis sampling). Let \\tilde{n} = n + k + 1 be the effective dimension. Suppose \\eta < \\frac{2}{L\\tilde{\\gamma}\\tilde{n}} , and define \\beta(\\eta) := 1 - \\frac{L\\eta\\tilde{\\gamma}\\tilde{n}}{2} . Then, after T iterations, the following inequality holds:", "source": "marker_v2", "marker_block_id": "/page/16/Text/27"}
|
| 173 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0277", "section": "B.6. Convergence Rate under Adaptive Sampling", "page_start": 17, "page_end": 17, "type": "Equation", "text": "\\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\nabla \\mathcal{L}(\\theta_t)\\|^2] \\le \\frac{\\Delta_0}{\\beta(\\eta)\\eta \\bar{\\gamma}kT} + \\frac{L\\eta \\bar{\\gamma}(\\tilde{n}\\sigma_g^2 + n\\sigma_n^2)}{2\\beta(\\eta)},\\tag{86}", "source": "marker_v2", "marker_block_id": "/page/16/Equation/28"}
|
| 174 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0278", "section": "B.6. Convergence Rate under Adaptive Sampling", "page_start": 17, "page_end": 17, "type": "Text", "text": "where:", "source": "marker_v2", "marker_block_id": "/page/16/Text/29"}
|
| 175 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0279", "section": "B.6. Convergence Rate under Adaptive Sampling", "page_start": 18, "page_end": 18, "type": "ListGroup", "text": "\\bar{\\gamma} := \\min_t \\bar{\\gamma}_t \\geq \\gamma is the minimum effective shrinkage factor, \\sigma_g^2 is the gradient noise variance, \\sigma_n^2 is the finite-difference approximation noise variance, \\Delta_0 := \\mathcal{L}(\\theta_0) \\mathcal{L}^* .", "source": "marker_v2", "marker_block_id": "/page/17/ListGroup/345"}
|
| 176 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0280", "section": "B.6. Convergence Rate under Adaptive Sampling", "page_start": 18, "page_end": 18, "type": "Text", "text": "Corollary B.8. Let \\eta = \\frac{1}{L\\bar{\\gamma}\\tilde{n}} . Then \\beta = 1/2 , and the convergence bound simplifies to:", "source": "marker_v2", "marker_block_id": "/page/17/Text/4"}
|
| 177 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0281", "section": "B.6. Convergence Rate under Adaptive Sampling", "page_start": 18, "page_end": 18, "type": "Equation", "text": "\\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\nabla \\mathcal{L}(\\theta_t)\\|^2] \\le \\frac{2L\\bar{\\gamma}\\tilde{n}\\Delta_0}{kT} + \\sigma_g^2 + \\frac{n}{\\tilde{n}}\\sigma_n^2. \\tag{87}", "source": "marker_v2", "marker_block_id": "/page/17/Equation/5"}
|
| 178 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0282", "section": "B.6. Convergence Rate under Adaptive Sampling", "page_start": 18, "page_end": 18, "type": "Text", "text": "Remark B.9. When n \\gg k , we have \\tilde{n} \\approx n , so the noise floor \\sigma_g^2 + \\frac{n}{\\tilde{n}} \\sigma_n^2 \\approx \\sigma_e^2 becomes decoupled from the dimension n.", "source": "marker_v2", "marker_block_id": "/page/17/Text/6"}
|
| 179 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0283", "section": "B.6. Convergence Rate under Adaptive Sampling", "page_start": 18, "page_end": 18, "type": "Text", "text": "Proof. The proof follows the same structure as Theorem 4.2, with the fixed \\gamma replaced by the adaptive effective shrinkage factor \\bar{\\gamma}_t .", "source": "marker_v2", "marker_block_id": "/page/17/Text/7"}
|
| 180 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0284", "section": "Step 1: Single-step descent.", "page_start": 18, "page_end": 18, "type": "Text", "text": "By Assumption 3.1 ( L -smoothness):", "source": "marker_v2", "marker_block_id": "/page/17/Text/9"}
|
| 181 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0285", "section": "Step 1: Single-step descent.", "page_start": 18, "page_end": 18, "type": "Equation", "text": "\\mathcal{L}(\\theta_{t+1}) \\le \\mathcal{L}(\\theta_t) + \\langle g_t, \\Delta \\theta_t \\rangle + \\frac{L}{2} ||\\Delta \\theta_t||^2 (88)", "source": "marker_v2", "marker_block_id": "/page/17/Equation/10"}
|
| 182 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0286", "section": "Step 2: Inner product term under adaptive sampling.", "page_start": 18, "page_end": 18, "type": "Text", "text": "By the adaptive sampling theorem (Theorem B.6), the expected update direction satisfies:", "source": "marker_v2", "marker_block_id": "/page/17/Text/12"}
|
| 183 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0287", "section": "Step 2: Inner product term under adaptive sampling.", "page_start": 18, "page_end": 18, "type": "Equation", "text": "\\mathbb{E}[\\langle g_t, \\Delta \\theta_t \\rangle | \\theta_t] = -\\eta \\mathbb{E}[\\operatorname{tr}(\\Gamma_m^{(t)})] \\|g_t\\|^2 = -\\eta \\bar{\\gamma}_t k \\|g_t\\|^2 (89)", "source": "marker_v2", "marker_block_id": "/page/17/Equation/13"}
|
| 184 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0288", "section": "Step 2: Inner product term under adaptive sampling.", "page_start": 18, "page_end": 18, "type": "Text", "text": "where \\bar{\\gamma}_t = \\frac{1}{k} \\text{tr}(\\Gamma_m^{(t)}) = 1 - \\frac{U_m^{(t)}}{k \\sigma_n^2} is the effective shrinkage factor at iteration t.", "source": "marker_v2", "marker_block_id": "/page/17/Text/14"}
|
| 185 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0289", "section": "Step 3: Second moment (same structure as main theorem).", "page_start": 18, "page_end": 18, "type": "Text", "text": "Following the same derivation as Theorem 4.2, with \\gamma replaced by \\bar{\\gamma}_t :", "source": "marker_v2", "marker_block_id": "/page/17/Text/16"}
|
| 186 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0290", "section": "Step 3: Second moment (same structure as main theorem).", "page_start": 18, "page_end": 18, "type": "Equation", "text": "\\mathbb{E}[\\|\\Delta\\theta_t\\|^2 | \\theta_t] = \\eta^2 \\bar{\\gamma}_t^2 k \\tilde{n} \\|g_t\\|^2 + \\eta^2 \\bar{\\gamma}_t^2 k (\\tilde{n}\\sigma_g^2 + n\\sigma_n^2) (90)", "source": "marker_v2", "marker_block_id": "/page/17/Equation/17"}
|
| 187 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0291", "section": "Step 3: Second moment (same structure as main theorem).", "page_start": 18, "page_end": 18, "type": "Text", "text": "The key observation is that the second moment structure remains unchanged because:", "source": "marker_v2", "marker_block_id": "/page/17/Text/18"}
|
| 188 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0292", "section": "Step 3: Second moment (same structure as main theorem).", "page_start": 18, "page_end": 18, "type": "ListGroup", "text": "The gradient noise \\sigma_g^2 interacts with B_t to produce the \\tilde{n} factor The finite-difference noise \\sigma_n^2 is independent of B_t , producing only the n factor", "source": "marker_v2", "marker_block_id": "/page/17/ListGroup/346"}
|
| 189 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0293", "section": "Step 4: Combining and bounding.", "page_start": 18, "page_end": 18, "type": "Text", "text": "Substituting into the descent inequality:", "source": "marker_v2", "marker_block_id": "/page/17/Text/22"}
|
| 190 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0294", "section": "Step 4: Combining and bounding.", "page_start": 18, "page_end": 18, "type": "Equation", "text": "\\mathbb{E}[\\mathcal{L}(\\theta_{t+1})] \\le \\mathbb{E}[\\mathcal{L}(\\theta_t)] - \\eta \\bar{\\gamma}_t k \\left(1 - \\frac{L\\eta \\bar{\\gamma}_t \\tilde{n}}{2}\\right) \\mathbb{E}[\\|g_t\\|^2] + \\frac{L\\eta^2 \\bar{\\gamma}_t^2 k (\\tilde{n} \\sigma_g^2 + n \\sigma_n^2)}{2} (91)", "source": "marker_v2", "marker_block_id": "/page/17/Equation/23"}
|
| 191 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0295", "section": "Step 4: Combining and bounding.", "page_start": 18, "page_end": 18, "type": "Text", "text": "Since \\bar{\\gamma}_t \\geq \\bar{\\gamma} := \\min_t \\bar{\\gamma}_t \\geq \\gamma (by Lemma in Theorem B.6), and assuming \\eta < \\frac{2}{L\\bar{\\gamma}\\bar{n}} , we define \\beta(\\eta) = 1 - \\frac{L\\eta\\bar{\\gamma}\\bar{n}}{2} > 0 .", "source": "marker_v2", "marker_block_id": "/page/17/Text/24"}
|
| 192 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0296", "section": "Step 4: Combining and bounding.", "page_start": 18, "page_end": 18, "type": "Text", "text": "Rearranging:", "source": "marker_v2", "marker_block_id": "/page/17/Text/25"}
|
| 193 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0297", "section": "Step 4: Combining and bounding.", "page_start": 18, "page_end": 18, "type": "Equation", "text": "\\mathbb{E}[\\|g_t\\|^2] \\le \\frac{1}{\\beta(\\eta)\\eta\\bar{\\gamma}k} \\left( \\mathbb{E}[\\mathcal{L}(\\theta_t)] - \\mathbb{E}[\\mathcal{L}(\\theta_{t+1})] \\right) + \\frac{L\\eta\\bar{\\gamma}(\\tilde{n}\\sigma_g^2 + n\\sigma_n^2)}{2\\beta(\\eta)} (92)", "source": "marker_v2", "marker_block_id": "/page/17/Equation/26"}
|
| 194 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0298", "section": "Step 5: Telescoping sum.", "page_start": 19, "page_end": 19, "type": "Text", "text": "Summing over t = 0, ..., T - 1 and dividing by T:", "source": "marker_v2", "marker_block_id": "/page/18/Text/1"}
|
| 195 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0299", "section": "Step 5: Telescoping sum.", "page_start": 19, "page_end": 19, "type": "Equation", "text": "\\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\nabla \\mathcal{L}(\\theta_t)\\|^2] \\le \\frac{\\Delta_0}{\\beta(\\eta)\\eta \\bar{\\gamma}kT} + \\frac{L\\eta \\bar{\\gamma}(\\tilde{n}\\sigma_g^2 + n\\sigma_n^2)}{2\\beta(\\eta)} (93)", "source": "marker_v2", "marker_block_id": "/page/18/Equation/2"}
|
| 196 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0300", "section": "Step 5: Telescoping sum.", "page_start": 19, "page_end": 19, "type": "Text", "text": "For the special learning rate \\eta=\\frac{1}{L\\bar{\\gamma}\\tilde{n}} , we have \\beta=1/2 , and the bound simplifies to:", "source": "marker_v2", "marker_block_id": "/page/18/Text/3"}
|
| 197 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0301", "section": "Step 5: Telescoping sum.", "page_start": 19, "page_end": 19, "type": "Equation", "text": "\\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\nabla \\mathcal{L}(\\theta_t)\\|^2] \\le \\frac{2L\\bar{\\gamma}\\tilde{n}\\Delta_0}{kT} + \\sigma_g^2 + \\frac{n}{\\tilde{n}}\\sigma_n^2 (94)", "source": "marker_v2", "marker_block_id": "/page/18/Equation/4"}
|
| 198 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0302", "section": "Step 5: Telescoping sum.", "page_start": 19, "page_end": 19, "type": "Text", "text": "When n\\gg k , we have \\tilde{n}\\approx n , so the noise floor \\sigma_g^2+\\frac{n}{\\tilde{n}}\\sigma_n^2\\approx\\sigma_e^2 becomes decoupled from dimension n.", "source": "marker_v2", "marker_block_id": "/page/18/Text/5"}
|
| 199 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0303", "section": "C. Experiment Details", "page_start": 19, "page_end": 19, "type": "TableGroup", "text": "Table 6. Number of training and validation samples for each dataset. SPLIT SST-2 BoolQ RTE COPA WIC WSC СВ TREC TRAINING VALIDATION 1000 1000 1000 300 1000 450 200 1000 500 500 500 100 500 100 50 500 Table 7. Hyperparameter configurations for fine-tuning RoBERTa-large.", "source": "marker_v2", "marker_block_id": "/page/18/TableGroup/212"}
|
| 200 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0304", "section": "C. Experiment Details", "page_start": 19, "page_end": 19, "type": "Table", "text": "Algorithm Hyperparameter Values MeZO Batch size Learning rate \\varepsilon MeZO-Adam Batch size Learning rate \\varepsilon HiZOO Batch size Learning rate \\varepsilon \\{1 \\times 10^{-4}, 1 \\times 10^{-5}, 1 \\times 10^{-6}, 1 \\times 10^{-7}, 1 \\times 10^{-8} \\} LOZO Batch size Learning rate ε Rank Interval BSZO Batch size Learning rate \\varepsilon k (Subspace dim) m (Samples) BSZO-B Batch size Learning rate \\varepsilon k (Subspace dim) m (Samples) All Methods Early stopping patience 4,000", "source": "marker_v2", "marker_block_id": "/page/18/Table/10"}
|
| 201 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0305", "section": "C. Experiment Details", "page_start": 20, "page_end": 20, "type": "TableGroup", "text": "Table 8. Hyperparameter configurations for fine-tuning OPT-1.3B. Algorithm Hyperparameter Values MeZO Batch size Learning rate \\varepsilon MeZO-Adam Batch size Learning rate \\varepsilon HiZOO Batch size Learning rate \\varepsilon LOZO Batch size Learning rate ε Rank Interval BSZO Batch size Learning rate \\varepsilon k (Subspace dim) m (Samples) BSZO-B Batch size Learning rate \\varepsilon k (Subspace dim) m (Samples) All Methods Early stopping patience 4,000", "source": "marker_v2", "marker_block_id": "/page/19/TableGroup/128"}
|
| 202 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0306", "section": "C. Experiment Details", "page_start": 21, "page_end": 21, "type": "Caption", "text": "Table 9. Hyperparameter configurations for fine-tuning Mistral-7B.", "source": "marker_v2", "marker_block_id": "/page/20/Caption/1"}
|
| 203 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0307", "section": "C. Experiment Details", "page_start": 21, "page_end": 21, "type": "Table", "text": "Algorithm Hyperparameter Values MeZO Batch size Learning rate \\varepsilon HiZOO Batch size Learning rate \\varepsilon LOZO Batch size Learning rate \\varepsilon Rank Interval BSZO Batch size Learning rate \\varepsilon k (Subspace dim) m (Samples) BSZO-B Batch size Learning rate \\varepsilon k (Subspace dim) m (Samples) All Methods Early stopping patience 4,000", "source": "marker_v2", "marker_block_id": "/page/20/Table/2"}
|
| 204 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0308", "section": "C. Experiment Details", "page_start": 22, "page_end": 22, "type": "TableGroup", "text": "Table 10. Hyperparameter configurations for fine-tuning OPT-13B. Algorithm Hyperparameter Values MeZO Batch size Learning rate \\varepsilon MeZO-Adam Batch size Learning rate \\varepsilon HiZOO Batch size Learning rate \\varepsilon LOZO Batch size Learning rate ε Rank Interval BSZO Batch size Learning rate \\varepsilon k (Subspace dim) m (Samples) BSZO-B Batch size Learning rate \\varepsilon k (Subspace dim) m (Samples) All Methods Early stopping patience 4,000", "source": "marker_v2", "marker_block_id": "/page/21/TableGroup/139"}
|
| 205 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0309", "section": "D. Raw Experimental Results", "page_start": 22, "page_end": 22, "type": "Text", "text": "We provide the complete raw results of 5 independent runs for each method on RoBERTa-large in Table 11. The mean and standard deviation reported in Table 1 are computed from these results.", "source": "marker_v2", "marker_block_id": "/page/21/Text/4"}
|
| 206 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0310", "section": "D. Raw Experimental Results", "page_start": 23, "page_end": 23, "type": "Caption", "text": "Table 11. Raw test accuracy (%) of 5 runs on RoBERTa-large (355M).", "source": "marker_v2", "marker_block_id": "/page/22/Caption/1"}
|
| 207 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0311", "section": "D. Raw Experimental Results", "page_start": 23, "page_end": 23, "type": "TableGroup", "text": "MEZO 92.43 92.32 91.74 92.78 91.86 MEZO-ADAM 92.32 92.66 91.51 92.43 92.78 HIZOO 91.97 91.86 91.28 91.17 90.94 SST-2 LOZO 91.63 92.09 91.74 91.51 92.20 BSZO 92.89 92.43 92.78 92.43 92.78 BSZO-B 92.66 91.74 92.32 91.97 92.66 MEZO 69.68 68.95 65.70 65.34 62.09 MEZO-ADAM 62.09 64.26 64.62 64.98 62.09 HIZOO 59.21 63.18 57.76 56.68 59.21 RTE LOZO 59.57 63.90 61.01 65.34 63.18 BSZO 68.59 68.23 69.68 66.07 66.43 BSZO-B 67.87 70.04 70.76 66.43 66.79 MEZO 87.50 85.71 91.07 76.79 89.29 MEZO-ADAM 82.14 83.93 80.36 82.14 76.79 HIZOO 78.57 75.00 75.00 78.57 75.00 CB LOZO 87.50 82.14 82.14 89.29 80.36 BSZO 83.93 85.71 87.50 87.50 83.93 BSZO-B 85.71 83.93 82.14 85.71 83.93 MEZO 49.69 58.31 52.98 57.52 57.52 MEZO-ADAM 54.39 51.10 54.70 46.55 57.52 HIZOO 50.31 54.39 51.10 54.70 57.52 WIC LOZO 54.55 54.55 51.88 55.17 54.86 BSZO 57.68 57.52 55.64 54.70 54.70 BSZO-B 57.99 57.99 55.80 56.58 57.68 MEZO 81.20 86.40 86.40 86.20 86.60 MEZO-ADAM 84.80 74.20 71.40 83.00 80.60 HIZOO 65.20 65.20 65.20 62.20 59.40 TREC LOZO 80.40 74.80 77.20 79.20 77.20 BSZO 83.40 84.60 84.40 83.80 84.60 DATASET METHOD RUN 1 RUN 2 RUN 3 RUN 4 RUN 5 BSZO-B 85.80 84.60 85.20 82.20 86.20 Table 12. Full ablation studies on OPT-1.3B (fp32). (a) Effect of subspace dimension k with m = k. (b) Effect of observation count m with m = k + 1. (c) Noise-free adaptive sampling. Best per row in bold.", "source": "marker_v2", "marker_block_id": "/page/22/TableGroup/447"}
|
| 208 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0312", "section": "D. Raw Experimental Results", "page_start": 23, "page_end": 23, "type": "Table", "text": "(a) Effect of k (b) Effect of m (c) NF-Adaptive k SST-2 RTE k SST-2 RTE k SST-2 RTE 1 92.32 60.29 1 91.74 61.37 1 91.28 63.58 2 92.78 64.26 2 92.43 66.79 2 92.43 65.34 4 92.66 67.51 4 93.58 66.43 4 93.12 66.07 8 93.23 66.07 8 93.23 68.59 8 93.35 69.31", "source": "marker_v2", "marker_block_id": "/page/22/Table/4"}
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/appendix_text_v3.txt
ADDED
|
@@ -0,0 +1,623 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[p. 10 | section: A. BSZO Basic Algorithm | type: Text]
|
| 2 |
+
515516
|
| 3 |
+
|
| 4 |
+
[p. 10 | section: A. BSZO Basic Algorithm | type: Text]
|
| 5 |
+
537538
|
| 6 |
+
|
| 7 |
+
[p. 10 | section: A. BSZO Basic Algorithm | type: Text]
|
| 8 |
+
We provide a basic version of BSZO without caching optimization, which supports arbitrary sampling directions when m > k.
|
| 9 |
+
|
| 10 |
+
[p. 10 | section: Algorithm 2 Bayesian Subspace Zeroth-Order Optimization (Basic Version) | type: Code]
|
| 11 |
+
Input: parameters \theta, learning rate \eta, perturbation scale \varepsilon, subspace dimension k, sampling steps m, prior variance \sigma_p^2, noise variance \sigma_e^2, smoothing factor \alpha, max step T for t = 1 to T do Sample k random seeds \{s_i\}_{i=1}^k Initialize \mu \leftarrow \mathbf{0}_k, \Sigma \leftarrow \sigma_p^2 I_k, f_0 \leftarrow \mathcal{L}(\theta) for \tau = 1 to m do d \leftarrow d_{\tau} \text{ if } \tau \leq k, \text{ else } d \leftarrow \arg \max_{\|v\|=1} v^{\top} \Sigma v for i = 1 to k do \theta \leftarrow \theta + \varepsilon \cdot d_i \cdot \text{RANDN}(n, s_i) \text{ if } d_i > 10^{-10} end for y \leftarrow (\mathcal{L}(\theta) - f_0)/\varepsilon for i = 1 to k do \theta \leftarrow \theta - \varepsilon \cdot d_i \cdot \text{RANDN}(n, s_i) \text{ if } d_i > 10^{-10} \begin{aligned} r &\leftarrow (y - d^{\top} \mu) / \|d\|, \quad \sigma_e^2 \leftarrow (1 - \alpha) \sigma_e^2 + \alpha r^2 \\ K &\leftarrow \Sigma d / (d^{\top} \Sigma d + \sigma_e^2) \end{aligned} \mu \leftarrow \mu + K(y - d^{\mathsf{T}}\mu), \quad \Sigma \leftarrow \Sigma - Kd^{\mathsf{T}}\Sigma end for for i = 1 to k do \theta \leftarrow \theta - \eta \cdot \mu_i \cdot \text{RANDN}(n, s_i) end for end for return \theta RANDN(n, s): returns n-dim Gaussian vector seeded by s
|
| 12 |
+
|
| 13 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Text]
|
| 14 |
+
Lemma B.1 (Expectation of Direction Derivative). Under Assumption 3.1, the one-sided difference satisfies:
|
| 15 |
+
|
| 16 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Equation]
|
| 17 |
+
\mathbb{E}[\hat{y}(d)] = d^{\mathsf{T}}B^{\mathsf{T}}g + O(\varepsilon L) \tag{17}
|
| 18 |
+
|
| 19 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Text]
|
| 20 |
+
Proof. By \mathbb{E}[\mathcal{L}(\theta;\xi)] = \mathcal{L}(\theta) :
|
| 21 |
+
|
| 22 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Equation]
|
| 23 |
+
\mathbb{E}[\hat{y}(d)] = \frac{\mathcal{L}(\theta_0 + \varepsilon B d) - \mathcal{L}(\theta_0)}{\varepsilon} (18)
|
| 24 |
+
|
| 25 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Text]
|
| 26 |
+
By Taylor expansion at \theta_0 :
|
| 27 |
+
|
| 28 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Equation]
|
| 29 |
+
\mathcal{L}(\theta_0 + \varepsilon Bd) = \mathcal{L}(\theta_0) + \varepsilon \langle \nabla \mathcal{L}(\theta_0), Bd \rangle + \frac{\varepsilon^2}{2} (Bd)^\top H(Bd) + O(\varepsilon^3) (19)
|
| 30 |
+
|
| 31 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Text]
|
| 32 |
+
where H = \nabla^2 \mathcal{L}(\theta_0) is the Hessian.
|
| 33 |
+
|
| 34 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Text]
|
| 35 |
+
Substituting:
|
| 36 |
+
|
| 37 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Equation]
|
| 38 |
+
\mathbb{E}[\hat{y}(d)] = \frac{\varepsilon d^{\top} B^{\top} g + \frac{\varepsilon^2}{2} d^{\top} B^{\top} H B d + O(\varepsilon^3)}{\varepsilon} = d^{\top} B^{\top} g + \frac{\varepsilon}{2} d^{\top} B^{\top} H B d + O(\varepsilon^2) \tag{20}
|
| 39 |
+
|
| 40 |
+
[p. 10 | section: B.1. Auxiliary Lemmas | type: Text]
|
| 41 |
+
By Assumption 3.1, ||H|| \le L , so |d^{\top}B^{\top}HBd| \le L||Bd||^2 . Thus the bias is O(\varepsilon L) .
|
| 42 |
+
|
| 43 |
+
[p. 11 | section: B.1. Auxiliary Lemmas | type: ListGroup]
|
| 44 |
+
Lemma B.2 (Variance of Direction Derivative). Let \Sigma = Cov(\zeta) where \zeta = \nabla \mathcal{L}(\theta; \xi) \nabla \mathcal{L}(\theta) . When using the same mini-batch \xi for both function evaluations: (a) Conditional variance: Var(\hat{y}(d)|B) = (Bd)^{\top}\Sigma(Bd) + O(\varepsilon^4) (b) Unconditional variance: \mathbb{E}_B[Var(\hat{y}(d)|B)] = tr(\Sigma) + O(\varepsilon^4) Proof. Key insight: Using the same mini-batch \xi for both evaluations causes the noise to be correlated, not independent.
|
| 45 |
+
|
| 46 |
+
[p. 11 | section: (a) Conditional variance derivation | type: Text]
|
| 47 |
+
For fixed \xi , Taylor expand the random loss \mathcal{L}(\theta; \xi) at \theta_0 :
|
| 48 |
+
|
| 49 |
+
[p. 11 | section: (a) Conditional variance derivation | type: Equation]
|
| 50 |
+
\mathcal{L}(\theta_0 + \varepsilon Bd; \xi) = \mathcal{L}(\theta_0; \xi) + \varepsilon \langle \nabla \mathcal{L}(\theta_0; \xi), Bd \rangle + O(\varepsilon^2) (21)
|
| 51 |
+
|
| 52 |
+
[p. 11 | section: (a) Conditional variance derivation | type: Text]
|
| 53 |
+
Since both evaluations use the same \xi , the base term \mathcal{L}(\theta_0; \xi) cancels:
|
| 54 |
+
|
| 55 |
+
[p. 11 | section: (a) Conditional variance derivation | type: Equation]
|
| 56 |
+
\hat{y}(d) = \frac{\mathcal{L}(\theta_0 + \varepsilon Bd; \xi) - \mathcal{L}(\theta_0; \xi)}{\varepsilon} = \langle \nabla \mathcal{L}(\theta_0; \xi), Bd \rangle + O(\varepsilon) (22)
|
| 57 |
+
|
| 58 |
+
[p. 11 | section: (a) Conditional variance derivation | type: Text]
|
| 59 |
+
Let \nabla \mathcal{L}(\theta_0; \xi) = \nabla \mathcal{L}(\theta_0) + \zeta where \zeta is zero-mean noise with Cov(\zeta) = \Sigma . Given B:
|
| 60 |
+
|
| 61 |
+
[p. 11 | section: (a) Conditional variance derivation | type: Equation]
|
| 62 |
+
\operatorname{Var}(\hat{y}(d)|B) = \operatorname{Var}(\langle \zeta, Bd \rangle | B) = (Bd)^{\top} \operatorname{Cov}(\zeta)(Bd) = (Bd)^{\top} \Sigma(Bd) + O(\varepsilon^{2}) (23)
|
| 63 |
+
|
| 64 |
+
[p. 11 | section: (b) Unconditional variance derivation | type: Text]
|
| 65 |
+
For coordinate-axis sampling d = e_i , we have Bd = z_i \sim \mathcal{N}(0, I_n) .
|
| 66 |
+
|
| 67 |
+
[p. 11 | section: (b) Unconditional variance derivation | type: Text]
|
| 68 |
+
Taking expectation over B:
|
| 69 |
+
|
| 70 |
+
[p. 11 | section: (b) Unconditional variance derivation | type: Equation]
|
| 71 |
+
\mathbb{E}_B[(Bd)^{\top}\Sigma(Bd)] = \mathbb{E}[z_i^{\top}\Sigma z_i] (24)
|
| 72 |
+
|
| 73 |
+
[p. 11 | section: (b) Unconditional variance derivation | type: Text]
|
| 74 |
+
By the trace trick:
|
| 75 |
+
|
| 76 |
+
[p. 11 | section: (b) Unconditional variance derivation | type: Equation]
|
| 77 |
+
\mathbb{E}[z_i^{\top} \Sigma z_i] = \mathbb{E}[\operatorname{tr}(\Sigma z_i z_i^{\top})] = \operatorname{tr}(\Sigma \cdot \mathbb{E}[z_i z_i^{\top}]) = \operatorname{tr}(\Sigma \cdot I_n) = \operatorname{tr}(\Sigma) (25)
|
| 78 |
+
|
| 79 |
+
[p. 11 | section: (b) Unconditional variance derivation | type: Equation]
|
| 80 |
+
By Assumption 3.2, \operatorname{tr}(\Sigma) = \mathbb{E}[\|\zeta\|^2] \le \sigma_q^2 .
|
| 81 |
+
|
| 82 |
+
[p. 11 | section: (b) Unconditional variance derivation | type: Text]
|
| 83 |
+
Lemma B.3 (High-Dimensional Approximate Orthogonality). Let z_1, \ldots, z_k \stackrel{iid}{\sim} \mathcal{N}(0, I_n) . When n \gg k^2 :
|
| 84 |
+
|
| 85 |
+
[p. 11 | section: (b) Unconditional variance derivation | type: ListGroup]
|
| 86 |
+
(a) ||z_i||^2 = n \pm O(\sqrt{n}) (b) For i \neq j : \frac{z_i^\top z_j}{\|z_i\| \|z_j\|} = O(1/\sqrt{n})
|
| 87 |
+
|
| 88 |
+
[p. 11 | section: Proof. (a) Norm concentration | type: Text]
|
| 89 |
+
Since ||z_i||^2 = \sum_{j=1}^n z_{ij}^2 \sim \chi^2(n) , we have:
|
| 90 |
+
|
| 91 |
+
[p. 11 | section: Proof. (a) Norm concentration | type: Equation]
|
| 92 |
+
\mathbb{E}[\|z_i\|^2] = n, \quad \text{Var}(\|z_i\|^2) = 2n \tag{26}
|
| 93 |
+
|
| 94 |
+
[p. 11 | section: Proof. (a) Norm concentration | type: Text]
|
| 95 |
+
By Chebyshev inequality or sub-Gaussian concentration:
|
| 96 |
+
|
| 97 |
+
[p. 11 | section: Proof. (a) Norm concentration | type: Equation]
|
| 98 |
+
\mathbb{P}\left(\left|\|z_i\|^2 - n\right| > t\sqrt{n}\right) \le 2e^{-ct^2} \tag{27}
|
| 99 |
+
|
| 100 |
+
[p. 11 | section: Proof. (a) Norm concentration | type: Text]
|
| 101 |
+
Thus ||z_i||^2 = n \pm O(\sqrt{n}) with high probability.
|
| 102 |
+
|
| 103 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 104 |
+
605 For independent z i , z j ∼ N (0, In), the inner product z ⊤ i z j = P n l =1 zilzjl is a sum of n independent random variables with:
|
| 105 |
+
|
| 106 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 107 |
+
\mathbb{E}[z_i^\top z_j] = 0, \quad \operatorname{Var}(z_i^\top z_j) = \sum_{l=1}^n \operatorname{Var}(z_{il} z_{jl}) = n (28)
|
| 108 |
+
|
| 109 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 110 |
+
Thus z ⊤ i z j = O( √ n) with high probability. Since ∥zi∥∥zj∥ = O(n):
|
| 111 |
+
|
| 112 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 113 |
+
\cos \theta_{ij} = \frac{z_i^\top z_j}{\|z_i\| \|z_j\|} = O\left(\frac{\sqrt{n}}{n}\right) = O\left(\frac{1}{\sqrt{n}}\right) \to 0 (29)
|
| 114 |
+
|
| 115 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 116 |
+
This shows that random Gaussian vectors are approximately orthogonal in high dimensions.
|
| 117 |
+
|
| 118 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 119 |
+
Lemma B.4 (Isserlis' Theorem Application). For z ∼ N (0, In) and symmetric matrices A, B :
|
| 120 |
+
|
| 121 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 122 |
+
\mathbb{E}[(z^{\top}Az)(z^{\top}Bz)] = tr(A)tr(B) + 2tr(AB) (30)
|
| 123 |
+
|
| 124 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 125 |
+
In particular, for A = I n and B = Σ :
|
| 126 |
+
|
| 127 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 128 |
+
\mathbb{E}[\|z\|^2 \cdot z^{\mathsf{T}} \Sigma z] = (n+2) tr(\Sigma) \tag{31}
|
| 129 |
+
|
| 130 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 131 |
+
Proof. By Isserlis' theorem (Wick's theorem), for z ∼ N (0, In):
|
| 132 |
+
|
| 133 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 134 |
+
\mathbb{E}[z_i z_j z_k z_l] = \delta_{ij} \delta_{kl} + \delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk} (32)
|
| 135 |
+
|
| 136 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 137 |
+
Expanding the quadratic forms:
|
| 138 |
+
|
| 139 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 140 |
+
(z^{\top}Az)(z^{\top}Bz) = \sum_{i,j,k,l} A_{ij}B_{kl}z_iz_jz_kz_l (33)
|
| 141 |
+
|
| 142 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 143 |
+
Taking expectation:
|
| 144 |
+
|
| 145 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 146 |
+
\mathbb{E}[(z^{\top}Az)(z^{\top}Bz)] = \sum_{i,j,k,l} A_{ij}B_{kl}(\delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) (34)
|
| 147 |
+
|
| 148 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 149 |
+
= \sum_{i,k} A_{ii} B_{kk} + \sum_{i,j} A_{ij} B_{ij} + \sum_{i,j} A_{ij} B_{ji} (35)
|
| 150 |
+
|
| 151 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 152 |
+
= \operatorname{tr}(A)\operatorname{tr}(B) + \operatorname{tr}(AB) + \operatorname{tr}(AB^{\top}) (36)
|
| 153 |
+
|
| 154 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 155 |
+
For symmetric A, B: tr(AB ⊤ ) = tr(AB), thus:
|
| 156 |
+
|
| 157 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 158 |
+
\mathbb{E}[(z^{\top}Az)(z^{\top}Bz)] = \operatorname{tr}(A)\operatorname{tr}(B) + 2\operatorname{tr}(AB) (37)
|
| 159 |
+
|
| 160 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Text]
|
| 161 |
+
Setting A = In, B = Σ:
|
| 162 |
+
|
| 163 |
+
[p. 12 | section: (b) Approximate orthogonality | type: Equation]
|
| 164 |
+
\mathbb{E}[\|z\|^2 \cdot z^{\top} \Sigma z] = n \cdot \operatorname{tr}(\Sigma) + 2\operatorname{tr}(\Sigma) = (n+2)\operatorname{tr}(\Sigma) (38)
|
| 165 |
+
|
| 166 |
+
[p. 12 | section: B.2. Noise Variance Justification | type: Text]
|
| 167 |
+
The observation model yˆ(d) = d ⊤ g˜ + ν with ν ∼ N (0, σ 2 e ∥d∥ 2 ) is justified as follows.
|
| 168 |
+
|
| 169 |
+
[p. 12 | section: B.2. Noise Variance Justification | type: Text]
|
| 170 |
+
Lemma B.5 (Effective Noise Decomposition). The effective noise variance decomposes as:
|
| 171 |
+
|
| 172 |
+
[p. 12 | section: B.2. Noise Variance Justification | type: Equation]
|
| 173 |
+
\sigma_e^2 = \sigma_\varepsilon^2 + tr(\Sigma) \tag{39}
|
| 174 |
+
|
| 175 |
+
[p. 12 | section: B.2. Noise Variance Justification | type: Text]
|
| 176 |
+
where σ 2 ε is the finite-difference approximation error and tr (Σ) is the gradient noise variance.
|
| 177 |
+
|
| 178 |
+
[p. 13 | section: Proof. Step 1: Decomposition of the observation. | type: Text]
|
| 179 |
+
For coordinate-axis sampling with d = e_i , the direction in parameter space is z_i = Bd = Be_i (the i -th column of B ), where z_i \sim \mathcal{N}(0, I_n) .
|
| 180 |
+
|
| 181 |
+
[p. 13 | section: Proof. Step 1: Decomposition of the observation. | type: Text]
|
| 182 |
+
The observation can be decomposed as:
|
| 183 |
+
|
| 184 |
+
[p. 13 | section: Proof. Step 1: Decomposition of the observation. | type: Equation]
|
| 185 |
+
y_i = \underbrace{z_i^{\mathsf{T}} g}_{\text{true signal gradient noise}} + \underbrace{z_i^{\mathsf{T}} \zeta}_{\text{finite-diff error}} + \underbrace{\epsilon_i}_{\text{finite-diff error}} \tag{40}
|
| 186 |
+
|
| 187 |
+
[p. 13 | section: Proof. Step 1: Decomposition of the observation. | type: Text]
|
| 188 |
+
where:
|
| 189 |
+
|
| 190 |
+
[p. 13 | section: Proof. Step 1: Decomposition of the observation. | type: ListGroup]
|
| 191 |
+
q = \nabla \mathcal{L}(\theta) is the true gradient \zeta = \nabla \mathcal{L}(\theta; \xi) \nabla \mathcal{L}(\theta) is the stochastic gradient noise with \mathbb{E}[\zeta] = 0 and \text{Cov}(\zeta) = \Sigma \epsilon_i \sim \mathcal{N}(0, \sigma_{\varepsilon}^2) is the finite-difference truncation error, independent of \zeta and z_i
|
| 192 |
+
|
| 193 |
+
[p. 13 | section: Step 2: Identifying the noise term. | type: Text]
|
| 194 |
+
The observation noise is defined as \nu_i := y_i - z_i^{\top} g = z_i^{\top} \zeta + \epsilon_i .
|
| 195 |
+
|
| 196 |
+
[p. 13 | section: Step 2: Identifying the noise term. | type: Text]
|
| 197 |
+
Since \mathbb{E}[\zeta] = 0 and \mathbb{E}[\epsilon_i] = 0 , we have \mathbb{E}[\nu_i | z_i] = 0 .
|
| 198 |
+
|
| 199 |
+
[p. 13 | section: Step 3: Conditional variance (given z_i ). | type: Text]
|
| 200 |
+
Since \zeta and \epsilon_i are independent:
|
| 201 |
+
|
| 202 |
+
[p. 13 | section: Step 3: Conditional variance (given z_i ). | type: Equation]
|
| 203 |
+
Var(\nu_i|z_i) = Var(z_i^{\top} \zeta|z_i) + Var(\epsilon_i) = z_i^{\top} \Sigma z_i + \sigma_{\varepsilon}^2 (41)
|
| 204 |
+
|
| 205 |
+
[p. 13 | section: Step 4: Unconditional variance (taking expectation over z_i ). | type: Text]
|
| 206 |
+
Using the trace trick from Lemma B.2(b):
|
| 207 |
+
|
| 208 |
+
[p. 13 | section: Step 4: Unconditional variance (taking expectation over z_i ). | type: Equation]
|
| 209 |
+
\mathbb{E}_{z_i}[z_i^{\top} \Sigma z_i] = \mathbb{E}[\operatorname{tr}(\Sigma z_i z_i^{\top})] = \operatorname{tr}(\Sigma \cdot \mathbb{E}[z_i z_i^{\top}]) = \operatorname{tr}(\Sigma \cdot I_n) = \operatorname{tr}(\Sigma) (42)
|
| 210 |
+
|
| 211 |
+
[p. 13 | section: Step 4: Unconditional variance (taking expectation over z_i ). | type: Text]
|
| 212 |
+
Therefore, the effective noise variance is:
|
| 213 |
+
|
| 214 |
+
[p. 13 | section: Step 4: Unconditional variance (taking expectation over z_i ). | type: Equation]
|
| 215 |
+
\sigma_{\varepsilon}^{2} := \mathbb{E}_{z_{i}}[\operatorname{Var}(\nu_{i}|z_{i})] = \operatorname{tr}(\Sigma) + \sigma_{\varepsilon}^{2} \tag{43}
|
| 216 |
+
|
| 217 |
+
[p. 13 | section: Step 4: Unconditional variance (taking expectation over z_i ). | type: Equation]
|
| 218 |
+
By Assumption 3.2, \operatorname{tr}(\Sigma) = \mathbb{E}[\|\zeta\|^2] \leq \sigma_g^2 , so \sigma_e^2 \leq \sigma_g^2 + \sigma_\varepsilon^2 .
|
| 219 |
+
|
| 220 |
+
[p. 13 | section: B.3. Proof of Main Convergence Theorem | type: Text]
|
| 221 |
+
Proof of Theorem 4.2. Step 1: Single-step descent.
|
| 222 |
+
|
| 223 |
+
[p. 13 | section: B.3. Proof of Main Convergence Theorem | type: Text]
|
| 224 |
+
By Assumption 3.1 ( L -smoothness):
|
| 225 |
+
|
| 226 |
+
[p. 13 | section: B.3. Proof of Main Convergence Theorem | type: Equation]
|
| 227 |
+
\mathcal{L}(\theta_{t+1}) \le \mathcal{L}(\theta_t) + \langle g_t, \Delta \theta_t \rangle + \frac{L}{2} ||\Delta \theta_t||^2 (44)
|
| 228 |
+
|
| 229 |
+
[p. 13 | section: B.3. Proof of Main Convergence Theorem | type: Text]
|
| 230 |
+
where g_t = \nabla \mathcal{L}(\theta_t) and \Delta \theta_t = -\eta B_t \mu_t^{(k)} .
|
| 231 |
+
|
| 232 |
+
[p. 13 | section: Step 2: Inner product term. | type: Text]
|
| 233 |
+
By Lemma B.2, the observation model is y_i = z_i^\top g_t + z_i^\top \zeta + \epsilon_i , where \zeta is gradient noise with \text{Cov}(\zeta) = \Sigma , and \epsilon_i \sim \mathcal{N}(0, \sigma_{\varepsilon}^2) is finite-difference error.
|
| 234 |
+
|
| 235 |
+
[p. 13 | section: Step 2: Inner product term. | type: Text]
|
| 236 |
+
Let \tilde{g} = g_t + \zeta and \epsilon = [\epsilon_1, \dots, \epsilon_k]^{\top} . Then:
|
| 237 |
+
|
| 238 |
+
[p. 13 | section: Step 2: Inner product term. | type: Equation]
|
| 239 |
+
Y = B_t^{\top} \tilde{g} + \epsilon, \quad \mu_t^{(k)} = \gamma Y = \gamma (B_t^{\top} \tilde{g} + \epsilon) (45)
|
| 240 |
+
|
| 241 |
+
[p. 14 | section: Step 2: Inner product term. | type: Text]
|
| 242 |
+
The parameter update becomes:
|
| 243 |
+
|
| 244 |
+
[p. 14 | section: Step 2: Inner product term. | type: Equation]
|
| 245 |
+
B_t \mu_t^{(k)} = \gamma B_t B_t^{\top} (g_t + \zeta) + \gamma \sum_{i=1}^k z_i \epsilon_i \tag{46}
|
| 246 |
+
|
| 247 |
+
[p. 14 | section: Step 2: Inner product term. | type: Text]
|
| 248 |
+
Computing the expectation of the inner product. Since \mathbb{E}[\zeta] = 0 , \mathbb{E}[\epsilon_i] = 0 , and \zeta , \epsilon_i are independent of B_t :
|
| 249 |
+
|
| 250 |
+
[p. 14 | section: Step 2: Inner product term. | type: Equation]
|
| 251 |
+
\mathbb{E}[\langle g_t, B_t \mu_t^{(k)} \rangle | \theta_t] = \gamma \mathbb{E}[g_t^\top B_t B_t^\top g_t] + \gamma \mathbb{E}[g_t^\top B_t B_t^\top \zeta] + \gamma \sum_{i=1}^k \mathbb{E}[(g_t^\top z_i) \epsilon_i] (47)
|
| 252 |
+
|
| 253 |
+
[p. 14 | section: Step 2: Inner product term. | type: Text]
|
| 254 |
+
The second term: \mathbb{E}[g_t^{\top} B_t B_t^{\top} \zeta] = g_t^{\top} \mathbb{E}[B_t B_t^{\top}] \mathbb{E}[\zeta] = 0.
|
| 255 |
+
|
| 256 |
+
[p. 14 | section: Step 2: Inner product term. | type: Text]
|
| 257 |
+
The third term: \mathbb{E}[(g_t^{\top} z_i) \epsilon_i] = \mathbb{E}[g_t^{\top} z_i] \mathbb{E}[\epsilon_i] = 0 (independence).
|
| 258 |
+
|
| 259 |
+
[p. 14 | section: Step 2: Inner product term. | type: Text]
|
| 260 |
+
The first term: \mathbb{E}[g_t^{\top} B_t B_t^{\top} g_t] = \mathbb{E}[\|B_t^{\top} g_t\|^2] = \sum_{i=1}^k \mathbb{E}[(z_i^{\top} g_t)^2] = k \|g_t\|^2 .
|
| 261 |
+
|
| 262 |
+
[p. 14 | section: Step 2: Inner product term. | type: Text]
|
| 263 |
+
Therefore:
|
| 264 |
+
|
| 265 |
+
[p. 14 | section: Step 2: Inner product term. | type: Equation]
|
| 266 |
+
\mathbb{E}[\langle g_t, \Delta \theta_t \rangle | \theta_t] = -\eta \gamma k \|g_t\|^2 \tag{48}
|
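The identity \mathbb{E}[\|B_t^{\top} g_t\|^2] = k\|g_t\|^2 used above is easy to confirm numerically. The following sketch (dimensions, seed, and trial count are arbitrary, not from the paper) averages over random Gaussian bases:

    import numpy as np

    rng = np.random.default_rng(1)
    n, k, trials = 40, 8, 20_000
    g = rng.standard_normal(n)

    B = rng.standard_normal((trials, n, k))         # one basis per trial
    proj = np.einsum('tnk,n->tk', B, g)             # B^T g for each trial
    print((proj**2).sum(axis=1).mean(), k * (g @ g))  # should agree: k * ||g||^2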
| 267 |
+
|
| 268 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 269 |
+
From Step 2, B_t \mu_t^{(k)} = \gamma B_t B_t^{\top} \tilde{g} + \gamma \sum_i z_i \epsilon_i . Thus:
|
| 270 |
+
|
| 271 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 272 |
+
||B_t \mu_t^{(k)}||^2 = \gamma^2 ||B_t B_t^\top \tilde{g}||^2 + \gamma^2 \left\| \sum_i z_i \epsilon_i \right\|^2 + 2\gamma^2 \left\langle B_t B_t^\top \tilde{g}, \sum_i z_i \epsilon_i \right\rangle (49)
|
| 273 |
+
|
| 274 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 275 |
+
Cross term vanishes: Since \epsilon_i is independent of B_t and \tilde{g} , and \mathbb{E}[\epsilon_i] = 0 :
|
| 276 |
+
|
| 277 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 278 |
+
\mathbb{E}\left[\left\langle B_t B_t^{\top} \tilde{g}, \sum_i z_i \epsilon_i \right\rangle\right] = \sum_i \mathbb{E}[\epsilon_i] \cdot \mathbb{E}[\left\langle B_t B_t^{\top} \tilde{g}, z_i \right\rangle] = 0 \tag{50}
|
| 279 |
+
|
| 280 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 281 |
+
First term: We compute \mathbb{E}[\|B_tB_t^{\top}\tilde{g}\|^2] by first conditioning on B_t (taking the expectation over \zeta ), then taking the expectation over B_t .
|
| 282 |
+
|
| 283 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 284 |
+
(A) Given B_t , taking expectation over \zeta (using \mathbb{E}[\zeta] = 0 ):
|
| 285 |
+
|
| 286 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 287 |
+
\mathbb{E}[\|B_t B_t^{\top} \tilde{g}\|^2 | B_t] = \|B_t B_t^{\top} g_t\|^2 + \mathbb{E}[\|B_t B_t^{\top} \zeta\|^2 | B_t] (51)
|
| 288 |
+
|
| 289 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 290 |
+
where \mathbb{E}[\|B_t B_t^\top \zeta\|^2 | B_t] = \operatorname{tr}((B_t B_t^\top)^2 \Sigma).
|
| 291 |
+
|
| 292 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 293 |
+
(B) Taking expectation over B_t . For \mathbb{E}[\|B_t B_t^\top g\|^2] :
|
| 294 |
+
|
| 295 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 296 |
+
||B_t B_t^{\top} g||^2 = \sum_{i,j=1}^k (z_i^{\top} g) (z_j^{\top} g) (z_i^{\top} z_j) (52)
|
| 297 |
+
|
| 298 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 299 |
+
Diagonal terms (i = j): \mathbb{E}[(z_i^\top g)^2 ||z_i||^2] = (n+2)||g||^2 (by Lemma B.4).
|
| 300 |
+
|
| 301 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 302 |
+
Off-diagonal terms (i \neq j) : By independence of z_i and z_j :
|
| 303 |
+
|
| 304 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 305 |
+
\mathbb{E}[(z_i^\top g)(z_j^\top g)(z_i^\top z_j)] = \sum_{a,b,c} g_a g_b \mathbb{E}[(z_i)_a(z_i)_c] \mathbb{E}[(z_j)_b(z_j)_c] (53)
|
| 306 |
+
|
| 307 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 308 |
+
= \sum_{a,b,c} g_a g_b \delta_{ac} \delta_{bc} = \sum_c g_c^2 = ||g||^2 (54)
|
| 309 |
+
|
| 310 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 311 |
+
Thus:
|
| 312 |
+
|
| 313 |
+
[p. 14 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 314 |
+
\mathbb{E}[\|B_t B_t^\top g\|^2] = k(n+2)\|g\|^2 + k(k-1)\|g\|^2 = k(n+k+1)\|g\|^2 = k\tilde{n}\|g\|^2 (55)
|
| 315 |
+
|
| 316 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 317 |
+
For \mathbb{E}[\operatorname{tr}((B_t B_t^{\top})^2 \Sigma)] , let P = B_t B_t^{\top} :
|
| 318 |
+
|
| 319 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 320 |
+
\operatorname{tr}(P^{2}\Sigma) = \sum_{i,j} (z_{i}^{\top} z_{j})(z_{i}^{\top} \Sigma z_{j}) \tag{56}
|
| 321 |
+
|
| 322 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 323 |
+
Diagonal (i = j): By Lemma B.4, \mathbb{E}[||z||^2 \cdot z^{\top} \Sigma z] = (n + 2) \operatorname{tr}(\Sigma) .
|
| 324 |
+
|
| 325 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 326 |
+
Off-diagonal (i \neq j) : By independence of z_i and z_j :
|
| 327 |
+
|
| 328 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 329 |
+
\mathbb{E}[(z_i^\top z_j)(z_i^\top \Sigma z_j)] = \sum_{a,b,c} \Sigma_{bc} \mathbb{E}[(z_i)_a(z_i)_b] \mathbb{E}[(z_j)_a(z_j)_c] (57)
|
| 330 |
+
|
| 331 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 332 |
+
= \sum_{a,b,c} \Sigma_{bc} \delta_{ab} \delta_{ac} = \sum_{a} \Sigma_{aa} = \text{tr}(\Sigma) (58)
|
| 333 |
+
|
| 334 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 335 |
+
Thus:
|
| 336 |
+
|
| 337 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 338 |
+
\mathbb{E}[\operatorname{tr}(P^2\Sigma)] = k(n+2)\operatorname{tr}(\Sigma) + k(k-1)\operatorname{tr}(\Sigma) = k\tilde{n} \cdot \operatorname{tr}(\Sigma) (59)
|
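Both fourth-moment identities, (55) and (59), can be checked with a short Monte Carlo script; the factor k\tilde{n} = k(n+k+1) appears in each. Dimensions, covariance, and trial count below are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    n, k, trials = 20, 4, 20_000
    n_tilde = n + k + 1
    g = rng.standard_normal(n)
    A = rng.standard_normal((n, n))
    Sigma = A @ A.T / n                      # arbitrary PSD covariance

    m1 = m2 = 0.0
    for _ in range(trials):
        B = rng.standard_normal((n, k))
        P = B @ B.T
        m1 += np.linalg.norm(P @ g) ** 2
        m2 += np.trace(P @ P @ Sigma)
    print(m1 / trials, k * n_tilde * (g @ g))          # Eq. (55)
    print(m2 / trials, k * n_tilde * np.trace(Sigma))  # Eq. (59)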
| 339 |
+
|
| 340 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 341 |
+
Second term: For the finite-difference noise:
|
| 342 |
+
|
| 343 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 344 |
+
\mathbb{E}\left[\left\|\sum_{i=1}^{k} z_{i} \epsilon_{i}\right\|^{2}\right] = \sum_{i,j} \mathbb{E}[\epsilon_{i} \epsilon_{j}] \mathbb{E}[z_{i}^{\top} z_{j}] = \sum_{i} \sigma_{\varepsilon}^{2} \cdot n = kn\sigma_{\varepsilon}^{2} (60)
|
| 345 |
+
|
| 346 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Text]
|
| 347 |
+
Total second moment:
|
| 348 |
+
|
| 349 |
+
[p. 15 | section: Step 3: Second moment (detailed computation). | type: Equation]
|
| 350 |
+
\mathbb{E}[\|B_t \mu_t^{(k)}\|^2] = \gamma^2 k \left( \tilde{n}(\|g_t\|^2 + \operatorname{tr}(\Sigma)) + n\sigma_{\varepsilon}^2 \right) (61)
|
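The total second moment (61) can also be validated end-to-end by simulating the update B_t\mu_t^{(k)} = \gamma B_t B_t^{\top}(g_t+\zeta) + \gamma\sum_i z_i\epsilon_i directly. All constants in this sketch are placeholders chosen for illustration:

    import numpy as np

    rng = np.random.default_rng(3)
    n, k, trials = 20, 4, 20_000
    n_tilde = n + k + 1
    gamma, sigma_eps = 0.7, 0.3          # hypothetical shrinkage / noise scale
    g = rng.standard_normal(n)
    A = rng.standard_normal((n, n))
    Sigma = A @ A.T / n
    C = np.linalg.cholesky(Sigma)        # C @ xi has covariance Sigma

    acc = 0.0
    for _ in range(trials):
        B = rng.standard_normal((n, k))
        zeta = C @ rng.standard_normal(n)            # gradient noise
        eps = sigma_eps * rng.standard_normal(k)     # finite-difference noise
        acc += np.linalg.norm(gamma * (B @ (B.T @ (g + zeta)) + B @ eps)) ** 2
    theory = gamma**2 * k * (n_tilde * (g @ g + np.trace(Sigma)) + n * sigma_eps**2)
    print(acc / trials, theory)                      # Eq. (61)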
| 351 |
+
|
| 352 |
+
[p. 15 | section: Step 4: Combining. | type: Text]
|
| 353 |
+
The second moment of the update therefore scales (61) by \eta^2 :
|
| 354 |
+
|
| 355 |
+
[p. 15 | section: Step 4: Combining. | type: Equation]
|
| 356 |
+
\mathbb{E}[\|\Delta\theta_t\|^2] = \eta^2 \gamma^2 k\tilde{n} \|g_t\|^2 + \eta^2 \gamma^2 k(\tilde{n} \cdot \text{tr}(\Sigma) + n\sigma_{\varepsilon}^2) (62)
|
| 357 |
+
|
| 358 |
+
[p. 15 | section: Step 4: Combining. | type: Text]
|
| 359 |
+
Substituting (48) and (62) into the descent inequality (44):
|
| 360 |
+
|
| 361 |
+
[p. 15 | section: Step 4: Combining. | type: Equation]
|
| 362 |
+
\mathbb{E}[\mathcal{L}(\theta_{t+1})] \le \mathbb{E}[\mathcal{L}(\theta_t)] - \eta \gamma k \|g_t\|^2 + \frac{L\eta^2 \gamma^2 k}{2} \left[ \tilde{n} \|g_t\|^2 + \tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2 \right] (63)
|
| 363 |
+
|
| 364 |
+
[p. 15 | section: Step 4: Combining. | type: Text]
|
| 365 |
+
Collecting terms in ||g_t||^2 :
|
| 366 |
+
|
| 367 |
+
[p. 15 | section: Step 4: Combining. | type: Equation]
|
| 368 |
+
\mathbb{E}[\mathcal{L}(\theta_{t+1})] \leq \mathbb{E}[\mathcal{L}(\theta_t)] - \eta \gamma k \underbrace{\left(1 - \frac{L\eta \gamma \tilde{n}}{2}\right)}_{:=\beta(\eta)} \|g_t\|^2 + \frac{L\eta^2 \gamma^2 k(\tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2)}{2} (64)
|
| 369 |
+
|
| 370 |
+
[p. 15 | section: Step 5: Telescoping sum. | type: Text]
|
| 371 |
+
When \eta < \frac{2}{L\gamma\tilde{n}} , we have \beta(\eta) > 0 . Rearranging:
|
| 372 |
+
|
| 373 |
+
[p. 15 | section: Step 5: Telescoping sum. | type: Equation]
|
| 374 |
+
\|g_t\|^2 \le \frac{1}{\beta(\eta)\eta\gamma k} \left( \mathcal{L}(\theta_t) - \mathcal{L}(\theta_{t+1}) \right) + \frac{L\eta\gamma(\tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2)}{2\beta(\eta)} (65)
|
| 375 |
+
|
| 376 |
+
[p. 15 | section: Step 5: Telescoping sum. | type: Text]
|
| 377 |
+
Summing over t = 0, ..., T - 1 and dividing by T:
|
| 378 |
+
|
| 379 |
+
[p. 15 | section: Step 5: Telescoping sum. | type: Equation]
|
| 380 |
+
\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|g_t\|^2] \le \frac{\mathcal{L}(\theta_0) - \mathcal{L}^*}{\beta(\eta)\eta\gamma kT} + \frac{L\eta\gamma(\tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2)}{2\beta(\eta)} (66)
|
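To make the trade-off in (66) concrete, the bound can be evaluated as a function of T : the first term decays as 1/T while the second is a constant noise floor. The helper below is a minimal sketch; every constant passed in is a hypothetical placeholder, not a tuned value from the experiments.

    import numpy as np

    def bszo_bound(T, n, k, L, gamma, eta, delta0, tr_Sigma, sigma_eps2):
        """Right-hand side of Eq. (66) for given problem constants."""
        n_tilde = n + k + 1
        beta = 1.0 - L * eta * gamma * n_tilde / 2.0
        assert beta > 0, "requires eta < 2 / (L * gamma * n_tilde)"
        opt_term = delta0 / (beta * eta * gamma * k * T)
        noise_floor = L * eta * gamma * (n_tilde * tr_Sigma + n * sigma_eps2) / (2 * beta)
        return opt_term + noise_floor

    for T in (10**2, 10**4, 10**6):   # the 1/T term vanishes, the floor remains
        print(T, bszo_bound(T, n=10**4, k=8, L=1.0, gamma=0.5,
                            eta=1e-5, delta0=5.0, tr_Sigma=1.0, sigma_eps2=0.1))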
| 381 |
+
|
| 382 |
+
[p. 16 | section: B.4. Proof of Expected Update Direction | type: Text]
|
| 383 |
+
Proof of Theorem 3.7. Step 1: Posterior mean unbiasedness.
|
| 384 |
+
|
| 385 |
+
[p. 16 | section: B.4. Proof of Expected Update Direction | type: Text]
|
| 386 |
+
By Corollary 3.6, for coordinate-axis sampling (d^{(i)} = e_i) , the posterior mean is:
|
| 387 |
+
|
| 388 |
+
[p. 16 | section: B.4. Proof of Expected Update Direction | type: Equation]
|
| 389 |
+
\mu^{(k)} = \gamma Y = \gamma [y^{(1)}, \dots, y^{(k)}]^{\top} (67)
|
| 390 |
+
|
| 391 |
+
[p. 16 | section: B.4. Proof of Expected Update Direction | type: Text]
|
| 392 |
+
where \gamma = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_e^2} .
|
| 393 |
+
|
| 394 |
+
[p. 16 | section: B.4. Proof of Expected Update Direction | type: Text]
|
| 395 |
+
Each observation satisfies y^{(i)} = e_i^{\top} \tilde{g} + \nu^{(i)} = \tilde{g}_i + \nu^{(i)} , where \tilde{g} = B^{\top} g is the true normalized subspace gradient and \nu^{(i)} is zero-mean noise.
|
| 396 |
+
|
| 397 |
+
[p. 16 | section: B.4. Proof of Expected Update Direction | type: Text]
|
| 398 |
+
Taking conditional expectation given B and g (so \tilde{g}^* = B^{\top}g is fixed):
|
| 399 |
+
|
| 400 |
+
[p. 16 | section: B.4. Proof of Expected Update Direction | type: Equation]
|
| 401 |
+
\mathbb{E}[y^{(i)}|B,g] = \tilde{g}_i^* + \mathbb{E}[\nu^{(i)}] = \tilde{g}_i^* (68)
|
| 402 |
+
|
| 403 |
+
[p. 16 | section: B.4. Proof of Expected Update Direction | type: Text]
|
| 404 |
+
Thus:
|
| 405 |
+
|
| 406 |
+
[p. 16 | section: B.4. Proof of Expected Update Direction | type: Equation]
|
| 407 |
+
\mathbb{E}[\mu^{(k)}|B,g] = \gamma \mathbb{E}[Y|B,g] = \gamma \tilde{g}^* = \gamma B^\top g \tag{69}
|
| 408 |
+
|
| 409 |
+
[p. 16 | section: Step 2: Conditional expectation of update. | type: Text]
|
| 410 |
+
The parameter update is \Delta\theta = -\eta B\mu^{(k)} . Taking conditional expectation:
|
| 411 |
+
|
| 412 |
+
[p. 16 | section: Step 2: Conditional expectation of update. | type: Equation]
|
| 413 |
+
\mathbb{E}[\Delta \theta | B] = -\eta B \mathbb{E}[\mu^{(k)} | B] = -\eta \gamma B B^{\mathsf{T}} g \tag{70}
|
| 414 |
+
|
| 415 |
+
[p. 16 | section: Step 3: Expectation over subspace basis. | type: Text]
|
| 416 |
+
Taking expectation over B = [z_1, \dots, z_k] where z_i \stackrel{iid}{\sim} \mathcal{N}(0, I_n) :
|
| 417 |
+
|
| 418 |
+
[p. 16 | section: Step 3: Expectation over subspace basis. | type: Equation]
|
| 419 |
+
\mathbb{E}[\Delta \theta] = -\eta \gamma \mathbb{E}[BB^{\top}]g \tag{71}
|
| 420 |
+
|
| 421 |
+
[p. 16 | section: Step 3: Expectation over subspace basis. | type: Text]
|
| 422 |
+
Computing \mathbb{E}[BB^{\top}] :
|
| 423 |
+
|
| 424 |
+
[p. 16 | section: Step 3: Expectation over subspace basis. | type: Equation]
|
| 425 |
+
BB^{\top} = \sum_{i=1}^{k} z_i z_i^{\top} \tag{72}
|
| 426 |
+
|
| 427 |
+
[p. 16 | section: Step 3: Expectation over subspace basis. | type: Equation]
|
| 428 |
+
\mathbb{E}[BB^{\top}] = \sum_{i=1}^{k} \mathbb{E}[z_i z_i^{\top}] = \sum_{i=1}^{k} I_n = kI_n (73)
|
| 429 |
+
|
| 430 |
+
[p. 16 | section: Step 3: Expectation over subspace basis. | type: Text]
|
| 431 |
+
Therefore:
|
| 432 |
+
|
| 433 |
+
[p. 16 | section: Step 3: Expectation over subspace basis. | type: Equation]
|
| 434 |
+
\mathbb{E}[\Delta \theta] = -\eta \gamma k \cdot I_n \cdot g = -\eta \gamma k \cdot \nabla \mathcal{L}(\theta_0) \tag{74}
|
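The conclusion \mathbb{E}[\Delta\theta] = -\eta\gamma k \cdot g rests on \mathbb{E}[BB^{\top}] = kI_n , which a quick simulation confirms. The sketch below uses arbitrary dimensions and an arbitrary shrinkage factor for illustration:

    import numpy as np

    rng = np.random.default_rng(4)
    n, k, trials = 15, 4, 100_000
    eta, gamma = 0.1, 0.6                      # placeholder step size / shrinkage
    g = rng.standard_normal(n)

    B = rng.standard_normal((trials, n, k))
    # E[Delta theta | B] = -eta * gamma * B B^T g, averaged over random bases B.
    upd = -eta * gamma * np.einsum('tnk,tmk,m->tn', B, B, g)
    err = np.abs(upd.mean(axis=0) - (-eta * gamma * k * g))
    print(err.max())   # small relative to eta * gamma * k * ||g||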
| 435 |
+
|
| 436 |
+
[p. 16 | section: Step 4: Higher-order bias. | type: Text]
|
| 437 |
+
By Lemma B.1, the finite-difference estimator has O(\varepsilon L) bias. After multiplication by \varepsilon in the update, this becomes O(\varepsilon^2 L) . Since \varepsilon is typically small ( \sim 10^{-3} ), this bias is negligible, and we write:
|
| 438 |
+
|
| 439 |
+
[p. 16 | section: Step 4: Higher-order bias. | type: Equation]
|
| 440 |
+
\mathbb{E}[\Delta\theta] = -\eta \gamma k \cdot \nabla \mathcal{L}(\theta_0) + O(\varepsilon^2 L) \tag{75}
|
| 441 |
+
|
| 442 |
+
[p. 16 | section: Step 4: Higher-order bias. | type: Text]
|
| 443 |
+
This proves that the expected update direction aligns with the negative gradient, with effective learning rate \eta_{eff} = \eta \gamma k . \Box
|
| 444 |
+
|
| 445 |
+
[p. 16 | section: B.5. Adaptive Sampling Analysis | type: Text]
|
| 446 |
+
Theorem B.6 (Conditional Unbiasedness of Posterior Mean under Adaptive Sampling). Let \mu^{(m)} denote the posterior mean after m adaptive sampling steps. Given the subspace basis B and the true gradient g, for any adaptive sampling strategy \pi (where d^{(j)} is \mathcal{D}_{j-1} -measurable), we have:
|
| 447 |
+
|
| 448 |
+
[p. 16 | section: B.5. Adaptive Sampling Analysis | type: Equation]
|
| 449 |
+
\mathbb{E}[\mu^{(m)}|B,g] = \mathbb{E}\left[\Sigma^{(m)}D_m^{\top}R_m^{-1}D_m \mid B\right]\tilde{g}^* = \mathbb{E}[\Gamma_m|B] \cdot \tilde{g}^* (76)
|
| 450 |
+
|
| 451 |
+
[p. 17 | section: B.5. Adaptive Sampling Analysis | type: Text]
|
| 452 |
+
In particular, if \Sigma^{(m)} is deterministic given B (e.g., coordinate-axis sampling or any strategy that depends only on \mathcal{D}_{m-1} ), then:
|
| 453 |
+
|
| 454 |
+
[p. 17 | section: B.5. Adaptive Sampling Analysis | type: Equation]
|
| 455 |
+
\mathbb{E}[\mu^{(m)}|B, g, \mathcal{D}_m] = \Gamma_m \cdot \tilde{g}^* \tag{77}
|
| 456 |
+
|
| 457 |
+
[p. 17 | section: B.5. Adaptive Sampling Analysis | type: Text]
|
| 458 |
+
where \Gamma_m := I_k - \sigma_p^{-2} \Sigma^{(m)} is the shrinkage matrix.
|
| 459 |
+
|
| 460 |
+
[p. 17 | section: Proof. Step 1: Expression for the posterior mean. | type: Text]
|
| 461 |
+
By the standard Bayesian linear regression formula:
|
| 462 |
+
|
| 463 |
+
[p. 17 | section: Proof. Step 1: Expression for the posterior mean. | type: Equation]
|
| 464 |
+
\mu^{(m)} = \Sigma^{(m)} D_m^{\top} R_m^{-1} Y_m \tag{78}
|
| 465 |
+
|
| 466 |
+
[p. 17 | section: Proof. Step 1: Expression for the posterior mean. | type: Text]
|
| 467 |
+
where Y_m = [y^{(1)}, \dots, y^{(m)}]^{\top} .
|
| 468 |
+
|
| 469 |
+
[p. 17 | section: Step 2: Computing the conditional expectation. | type: Text]
|
| 470 |
+
Note that \Sigma^{(m)} and D_m are both \mathcal{D}_m -measurable. The key is to compute \mathbb{E}[Y_m|B,g,\mathcal{D}_m] .
|
| 471 |
+
|
| 472 |
+
[p. 17 | section: Step 2: Computing the conditional expectation. | type: Text]
|
| 473 |
+
For each y^{(j)} :
|
| 474 |
+
|
| 475 |
+
[p. 17 | section: Step 2: Computing the conditional expectation. | type: Equation]
|
| 476 |
+
\mathbb{E}[y^{(j)}|B, g, \mathcal{D}_m] = \mathbb{E}[y^{(j)}|B, g, d^{(j)}] = d^{(j)\top}\tilde{g}^* (79)
|
| 477 |
+
|
| 478 |
+
[p. 17 | section: Step 2: Computing the conditional expectation. | type: Text]
|
| 479 |
+
The first equality holds because given d^{(j)} , y^{(j)} is conditionally independent of other d^{(i)} ( i \neq j ).
|
| 480 |
+
|
| 481 |
+
[p. 17 | section: Step 2: Computing the conditional expectation. | type: Text]
|
| 482 |
+
Therefore:
|
| 483 |
+
|
| 484 |
+
[p. 17 | section: Step 2: Computing the conditional expectation. | type: Equation]
|
| 485 |
+
\mathbb{E}[Y_m|B,g,\mathcal{D}_m] = D_m \tilde{g}^* \tag{80}
|
| 486 |
+
|
| 487 |
+
[p. 17 | section: Step 3: Substituting into the posterior mean. | type: Equation]
|
| 488 |
+
\mathbb{E}[\mu^{(m)}|B, g, \mathcal{D}_m] = \Sigma^{(m)} D_m^{\top} R_m^{-1} \mathbb{E}[Y_m | B, g, \mathcal{D}_m] = \Sigma^{(m)} D_m^{\top} R_m^{-1} D_m \tilde{g}^* (81)
|
| 489 |
+
|
| 490 |
+
[p. 17 | section: Step 4: Simplifying the shrinkage matrix. | type: Text]
|
| 491 |
+
By the definition of \Sigma^{(m)} :
|
| 492 |
+
|
| 493 |
+
[p. 17 | section: Step 4: Simplifying the shrinkage matrix. | type: Equation]
|
| 494 |
+
(\Sigma^{(m)})^{-1} = \sigma_p^{-2} I_k + D_m^{\top} R_m^{-1} D_m (82)
|
| 495 |
+
|
| 496 |
+
[p. 17 | section: Step 4: Simplifying the shrinkage matrix. | type: Text]
|
| 497 |
+
Therefore:
|
| 498 |
+
|
| 499 |
+
[p. 17 | section: Step 4: Simplifying the shrinkage matrix. | type: Equation]
|
| 500 |
+
D_m^{\top} R_m^{-1} D_m = (\Sigma^{(m)})^{-1} - \sigma_p^{-2} I_k (83)
|
| 501 |
+
|
| 502 |
+
[p. 17 | section: Step 4: Simplifying the shrinkage matrix. | type: Text]
|
| 503 |
+
Substituting:
|
| 504 |
+
|
| 505 |
+
[p. 17 | section: Step 4: Simplifying the shrinkage matrix. | type: Equation]
|
| 506 |
+
\mathbb{E}[\mu^{(m)}|B,g,\mathcal{D}_m] = \Sigma^{(m)} \left[ (\Sigma^{(m)})^{-1} - \sigma_p^{-2} I_k \right] \tilde{g}^* = \left( I_k - \sigma_p^{-2} \Sigma^{(m)} \right) \tilde{g}^* (84)
|
| 507 |
+
|
| 508 |
+
[p. 17 | section: Step 4: Simplifying the shrinkage matrix. | type: Text]
|
| 509 |
+
Defining the shrinkage matrix \Gamma_m := I_k - \sigma_p^{-2} \Sigma^{(m)} , we obtain:
|
| 510 |
+
|
| 511 |
+
[p. 17 | section: Step 4: Simplifying the shrinkage matrix. | type: Equation]
|
| 512 |
+
\mathbb{E}[\mu^{(m)}|B,g,\mathcal{D}_m] = \Gamma_m \tilde{g}^* \tag{85}
|
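The shrinkage identity derived in (81)–(85) can be verified directly against the batch Bayesian-regression formulas. In the sketch below, the directions, prior variance, and per-observation noise levels are arbitrary illustrative choices; the check confirms \Sigma^{(m)} D_m^{\top} R_m^{-1} D_m = I_k - \sigma_p^{-2}\Sigma^{(m)} to machine precision.

    import numpy as np

    rng = np.random.default_rng(5)
    k, m = 4, 7
    sigma_p2 = 2.0                               # illustrative prior variance
    D = rng.standard_normal((m, k))              # stacked directions d^(j)
    R = np.diag(rng.uniform(0.5, 1.5, size=m))   # observation noise covariance
    Rinv = np.linalg.inv(R)

    # Posterior covariance, Eq. (82): (sigma_p^{-2} I_k + D^T R^{-1} D)^{-1}.
    Sigma_m = np.linalg.inv(np.eye(k) / sigma_p2 + D.T @ Rinv @ D)

    lhs = Sigma_m @ D.T @ Rinv @ D               # matrix applied to g* in Eq. (81)
    rhs = np.eye(k) - Sigma_m / sigma_p2         # shrinkage matrix Gamma_m, Eq. (84)
    print(np.max(np.abs(lhs - rhs)))             # ~ 1e-15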
| 513 |
+
|
| 514 |
+
[p. 17 | section: B.6. Convergence Rate under Adaptive Sampling | type: Text]
|
| 515 |
+
Theorem B.7 (Convergence Rate under Adaptive Sampling). Under Assumptions 3.1, 3.2, and isotropic noise, consider the BSZO algorithm with adaptive sampling (m samples, where the first k samples use coordinate-axis sampling). Let \tilde{n} = n + k + 1 be the effective dimension. Suppose \eta < \frac{2}{L\bar{\gamma}\tilde{n}} , and define \beta(\eta) := 1 - \frac{L\eta\bar{\gamma}\tilde{n}}{2} . Then, after T iterations, the following inequality holds:
|
| 516 |
+
|
| 517 |
+
[p. 17 | section: B.6. Convergence Rate under Adaptive Sampling | type: Equation]
|
| 518 |
+
\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{\Delta_0}{\beta(\eta)\eta \bar{\gamma}kT} + \frac{L\eta \bar{\gamma}(\tilde{n}\sigma_g^2 + n\sigma_n^2)}{2\beta(\eta)},\tag{86}
|
| 519 |
+
|
| 520 |
+
[p. 17 | section: B.6. Convergence Rate under Adaptive Sampling | type: Text]
|
| 521 |
+
where:
|
| 522 |
+
|
| 523 |
+
[p. 18 | section: B.6. Convergence Rate under Adaptive Sampling | type: ListGroup]
|
| 524 |
+
\bar{\gamma} := \min_t \bar{\gamma}_t \geq \gamma is the minimum effective shrinkage factor; \sigma_g^2 is the gradient noise variance; \sigma_n^2 is the finite-difference approximation noise variance; \Delta_0 := \mathcal{L}(\theta_0) - \mathcal{L}^* is the initial optimality gap.
|
| 525 |
+
|
| 526 |
+
[p. 18 | section: B.6. Convergence Rate under Adaptive Sampling | type: Text]
|
| 527 |
+
Corollary B.8. Let \eta = \frac{1}{L\bar{\gamma}\tilde{n}} . Then \beta = 1/2 , and the convergence bound simplifies to:
|
| 528 |
+
|
| 529 |
+
[p. 18 | section: B.6. Convergence Rate under Adaptive Sampling | type: Equation]
|
| 530 |
+
\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{2L\tilde{n}\Delta_0}{kT} + \sigma_g^2 + \frac{n}{\tilde{n}}\sigma_n^2. \tag{87}
|
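For completeness, the simplification in Corollary B.8 is a direct substitution into (86); the intermediate step below is spelled out here and is not in the original text. With \eta = \frac{1}{L\bar{\gamma}\tilde{n}} and \beta = 1/2 , the factor \bar{\gamma} cancels in the first term:

    \frac{\Delta_0}{\beta\,\eta\,\bar{\gamma}kT}
      = \frac{2L\bar{\gamma}\tilde{n}\,\Delta_0}{\bar{\gamma}kT}
      = \frac{2L\tilde{n}\,\Delta_0}{kT},
    \qquad
    \frac{L\eta\bar{\gamma}(\tilde{n}\sigma_g^2 + n\sigma_n^2)}{2\beta}
      = \frac{\tilde{n}\sigma_g^2 + n\sigma_n^2}{\tilde{n}}
      = \sigma_g^2 + \frac{n}{\tilde{n}}\sigma_n^2.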
| 531 |
+
|
| 532 |
+
[p. 18 | section: B.6. Convergence Rate under Adaptive Sampling | type: Text]
|
| 533 |
+
Remark B.9. When n \gg k , we have \tilde{n} \approx n , so the noise floor \sigma_g^2 + \frac{n}{\tilde{n}} \sigma_n^2 \approx \sigma_e^2 becomes decoupled from the dimension n.
|
| 534 |
+
|
| 535 |
+
[p. 18 | section: B.6. Convergence Rate under Adaptive Sampling | type: Text]
|
| 536 |
+
Proof. The proof follows the same structure as Theorem 4.2, with the fixed \gamma replaced by the adaptive effective shrinkage factor \bar{\gamma}_t .
|
| 537 |
+
|
| 538 |
+
[p. 18 | section: Step 1: Single-step descent. | type: Text]
|
| 539 |
+
By Assumption 3.1 ( L -smoothness):
|
| 540 |
+
|
| 541 |
+
[p. 18 | section: Step 1: Single-step descent. | type: Equation]
|
| 542 |
+
\mathcal{L}(\theta_{t+1}) \le \mathcal{L}(\theta_t) + \langle g_t, \Delta \theta_t \rangle + \frac{L}{2} ||\Delta \theta_t||^2 (88)
|
| 543 |
+
|
| 544 |
+
[p. 18 | section: Step 2: Inner product term under adaptive sampling. | type: Text]
|
| 545 |
+
By the adaptive sampling theorem (Theorem B.6), the expected update direction satisfies:
|
| 546 |
+
|
| 547 |
+
[p. 18 | section: Step 2: Inner product term under adaptive sampling. | type: Equation]
|
| 548 |
+
\mathbb{E}[\langle g_t, \Delta \theta_t \rangle | \theta_t] = -\eta \mathbb{E}[\operatorname{tr}(\Gamma_m^{(t)})] \|g_t\|^2 = -\eta \bar{\gamma}_t k \|g_t\|^2 (89)
|
| 549 |
+
|
| 550 |
+
[p. 18 | section: Step 2: Inner product term under adaptive sampling. | type: Text]
|
| 551 |
+
where \bar{\gamma}_t = \frac{1}{k} \operatorname{tr}(\Gamma_m^{(t)}) = 1 - \frac{U_m^{(t)}}{k \sigma_p^2} is the effective shrinkage factor at iteration t, with U_m^{(t)} = \operatorname{tr}(\Sigma^{(m)}) the total posterior uncertainty.
|
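The effective shrinkage factor is computable from the covariance recursion alone. The sketch below mirrors the Kalman covariance update in Algorithm 1 (first the k coordinate axes, then revisiting the most uncertain axis) and evaluates \bar{\gamma}_t = 1 - \operatorname{tr}(\Sigma^{(m)})/(k\sigma_p^2) ; the prior and observation variances are placeholders:

    import numpy as np

    k, m = 4, 7
    sigma_p2, sigma_e2 = 1.0, 0.5        # illustrative prior / noise variances

    Sigma = sigma_p2 * np.eye(k)         # prior covariance Sigma^(0)
    for tau in range(m):
        j = tau if tau < k else int(np.argmax(np.diag(Sigma)))
        d = np.zeros(k)
        d[j] = 1.0                       # coordinate-axis direction
        K = Sigma @ d / (d @ Sigma @ d + sigma_e2)   # Kalman gain
        Sigma = Sigma - np.outer(K, d @ Sigma)       # posterior covariance update
    gamma_bar = 1.0 - np.trace(Sigma) / (k * sigma_p2)
    print(gamma_bar)                     # effective shrinkage tr(Gamma_m)/k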
| 552 |
+
|
| 553 |
+
[p. 18 | section: Step 3: Second moment (same structure as main theorem). | type: Text]
|
| 554 |
+
Following the same derivation as Theorem 4.2, with \gamma replaced by \bar{\gamma}_t :
|
| 555 |
+
|
| 556 |
+
[p. 18 | section: Step 3: Second moment (same structure as main theorem). | type: Equation]
|
| 557 |
+
\mathbb{E}[\|\Delta\theta_t\|^2 | \theta_t] = \eta^2 \bar{\gamma}_t^2 k \tilde{n} \|g_t\|^2 + \eta^2 \bar{\gamma}_t^2 k (\tilde{n}\sigma_g^2 + n\sigma_n^2) (90)
|
| 558 |
+
|
| 559 |
+
[p. 18 | section: Step 3: Second moment (same structure as main theorem). | type: Text]
|
| 560 |
+
The key observation is that the second moment structure remains unchanged because:
|
| 561 |
+
|
| 562 |
+
[p. 18 | section: Step 3: Second moment (same structure as main theorem). | type: ListGroup]
|
| 563 |
+
(i) the gradient noise \sigma_g^2 interacts with B_t , producing the \tilde{n} factor; (ii) the finite-difference noise \sigma_n^2 is independent of B_t , producing only the n factor.
|
| 564 |
+
|
| 565 |
+
[p. 18 | section: Step 4: Combining and bounding. | type: Text]
|
| 566 |
+
Substituting into the descent inequality:
|
| 567 |
+
|
| 568 |
+
[p. 18 | section: Step 4: Combining and bounding. | type: Equation]
|
| 569 |
+
\mathbb{E}[\mathcal{L}(\theta_{t+1})] \le \mathbb{E}[\mathcal{L}(\theta_t)] - \eta \bar{\gamma}_t k \left(1 - \frac{L\eta \bar{\gamma}_t \tilde{n}}{2}\right) \mathbb{E}[\|g_t\|^2] + \frac{L\eta^2 \bar{\gamma}_t^2 k (\tilde{n} \sigma_g^2 + n \sigma_n^2)}{2} (91)
|
| 570 |
+
|
| 571 |
+
[p. 18 | section: Step 4: Combining and bounding. | type: Text]
|
| 572 |
+
Since \bar{\gamma}_t \geq \bar{\gamma} := \min_t \bar{\gamma}_t \geq \gamma (by Theorem B.6), and assuming \eta < \frac{2}{L\bar{\gamma}\tilde{n}} , we define \beta(\eta) = 1 - \frac{L\eta\bar{\gamma}\tilde{n}}{2} > 0 .
|
| 573 |
+
|
| 574 |
+
[p. 18 | section: Step 4: Combining and bounding. | type: Text]
|
| 575 |
+
Rearranging:
|
| 576 |
+
|
| 577 |
+
[p. 18 | section: Step 4: Combining and bounding. | type: Equation]
|
| 578 |
+
\mathbb{E}[\|g_t\|^2] \le \frac{1}{\beta(\eta)\eta\bar{\gamma}k} \left( \mathbb{E}[\mathcal{L}(\theta_t)] - \mathbb{E}[\mathcal{L}(\theta_{t+1})] \right) + \frac{L\eta\bar{\gamma}(\tilde{n}\sigma_g^2 + n\sigma_n^2)}{2\beta(\eta)} (92)
|
| 579 |
+
|
| 580 |
+
[p. 19 | section: Step 5: Telescoping sum. | type: Text]
|
| 581 |
+
Summing over t = 0, ..., T - 1 and dividing by T:
|
| 582 |
+
|
| 583 |
+
[p. 19 | section: Step 5: Telescoping sum. | type: Equation]
|
| 584 |
+
\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{\Delta_0}{\beta(\eta)\eta \bar{\gamma}kT} + \frac{L\eta \bar{\gamma}(\tilde{n}\sigma_g^2 + n\sigma_n^2)}{2\beta(\eta)} (93)
|
| 585 |
+
|
| 586 |
+
[p. 19 | section: Step 5: Telescoping sum. | type: Text]
|
| 587 |
+
For the special learning rate \eta=\frac{1}{L\bar{\gamma}\tilde{n}} , we have \beta=1/2 , and the bound simplifies to:
|
| 588 |
+
|
| 589 |
+
[p. 19 | section: Step 5: Telescoping sum. | type: Equation]
|
| 590 |
+
\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{2L\tilde{n}\Delta_0}{kT} + \sigma_g^2 + \frac{n}{\tilde{n}}\sigma_n^2 (94)
|
| 591 |
+
|
| 592 |
+
[p. 19 | section: Step 5: Telescoping sum. | type: Text]
|
| 593 |
+
When n\gg k , we have \tilde{n}\approx n , so the noise floor \sigma_g^2+\frac{n}{\tilde{n}}\sigma_n^2\approx\sigma_e^2 becomes decoupled from dimension n.
|
| 594 |
+
|
| 595 |
+
[p. 19 | section: C. Experiment Details | type: TableGroup]
|
| 596 |
+
Table 6. Number of training and validation samples for each dataset.

SPLIT        SST-2   BoolQ   RTE    COPA   WIC    WSC    CB     TREC
TRAINING     1000    1000    1000   300    1000   450    200    1000
VALIDATION   500     500     500    100    500    100    50     500

Table 7. Hyperparameter configurations for fine-tuning RoBERTa-large.
|
| 597 |
+
|
| 598 |
+
[p. 19 | section: C. Experiment Details | type: Table]
|
| 599 |
+
Algorithm     Hyperparameter                                                        Values
MeZO          Batch size; Learning rate; \varepsilon
MeZO-Adam     Batch size; Learning rate; \varepsilon
HiZOO         Batch size; Learning rate; \varepsilon                                \{1 \times 10^{-4}, 1 \times 10^{-5}, 1 \times 10^{-6}, 1 \times 10^{-7}, 1 \times 10^{-8}\}
LOZO          Batch size; Learning rate; \varepsilon; Rank; Interval
BSZO          Batch size; Learning rate; \varepsilon; k (Subspace dim); m (Samples)
BSZO-B        Batch size; Learning rate; \varepsilon; k (Subspace dim); m (Samples)
All Methods   Early stopping patience                                               4,000
|
| 600 |
+
|
| 601 |
+
[p. 20 | section: C. Experiment Details | type: TableGroup]
|
| 602 |
+
Table 8. Hyperparameter configurations for fine-tuning OPT-1.3B.

Algorithm     Hyperparameter                                                        Values
MeZO          Batch size; Learning rate; \varepsilon
MeZO-Adam     Batch size; Learning rate; \varepsilon
HiZOO         Batch size; Learning rate; \varepsilon
LOZO          Batch size; Learning rate; \varepsilon; Rank; Interval
BSZO          Batch size; Learning rate; \varepsilon; k (Subspace dim); m (Samples)
BSZO-B        Batch size; Learning rate; \varepsilon; k (Subspace dim); m (Samples)
All Methods   Early stopping patience                                               4,000
|
| 603 |
+
|
| 604 |
+
[p. 21 | section: C. Experiment Details | type: Caption]
|
| 605 |
+
Table 9. Hyperparameter configurations for fine-tuning Mistral-7B.
|
| 606 |
+
|
| 607 |
+
[p. 21 | section: C. Experiment Details | type: Table]
|
| 608 |
+
Algorithm     Hyperparameter                                                        Values
MeZO          Batch size; Learning rate; \varepsilon
HiZOO         Batch size; Learning rate; \varepsilon
LOZO          Batch size; Learning rate; \varepsilon; Rank; Interval
BSZO          Batch size; Learning rate; \varepsilon; k (Subspace dim); m (Samples)
BSZO-B        Batch size; Learning rate; \varepsilon; k (Subspace dim); m (Samples)
All Methods   Early stopping patience                                               4,000
|
| 609 |
+
|
| 610 |
+
[p. 22 | section: C. Experiment Details | type: TableGroup]
|
| 611 |
+
Table 10. Hyperparameter configurations for fine-tuning OPT-13B. Algorithm Hyperparameter Values MeZO Batch size Learning rate \varepsilon MeZO-Adam Batch size Learning rate \varepsilon HiZOO Batch size Learning rate \varepsilon LOZO Batch size Learning rate ε Rank Interval BSZO Batch size Learning rate \varepsilon k (Subspace dim) m (Samples) BSZO-B Batch size Learning rate \varepsilon k (Subspace dim) m (Samples) All Methods Early stopping patience 4,000
|
| 612 |
+
|
| 613 |
+
[p. 22 | section: D. Raw Experimental Results | type: Text]
|
| 614 |
+
We provide the complete raw results of 5 independent runs for each method on RoBERTa-large in Table 11. The mean and standard deviation reported in Table 1 are computed from these results.
|
| 615 |
+
|
| 616 |
+
[p. 23 | section: D. Raw Experimental Results | type: Caption]
|
| 617 |
+
Table 11. Raw test accuracy (%) of 5 runs on RoBERTa-large (355M).
|
| 618 |
+
|
| 619 |
+
[p. 23 | section: D. Raw Experimental Results | type: TableGroup]
|
| 620 |
+
DATASET   METHOD       RUN 1   RUN 2   RUN 3   RUN 4   RUN 5
SST-2     MEZO         92.43   92.32   91.74   92.78   91.86
          MEZO-ADAM    92.32   92.66   91.51   92.43   92.78
          HIZOO        91.97   91.86   91.28   91.17   90.94
          LOZO         91.63   92.09   91.74   91.51   92.20
          BSZO         92.89   92.43   92.78   92.43   92.78
          BSZO-B       92.66   91.74   92.32   91.97   92.66
RTE       MEZO         69.68   68.95   65.70   65.34   62.09
          MEZO-ADAM    62.09   64.26   64.62   64.98   62.09
          HIZOO        59.21   63.18   57.76   56.68   59.21
          LOZO         59.57   63.90   61.01   65.34   63.18
          BSZO         68.59   68.23   69.68   66.07   66.43
          BSZO-B       67.87   70.04   70.76   66.43   66.79
CB        MEZO         87.50   85.71   91.07   76.79   89.29
          MEZO-ADAM    82.14   83.93   80.36   82.14   76.79
          HIZOO        78.57   75.00   75.00   78.57   75.00
          LOZO         87.50   82.14   82.14   89.29   80.36
          BSZO         83.93   85.71   87.50   87.50   83.93
          BSZO-B       85.71   83.93   82.14   85.71   83.93
WIC       MEZO         49.69   58.31   52.98   57.52   57.52
          MEZO-ADAM    54.39   51.10   54.70   46.55   57.52
          HIZOO        50.31   54.39   51.10   54.70   57.52
          LOZO         54.55   54.55   51.88   55.17   54.86
          BSZO         57.68   57.52   55.64   54.70   54.70
          BSZO-B       57.99   57.99   55.80   56.58   57.68
TREC      MEZO         81.20   86.40   86.40   86.20   86.60
          MEZO-ADAM    84.80   74.20   71.40   83.00   80.60
          HIZOO        65.20   65.20   65.20   62.20   59.40
          LOZO         80.40   74.80   77.20   79.20   77.20
          BSZO         83.40   84.60   84.40   83.80   84.60
          BSZO-B       85.80   84.60   85.20   82.20   86.20

Table 12. Full ablation studies on OPT-1.3B (fp32). (a) Effect of subspace dimension k with m = k. (b) Effect of observation count m with m = k + 1. (c) Noise-free adaptive sampling. Best per row in bold.
|
| 621 |
+
|
| 622 |
+
[p. 23 | section: D. Raw Experimental Results | type: Table]
|
| 623 |
+
(a) Effect of k          (b) Effect of m          (c) NF-Adaptive
k   SST-2   RTE          k   SST-2   RTE          k   SST-2   RTE
1   92.32   60.29        1   91.74   61.37        1   91.28   63.58
2   92.78   64.26        2   92.43   66.79        2   92.43   65.34
4   92.66   67.51        4   93.58   66.43        4   93.12   66.07
8   93.23   66.07        8   93.23   68.59        8   93.35   69.31
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/assets.json
ADDED
|
@@ -0,0 +1,24 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"filename": "_page_1_Figure_1.jpeg",
|
| 4 |
+
"path": "data/processed_papers/icml26_20260429_1952_duequeue/marker_raw/9506ea3e-e66f-4fdc-be2e-f42de95f2875/marker_markdown/9506ea3e-e66f-4fdc-be2e-f42de95f2875/_page_1_Figure_1.jpeg",
|
| 5 |
+
"bytes": 69908,
|
| 6 |
+
"width": 1301,
|
| 7 |
+
"height": 390,
|
| 8 |
+
"aspect_ratio": 3.335897435897436,
|
| 9 |
+
"keep": true,
|
| 10 |
+
"reject_reason": null,
|
| 11 |
+
"model_path": "assets/_page_1_Figure_1.jpeg"
|
| 12 |
+
},
|
| 13 |
+
{
|
| 14 |
+
"filename": "_page_4_Figure_8.jpeg",
|
| 15 |
+
"path": "data/processed_papers/icml26_20260429_1952_duequeue/marker_raw/9506ea3e-e66f-4fdc-be2e-f42de95f2875/marker_markdown/9506ea3e-e66f-4fdc-be2e-f42de95f2875/_page_4_Figure_8.jpeg",
|
| 16 |
+
"bytes": 23337,
|
| 17 |
+
"width": 672,
|
| 18 |
+
"height": 427,
|
| 19 |
+
"aspect_ratio": 1.5737704918032787,
|
| 20 |
+
"keep": true,
|
| 21 |
+
"reject_reason": null,
|
| 22 |
+
"model_path": "assets/_page_4_Figure_8.jpeg"
|
| 23 |
+
}
|
| 24 |
+
]
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/assets/_page_1_Figure_1.jpeg
ADDED
|
Git LFS Details
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/assets/_page_4_Figure_8.jpeg
ADDED
|
Git LFS Details
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/chunks_v3_anonymized.jsonl
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/dataset_meta.json
ADDED
|
@@ -0,0 +1,61 @@
|
|
| 1 |
+
{
|
| 2 |
+
"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875",
|
| 3 |
+
"pipeline": "Paper2Markdown-V3",
|
| 4 |
+
"ok": true,
|
| 5 |
+
"page_count": 23,
|
| 6 |
+
"chunk_count": 313,
|
| 7 |
+
"main_body_chunk_count": 101,
|
| 8 |
+
"appendix_chunk_count": 208,
|
| 9 |
+
"reference_chunk_count": 4,
|
| 10 |
+
"model_text_chars": 38870,
|
| 11 |
+
"raw_markdown_chars": 136249,
|
| 12 |
+
"sanitized_chars": 79443,
|
| 13 |
+
"page_provenance": {
|
| 14 |
+
"min_page": 1,
|
| 15 |
+
"max_page": 23,
|
| 16 |
+
"invalid_count": 0
|
| 17 |
+
},
|
| 18 |
+
"marker_block_type_counts": {
|
| 19 |
+
"Caption": 2,
|
| 20 |
+
"Code": 1,
|
| 21 |
+
"Equation": 96,
|
| 22 |
+
"FigureGroup": 2,
|
| 23 |
+
"Footnote": 1,
|
| 24 |
+
"ListGroup": 10,
|
| 25 |
+
"PageFooter": 23,
|
| 26 |
+
"PageHeader": 26,
|
| 27 |
+
"SectionHeader": 57,
|
| 28 |
+
"Table": 4,
|
| 29 |
+
"TableGroup": 9,
|
| 30 |
+
"Text": 598
|
| 31 |
+
},
|
| 32 |
+
"asset_count_raw": 2,
|
| 33 |
+
"asset_count_model_kept": 2,
|
| 34 |
+
"asset_count_rejected": 0,
|
| 35 |
+
"asset_reject_reasons": {
|
| 36 |
+
"kept": 2
|
| 37 |
+
},
|
| 38 |
+
"artifact_leak_audit": {
|
| 39 |
+
"ok": true,
|
| 40 |
+
"hits": {
|
| 41 |
+
"Anonymous Authors": [],
|
| 42 |
+
"ACKNOWLEDGMENT": [],
|
| 43 |
+
"OpenReview": [],
|
| 44 |
+
"\"accept_label\"": [],
|
| 45 |
+
"\"decision\"": [],
|
| 46 |
+
"\"decision_tier\"": [],
|
| 47 |
+
"\"source_status\"": [],
|
| 48 |
+
"Meta-review": [],
|
| 49 |
+
"Official Review": [],
|
| 50 |
+
"official_reviews": [],
|
| 51 |
+
"meta_reviews": [],
|
| 52 |
+
"suggested_verdict_score": []
|
| 53 |
+
},
|
| 54 |
+
"artifact_count": 2
|
| 55 |
+
},
|
| 56 |
+
"default_model_input": "model_text_v3.txt",
|
| 57 |
+
"appendix_input": "appendix_text_v3.txt",
|
| 58 |
+
"reference_input": "reference_text_v3.txt",
|
| 59 |
+
"source": "koala_icml26_due_queue",
|
| 60 |
+
"run_name": "icml26_20260429_1952_duequeue"
|
| 61 |
+
}
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/main_body_chunks.jsonl
ADDED
|
@@ -0,0 +1,101 @@
|
| 1 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0000", "section": "Abstract", "page_start": 1, "page_end": 1, "type": "Text", "text": "Fine-tuning large language models (LLMs) with zeroth-order (ZO) optimization reduces memory by approximating gradients through function evaluations. However, existing methods essentially perform updates in a one-dimensional space, and suffer from collapse or substantial performance degradation under low-precision training. We introduce BSZO, an adaptive Bayesian Subspace Zeroth-Order Optimizer, which applies Kalman filtering to combine finite-difference information across multiple perturbation directions within a subspace. By treating each finite-difference measurement as a noisy observation, BSZO builds a posterior distribution over the subspace-projected gradient and updates it through Bayesian inference, with a residual-based adaptive mechanism to adapt to noise variations. Theoretical analysis shows that BSZO improves the convergence rate by a factor of k/γ compared to standard ZO methods. Experiments on RoBERTa, Mistral, and OPT models show that BSZO outperforms the baselines across various tasks, achieving up to 6.67% absolute average improvement on OPT-13B while remaining robust under fp16/bf16 precision and keeping memory usage close to inference-only baselines (1.00×–1.08× of MeZO).", "source": "marker_v2", "marker_block_id": "/page/0/Text/4"}
|
| 2 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0001", "section": "1. Introduction", "page_start": 1, "page_end": 1, "type": "Text", "text": "Large language models (LLMs) are getting increasingly important in natural language understanding and generation (Devlin et al., 2019; Brown et al., 2020; Touvron et al., 2023) . However, adapting these models to downstream tasks through fine-tuning remains challenging due to their large scale. The standard approach, using first-order optimizers like Adam, requires consuming a large amount of GPU", "source": "marker_v2", "marker_block_id": "/page/0/Text/6"}
|
| 3 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0002", "section": "1. Introduction", "page_start": 1, "page_end": 1, "type": "Text", "text": "memory. For a 13B-parameter model, this translates to over 100GB of GPU memory, roughly 10× the cost of inference alone (Malladi et al., 2023) . Such requirements put full finetuning out of reach for most people, no matter in academia or industry.", "source": "marker_v2", "marker_block_id": "/page/0/Text/9"}
|
| 4 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0003", "section": "1. Introduction", "page_start": 1, "page_end": 1, "type": "Text", "text": "Several strategies have been proposed to reduce memory burden. Parameter-efficient fine-tuning (PEFT) methods, including LoRA (Hu et al., 2022) and Adapters (Houlsby et al., 2019) , freeze the base model and only update a small set of additional parameters. But these methods still rely on backpropagation and may underperform full fine-tuning on difficult tasks. An alternative direction is zeroth-order (ZO) optimization, which estimates gradients using only forward passes. MeZO (Malladi et al., 2023) demonstrated that this approach can match the memory footprint of inference, while achieving reasonable accuracy. The catch? ZO methods converge slowly and require significantly more iterations than their first-order counterparts, due to the high variance inherent in finite-difference gradient estimates.", "source": "marker_v2", "marker_block_id": "/page/0/Text/10"}
|
| 5 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0004", "section": "1. Introduction", "page_start": 1, "page_end": 1, "type": "Text", "text": "This raises a question: how can we achieve a better tradeoff between convergence speed and memory usage? We observe that the existing ZO methods have three main weaknesses. First, most existing ZO optimizers essentially perform updates along a single random direction within each batch. Even with increased forward passes and perturbation directions, they process each perturbation in isolation, simply averaging or using them independently—throwing away information about how these measurements relate to each other. Second, the noise level in ZO estimates varies significantly during training, yet most methods do not account for this effect. This rigidity leads to poor adaptation: updates may oscillate wildly around local minima, jump out of the basin, and finally cause training collapse. Moreover, reduced-precision training (fp16/bf16) can cause these methods to collapse or suffer substantial performance degradation, as we show in Figure 1 and Table 3.", "source": "marker_v2", "marker_block_id": "/page/0/Text/11"}
|
| 6 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0005", "section": "1. Introduction", "page_start": 1, "page_end": 1, "type": "Text", "text": "We propose Bayesian Subspace Zeroth-order Optimization (BSZO) to address these limitations. The main idea is to treat gradient estimation as an inference problem. At each step, we sample k random directions to form a lowdimensional subspace (Zhang, 2025) and model the projected gradient as a latent variable. Instead of treating", "source": "marker_v2", "marker_block_id": "/page/0/Text/12"}
|
| 7 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0006", "section": "1. Introduction", "page_start": 2, "page_end": 2, "type": "FigureGroup", "text": "Figure 1. Training loss on SST-2 with OPT-13B under bf16 precision. (a)–(b) Existing ZO methods exhibit erratic loss curves across different learning rates, with some runs failing to converge or even diverging. BSZO achieves smooth and steady convergence. (c) Comparison under each method's best-tuned learning rate.", "source": "marker_v2", "marker_block_id": "/page/1/FigureGroup/635"}
|
| 8 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0007", "section": "1. Introduction", "page_start": 2, "page_end": 2, "type": "Text", "text": "each finite-difference query as providing an independent estimate, we use Kalman filtering to aggregate observations—essentially asking: given what we have measured so far, what is our best guess of the true gradient? This Bayesian formulation accounts for measurement noise and produces more accurate estimates from the same number of forward passes. We further introduce an adaptive mechanism that tracks prediction residuals and adjusts the noise variance on the fly, allowing the algorithm to respond to changing curvature conditions during training.", "source": "marker_v2", "marker_block_id": "/page/1/Text/3"}
|
| 9 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0008", "section": "1. Introduction", "page_start": 2, "page_end": 2, "type": "Text", "text": "Our contributions can be summarized as follows:", "source": "marker_v2", "marker_block_id": "/page/1/Text/4"}
|
| 10 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0009", "section": "1. Introduction", "page_start": 2, "page_end": 2, "type": "ListGroup", "text": "1. We propose BSZO, a zeroth-order optimizer that uses Bayesian inference to aggregate gradient information across multiple perturbation directions within a subspace. To our knowledge, this is the first application of Bayesian inference and Kalman filtering to ZO optimization for LLMs. 2. We design a residual-based adaptive scheme that enables BSZO to adjust the parameter update scale adaptively without manual tuning. 3. We analyze the convergence of BSZO and show that the rate improves by a factor of k/γ compared to standard ZO methods. 4. Experiments on multiple LLMs and benchmarks show that BSZO achieves strong performance across diverse tasks while remaining robust under low-precision training and maintaining memory consumption comparable to MeZO.", "source": "marker_v2", "marker_block_id": "/page/1/ListGroup/636"}
|
| 11 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0010", "section": "2. Related Work", "page_start": 2, "page_end": 2, "type": "Text", "text": "Zeroth-Order Optimization for LLMs. Classical derivative-free methods achieve strong sample efficiency via surrogate modeling, but their per-iteration cost grows", "source": "marker_v2", "marker_block_id": "/page/1/Text/10"}
|
| 12 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0011", "section": "2. Related Work", "page_start": 2, "page_end": 2, "type": "Text", "text": "rapidly with dimension, making them impractical at LLM scale (Zhang, 2025) . The SPSA estimator (Spall, 1992) offers a scalable alternative by approximating gradients through random perturbations. Building on this, MeZO (Malladi et al., 2023) introduced memory-efficient ZO fine-tuning for LLMs, matching inference-time memory by regenerating perturbations from random seeds. Follow-up methods target different bottlenecks: Sparse-MeZO (Liu et al., 2024) restricts updates to influential parameters, HiZOO (Zhao et al., 2025) leverages diagonal Hessian estimates for adaptive preconditioning, LOZO (Chen et al., 2024) exploits low-rank gradient structure, and TeZO (Sun et al., 2025) captures temporal correlations across iterations. Despite these advances, most methods adhere to the \"one batch, one update\" paradigm, overlooking the possibility that multiple function evaluations within a batch could support multiple parameter updates. Moreover, some of these methods incur substantial memory overhead; while still lower than full fine-tuning, this conflicts with the original motivation of ZO optimization—minimizing memory consumption. Since low-precision fine-tuning is essential in memory-constrained scenarios, the robustness of these methods also warrants further evaluation.", "source": "marker_v2", "marker_block_id": "/page/1/Text/11"}
|
| 13 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0012", "section": "2. Related Work", "page_start": 2, "page_end": 2, "type": "Text", "text": "Population-Based Gradient Estimation. An alternative strategy evaluates multiple perturbations per iteration and aggregates them into a single update. Evolution Strategies (Salimans et al., 2017) and Augmented Random Search (Mania et al., 2018) popularized this paradigm in reinforcement learning. However, these methods typically require a large number of function evaluations per batch to obtain reliable gradient estimates. Given that each forward pass through an LLM is already computationally expensive, such sample-intensive approaches become impractical for language model fine-tuning. This raises a natural question: how can we extract more information from a limited number of function evaluations? Our work addresses this by treating finite-difference measurements as noisy linear ob-", "source": "marker_v2", "marker_block_id": "/page/1/Text/12"}
|
| 14 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0013", "section": "2. Related Work", "page_start": 3, "page_end": 3, "type": "Text", "text": "servations of the underlying gradient and applying Bayesian inference to fuse information across directions.", "source": "marker_v2", "marker_block_id": "/page/2/Text/1"}
|
| 15 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0014", "section": "2. Related Work", "page_start": 3, "page_end": 3, "type": "Text", "text": "Bayesian Inference for Optimization. Bayesian methods provide a principled way to integrate observations with prior knowledge while quantifying uncertainty. Kalman filtering (Kalman, 1960) is the canonical example: it sequentially updates a Gaussian belief over a hidden state as new measurements arrive. Gaussian processes extend this idea to function-space modeling and underpin Bayesian optimization (Shahriari et al., 2016; Rasmussen & Williams, 2006). Our work adapts the Kalman perspective to ZO gradient estimation: we model the projected gradient as a hidden state, interpret each perturbation query as a noisy linear measurement, and update a posterior that pools information across all sampled directions within an iteration. Leveraging the flexibility of the Bayesian framework, we further design an adaptive residual mechanism that effectively fuses both historical and current-batch information. This yields improved gradient estimates without additional memory overhead.", "source": "marker_v2", "marker_block_id": "/page/2/Text/2"}
|
| 16 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0015", "section": "3. Method", "page_start": 3, "page_end": 3, "type": "Text", "text": "141142", "source": "marker_v2", "marker_block_id": "/page/2/Text/59"}
|
| 17 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0016", "section": "3. Method", "page_start": 3, "page_end": 3, "type": "Text", "text": "148149", "source": "marker_v2", "marker_block_id": "/page/2/Text/65"}
|
| 18 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0017", "section": "3. Method", "page_start": 3, "page_end": 3, "type": "Text", "text": "150151", "source": "marker_v2", "marker_block_id": "/page/2/Text/66"}
|
| 19 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0018", "section": "3. Method", "page_start": 3, "page_end": 3, "type": "Text", "text": "154155", "source": "marker_v2", "marker_block_id": "/page/2/Text/69"}
|
| 20 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0019", "section": "3. Method", "page_start": 3, "page_end": 3, "type": "Text", "text": "In this section, we present the Bayesian Subspace Zerothorder Optimization (BSZO) algorithm, which controls the step size of subspace by the Bayesian method.", "source": "marker_v2", "marker_block_id": "/page/2/Text/4"}
|
| 21 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0020", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "We consider the stochastic optimization problem:", "source": "marker_v2", "marker_block_id": "/page/2/Text/6"}
|
| 22 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0021", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Equation", "text": "\\min_{\\theta \\in \\mathbb{R}^n} \\mathcal{L}(\\theta) := \\mathbb{E}_{\\xi \\sim \\mathcal{D}}[\\mathcal{L}(\\theta; \\xi)], \\tag{1}", "source": "marker_v2", "marker_block_id": "/page/2/Equation/7"}
|
| 23 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0022", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "where \\theta \\in \\mathbb{R}^n denotes the model parameters, \\mathcal{D} is the training dataset, and \\mathcal{L}(\\theta;\\xi) is the loss on a minibatch \\xi . We denote the optimal value by \\mathcal{L}^* := \\min_{\\theta} \\mathcal{L}(\\theta) .", "source": "marker_v2", "marker_block_id": "/page/2/Text/8"}
|
| 24 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0023", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "Assumption 3.1. The function \\mathcal{L} is L-smooth, i.e., there exists L > 0 such that for all \\theta, \\theta' \\in \\mathbb{R}^n ,", "source": "marker_v2", "marker_block_id": "/page/2/Text/9"}
|
| 25 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0024", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Equation", "text": "\\|\\mathcal{L}(\\theta) - \\mathcal{L}(\\theta')\\| \\le L\\|\\theta - \\theta'\\|. \\tag{2}", "source": "marker_v2", "marker_block_id": "/page/2/Equation/10"}
|
| 26 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0025", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "Equivalently,", "source": "marker_v2", "marker_block_id": "/page/2/Text/11"}
|
| 27 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0026", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Equation", "text": "\\mathcal{L}(\\theta) \\le \\mathcal{L}(\\theta') + \\nabla \\mathcal{L}(\\theta')^{\\top} (\\theta - \\theta') + \\frac{L}{2} \\|\\theta - \\theta'\\|^{2}. (3)", "source": "marker_v2", "marker_block_id": "/page/2/Equation/12"}
|
| 28 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0027", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "Assumption 3.2. The stochastic gradient \\nabla \\mathcal{L}(\\theta, \\xi) has bounded variance, i.e., there exists \\sigma_g^2 \\geq 0 such that:", "source": "marker_v2", "marker_block_id": "/page/2/Text/13"}
|
| 29 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0028", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Equation", "text": "\\mathbb{E}_{\\xi}[\\|\\nabla \\mathcal{L}(\\theta;\\xi) - \\nabla \\mathcal{L}(\\theta)\\|^{2}] \\le \\sigma_{q}^{2}, \\quad \\forall \\theta \\in \\mathbb{R}^{n} (4)", "source": "marker_v2", "marker_block_id": "/page/2/Equation/14"}
|
| 30 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0029", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "Definition 3.3. Given a set of k perturbation vectors \\{z_1, z_2, \\ldots, z_k\\} , where z_i \\in \\mathbb{R}^n is from Gaussian distribution \\mathcal{N}(0, I_n) , define the subspace basis matrix B = [z_1, z_2, \\ldots, z_k] \\in \\mathbb{R}^{n \\times k} .", "source": "marker_v2", "marker_block_id": "/page/2/Text/15"}
|
| 31 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0030", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "Algorithm 1 Bayesian Subspace Zeroth-Order Optimization (BSZO)", "source": "marker_v2", "marker_block_id": "/page/2/Text/16"}
|
| 32 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0031", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "Input: parameters \\theta , learning rate \\eta , perturbation scale \\varepsilon , subspace dimension k, sampling steps m, prior variance \\sigma_n^2, noise variance \\sigma_e^2, smoothing factor \\alpha, max step Tfor t = 1 to T do Sample k random seeds \\{s_i\\}_{i=1}^k Initialize \\mu \\leftarrow \\mathbf{0}_k , \\Sigma \\leftarrow \\sigma_p^2 I_k , f_0 \\leftarrow \\mathcal{L}(\\theta) Initialize cache Y \\leftarrow \\{\\} for \\tau = 1 to m do if \\tau \\leq k then d \\leftarrow e_{\\tau} \\theta \\leftarrow \\theta + \\varepsilon \\cdot \\text{Randn}(n, s_{\\tau}) y \\leftarrow (\\mathcal{L}(\\theta) - f_0)/\\varepsilon \\theta \\leftarrow \\theta - \\varepsilon \\cdot \\text{RANDN}(n, s_{\\tau}) Y[\\tau] \\leftarrow y \\begin{split} r &\\leftarrow (y - d^\\top \\mu) / \\|d\\|, \\quad \\sigma_e^2 \\leftarrow (1 - \\alpha) \\sigma_e^2 + \\alpha r^2 \\\\ j &\\leftarrow \\arg\\max_i \\Sigma_{ii} \\quad \\triangleright \\text{Find max uncertainty axis} \\end{split} y \\leftarrow Y[j] ▶ Reuse cached value K \\leftarrow \\Sigma d / (d^{\\top} \\Sigma d + \\sigma_e^2) \\mu \\leftarrow \\mu + K(y - d^{\\mathsf{T}}\\mu), \\quad \\Sigma \\leftarrow \\Sigma - Kd^{\\mathsf{T}}\\Sigma end for for i = 1 to k do \\theta \\leftarrow \\theta - \\eta \\cdot \\mu_i \\cdot \\text{RANDN}(n, s_i) end for end for return \\theta RANDN(n, s): returns n-dim Gaussian vector seeded by s", "source": "marker_v2", "marker_block_id": "/page/2/Text/17"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0032", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "Definition 3.4. The one-side difference of \\mathcal{L} along the displacement d \\in \\mathbb{R}^k in subspace B on minibatch \\xi is defined as follows:", "source": "marker_v2", "marker_block_id": "/page/2/Text/18"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0033", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Equation", "text": "\\hat{y}(\\theta; \\xi, d) = \\frac{\\mathcal{L}(\\theta + \\varepsilon B d; \\xi) - \\mathcal{L}(\\theta; \\xi)}{\\varepsilon}, \\tag{5}", "source": "marker_v2", "marker_block_id": "/page/2/Equation/19"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0034", "section": "3.1. Preliminaries", "page_start": 3, "page_end": 3, "type": "Text", "text": "where \\varepsilon > 0 is a small constant.", "source": "marker_v2", "marker_block_id": "/page/2/Text/20"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0035", "section": "3.2. Bayesian Gradient Estimation", "page_start": 3, "page_end": 3, "type": "Text", "text": "For the \\mathcal{L}(\\theta+\\varepsilon Bd) , the subspace gradient can be obtained through the chain rule:", "source": "marker_v2", "marker_block_id": "/page/2/Text/22"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0036", "section": "3.2. Bayesian Gradient Estimation", "page_start": 3, "page_end": 3, "type": "Equation", "text": "g_d := \\nabla_d \\mathcal{L}(\\theta + \\varepsilon Bd \\mid d = 0) = \\varepsilon B^T g, (6)", "source": "marker_v2", "marker_block_id": "/page/2/Equation/23"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0037", "section": "3.2. Bayesian Gradient Estimation", "page_start": 3, "page_end": 3, "type": "Text", "text": "where g:=\\nabla\\mathcal{L} is the real gradient of \\mathcal{L} . In order to keep numerical accuracy controllable, we introduce the concept of normalized subspace gradient as \\tilde{g}:=B^\\top g=\\frac{g_s}{\\varepsilon}\\in\\mathbb{R}^k .", "source": "marker_v2", "marker_block_id": "/page/2/Text/24"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0038", "section": "3.2. Bayesian Gradient Estimation", "page_start": 3, "page_end": 3, "type": "Text", "text": "Lemma 3.5. For any direction d \\in \\mathbb{R}^k of subspace B, the expectation of one-side difference \\hat{y}(d) satisfies:", "source": "marker_v2", "marker_block_id": "/page/2/Text/25"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0039", "section": "3.2. Bayesian Gradient Estimation", "page_start": 3, "page_end": 3, "type": "Equation", "text": "\\mathbb{E}[\\hat{y}(d)] = d^{\\mathsf{T}} B^{\\mathsf{T}} q + O(\\varepsilon L) \\approx d^{\\mathsf{T}} \\tilde{q}. \\tag{7}", "source": "marker_v2", "marker_block_id": "/page/2/Equation/26"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0040", "section": "3.2. Bayesian Gradient Estimation", "page_start": 4, "page_end": 4, "type": "TableGroup", "text": "Table 1. Test accuracy (%) on RoBERTa-large (355M). We report the mean±std over 5 runs. The top two results are highlighted in bold . BSZO-B is the baseline version of BSZO without caching optimization. Метнор SST-2 RTE СВ WIC TREC Avg MEZO 92.22 (\\pm 0.42) 66.35 (±3.06) 86.07 (±5.56) 55.20 (±3.73) 85.36 (±2.33) 77.04 MEZO-ADAM 92.34 ( \\pm 0.50 ) 63.61 (± 1.41 ) 81.07 (\\pm 2.71) 52.85 (\\pm 4.19) 78.80 \\ (\\pm 5.76) 73.73 HıZOO 91.44 (\\pm 0.45) 59.21 (\\pm 2.46) 76.43 \\ (\\pm 1.96) 53.60 (\\pm 2.93) 63.44 (\\pm 2.61) 68.82 LOZO 91.83 ( \\pm 0.30 ) 62.60 (\\pm 2.31) 84.29 (\\pm 3.87) 54.20~(\\pm 1.32) 77.76 ( \\pm 2.15 ) 74.14 BSZO 92.66 (\\pm 0.21) 67.80 (\\pm 1.52) 85.71 (\\pm 1.79) 56.05 ( \\pm 1.47 ) 84.16 ( \\pm 0.54 ) 77.28 BSZO-B 92.27\\ (\\pm0.41) 68.38 ( \\pm 1.94 ) 84.29 ( \\pm 1.49 ) 57.21 (\\pm 0.98) 84.80 (\\pm 1.57) 77.39", "source": "marker_v2", "marker_block_id": "/page/3/TableGroup/399"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0041", "section": "3.2. Bayesian Gradient Estimation", "page_start": 4, "page_end": 4, "type": "Text", "text": "Based on Lemma3.5, we can model the one-side difference \\hat{y}(d) as a linear observation of the normalized subspace gradient \\tilde{g} with Gaussian noise:", "source": "marker_v2", "marker_block_id": "/page/3/Text/4"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0042", "section": "3.2. Bayesian Gradient Estimation", "page_start": 4, "page_end": 4, "type": "Equation", "text": "\\hat{y}(d) = d^{\\top} \\tilde{g} + \\nu, \\quad \\nu \\sim \\mathcal{N}(0, \\sigma_e^2 ||d||^2), (8)", "source": "marker_v2", "marker_block_id": "/page/3/Equation/5"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0043", "section": "3.2. Bayesian Gradient Estimation", "page_start": 4, "page_end": 4, "type": "Text", "text": "where \\nu represents comprehensive noise term. The justification of the variance definition is provided in Appendix B.2. Then, we adopt a Bayesian approach by placing a Gaussian prior on \\tilde{g} , i.e., \\tilde{g} \\sim \\mathcal{N}(0, \\sigma_p^2 I_k) which make the posterior computable in closed-form (Kalman, 1960).", "source": "marker_v2", "marker_block_id": "/page/3/Text/6"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0044", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "According to the standard Bayesian linear regression theory (Rasmussen & Williams, 2006), after m perturbations and observations (d^{(1)}, \\hat{y}^{(1)}), \\ldots, (d^{(m)}, \\hat{y}^{(m)}) , the posterior \\tilde{g}|Y \\sim \\mathcal{N}(\\mu^{(m)}, \\Sigma^{(m)}) is also a Gaussian distribution, where", "source": "marker_v2", "marker_block_id": "/page/3/Text/8"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0045", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Equation", "text": "\\Sigma^{(m)} = \\left(\\sigma_p^{-2} I_k + D^{\\top} R^{-1} D\\right)^{-1}, \\mu^{(m)} = \\Sigma^{(m)} D^{\\top} R^{-1} Y. (9)", "source": "marker_v2", "marker_block_id": "/page/3/Equation/9"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0046", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "Here, D = [d^{(1)}, \\dots, d^{(m)}]^{\\top} \\in \\mathbb{R}^{m \\times k} is the design matrix, Y = [\\hat{y}^{(1)}, \\dots, \\hat{y}^{(m)}]^{\\top} \\in \\mathbb{R}^m is the observation vector, and R = \\operatorname{diag}(\\sigma_e^2 \\|d^{(1)}\\|^2, \\dots, \\sigma_e^2 \\|d^{(m)}\\|^2) is the noise covariance matrix. When m > k or \\Sigma is already full-rank, we set the new sampling direction to the principal eigenvector of the covariance matrix, i.e., d^{(j)} = v_{\\max}(\\Sigma^{(j-1)}) .", "source": "marker_v2", "marker_block_id": "/page/3/Text/10"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0047", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "After getting the posterior mean \\mu^{(m)} , we can use it as the final displacement in subspace B, which means the parameters updated by:", "source": "marker_v2", "marker_block_id": "/page/3/Text/11"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0048", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Equation", "text": "\\Delta \\theta = -\\eta B \\mu^{(k)},\\tag{10}", "source": "marker_v2", "marker_block_id": "/page/3/Equation/12"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0049", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "where \\eta>0 is learning rate. In this way, we can use the finite k forward passes to update the parameters k times, with \\mu^{(k)} controlling the step size in subspace. This means that, for the same batch, the parameters move along a \"diagonal\" direction rather than a single direction.", "source": "marker_v2", "marker_block_id": "/page/3/Text/13"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0050", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "Corollary 3.6. Under coordinate-axis sampling, i.e., m = k and d^{(i)} = e_i (the i -th standard basis vector), then the", "source": "marker_v2", "marker_block_id": "/page/3/Text/14"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0051", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "posterior mean and covariance reduce to:", "source": "marker_v2", "marker_block_id": "/page/3/Text/15"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0052", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Equation", "text": "\\Sigma^{(k)} = \\gamma I_k, \\mu^{(k)} = \\gamma Y, (11)", "source": "marker_v2", "marker_block_id": "/page/3/Equation/16"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0053", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "where \\gamma:=\\frac{\\sigma_p^2}{\\sigma_p^2+\\sigma_e^2}\\in(0,1) is the shrinkage factor.", "source": "marker_v2", "marker_block_id": "/page/3/Text/17"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0054", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "Corollary 3.6 simplifies the form of the posterior distribution, thereby making the analysis and update easier. Thus, we adopt coordinate-axis sampling as the default sampling strategy in BSZO (for the first k sampling directions).", "source": "marker_v2", "marker_block_id": "/page/3/Text/18"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0055", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "Theorem 3.7. Let \\Delta \\theta = -\\eta B \\mu^{(k)} . Under Assumptions 3.1 and Assumption3.2, we have:", "source": "marker_v2", "marker_block_id": "/page/3/Text/19"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0056", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Equation", "text": "\\mathbb{E}[\\Delta \\theta] = -\\eta \\gamma k \\cdot \\nabla \\mathcal{L}(\\theta) + O(\\varepsilon^3) \\tag{12}", "source": "marker_v2", "marker_block_id": "/page/3/Equation/20"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0057", "section": "3.3. Posterior Update In Subspace", "page_start": 4, "page_end": 4, "type": "Text", "text": "The above theorem shows that the expected update direction aligns with the negative gradient under coordinate-axis sampling. Furthermore, the analysis of the expected direction under adaptive sampling is provided in Theorem B.6 (Appendix B.5).", "source": "marker_v2", "marker_block_id": "/page/3/Text/21"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0058", "section": "3.4. Algorithm", "page_start": 4, "page_end": 4, "type": "Text", "text": "Clearly, the choice of \\gamma is crucial. We observe that the norm of the projected gradient estimated via finite differences remains stable during the early and middle stages of optimization, but tends to grow in later stages due to numerical precision limitations, which restricts the achievable convergence accuracy. To this end, we design a residual-based mechanism that adaptively adjusts \\sigma_e after the \\tau -th sample:", "source": "marker_v2", "marker_block_id": "/page/3/Text/23"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0059", "section": "3.4. Algorithm", "page_start": 4, "page_end": 4, "type": "Equation", "text": "r_{\\tau} := \\frac{\\hat{y}^{(\\tau)} - d^{(\\tau)^{\\top}} \\mu^{(\\tau - 1)}}{\\|d^{(\\tau)}\\|}, (\\sigma_e^{(\\tau)})^2 = (1 - \\alpha)(\\sigma_e^{(\\tau - 1)})^2 + \\alpha r_{\\tau}^2, (13)", "source": "marker_v2", "marker_block_id": "/page/3/Equation/24"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0060", "section": "3.4. Algorithm", "page_start": 4, "page_end": 4, "type": "Text", "text": "where \\alpha \\in (0,1) is the smoothing factor.", "source": "marker_v2", "marker_block_id": "/page/3/Text/25"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0061", "section": "3.4. Algorithm", "page_start": 4, "page_end": 4, "type": "Text", "text": "Corollary 3.6 shows that under coordinate-axis sampling, the posterior covariance \\Sigma degenerates into a diagonal matrix with a single distinct eigenvalue, implying that any axis-aligned direction may serve as the adaptive sampling", "source": "marker_v2", "marker_block_id": "/page/3/Text/26"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0063", "section": "3.4. Algorithm", "page_start": 5, "page_end": 5, "type": "TableGroup", "text": "Table 2. Memory complexity comparison of different methods. Method Memory Additional Space MeZO-SGD O(n) O(1) MeZO-Adam O(n) O(n) HiZOO O(n) O(n) LOZO O(n) O(1) BSZO (Ours) O(n) O(k^2)", "source": "marker_v2", "marker_block_id": "/page/4/TableGroup/310"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0064", "section": "3.4. Algorithm", "page_start": 5, "page_end": 5, "type": "Text", "text": "direction when j>k. The residual-based adaptation breaks this degeneracy by differentiating the diagonal entries of \\Sigma , thereby producing a meaningful adaptive sampling direction. However, the diagonal structure implies that the adaptive sampling direction always coincides with one of the coordinate axes, which can lead to redundant computation. To address this, we cache the (d,y) pairs from the first k samples within each batch. When j>k, we directly reuse the cached pair corresponding to the largest diagonal entry of \\Sigma , eliminating the need for an additional forward pass. This extra sample leverages the updated residual to more precisely correct the step size along the direction of greatest uncertainty. In practice, we set m=k+1 by default.", "source": "marker_v2", "marker_block_id": "/page/4/Text/4"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0065", "section": "3.4. Algorithm", "page_start": 5, "page_end": 5, "type": "Text", "text": "The main procedure of BSZO is summarized in Algorithm 1. Following MeZO (Malladi et al., 2023), we store perturbation vectors via random seeds rather than explicitly, requiring only O(k^2) additional space. A basic version without caching is provided in Algorithm 2 (Appendix A), which supports arbitrary initial sampling directions and additional adaptive sampling steps. In this version, the adaptive sampling performs extra forward passes to obtain new function values. Typically, the result of this forward pass coincides with the cached value. However, under reduced precision (fp16 or bf16), certain GPU operations use nondeterministic algorithms (PyTorch Team, 2024), causing function evaluations to differ across calls even with identical inputs and random seeds. Moreover, due to numerical errors, parameters do not fully recover after perturbation and restoration. As a result, the extra forward pass in the basic version yields a value different from the cached one, better reflecting the local landscape at the perturbed point and leading to improved performance (as confirmed in Section 5.3). To examine this effect, we include the coordinate-axis sampling variant of Algorithm 2 as an experimental baseline (denoted as BSZO-B). Table 2 compares the memory complexity of different methods, showing that BSZO is also memory-efficient. We analyze the convergence properties of BSZO in the next section.", "source": "marker_v2", "marker_block_id": "/page/4/Text/5"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0066", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Text", "text": "Definition 4.1. Let \\Sigma = \\operatorname{Cov}(\\zeta) be the covariance matrix of the gradient noise, the effective noise \\sigma_e^2 can be decomposed", "source": "marker_v2", "marker_block_id": "/page/4/Text/7"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0067", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "FigureGroup", "text": "Figure 2. GPU memory usage comparison across different models. BSZO maintains memory consumption comparable to MeZO, while MeZO-Adam and HiZOO require significantly more memory due to storing optimizer states or Hessian estimates.", "source": "marker_v2", "marker_block_id": "/page/4/FigureGroup/311"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0068", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Equation", "text": "as: \\sigma_c^2 = \\sigma_c^2 + \\operatorname{tr}(\\Sigma), \\tag{14}", "source": "marker_v2", "marker_block_id": "/page/4/Equation/10"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0069", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Text", "text": "where \\operatorname{tr}(\\Sigma) \\leq \\sigma_g^2 (Assumption 3.2). The justification for this definition is provided by Lemma B.5 in Appendix B.2. For analytical tractability, we assume that \\sigma_e is fixed (taking the worst-case noise across batches gives the same result). The convergence of BSZO is characterized by the following theorem:", "source": "marker_v2", "marker_block_id": "/page/4/Text/11"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0070", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Text", "text": "Theorem 4.2. Under Assumptions 3.1 and 3.2, let \\tilde{n} = n + k + 1 be effective dimension. Suppose m = k and \\eta < \\frac{2}{L\\gamma\\tilde{n}} . Then, after T iterations, the following inequality holds:", "source": "marker_v2", "marker_block_id": "/page/4/Text/12"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0071", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Equation", "text": "\\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\nabla \\mathcal{L}(\\theta_t)\\|^2] \\le \\frac{\\mathcal{L}(\\theta_0) - \\mathcal{L}^*}{\\beta(\\eta)\\eta\\gamma kT} + \\frac{L\\eta\\gamma(\\tilde{n} \\cdot tr(\\Sigma) + n\\sigma_{\\varepsilon}^2)}{2\\beta(\\eta)}, (15)", "source": "marker_v2", "marker_block_id": "/page/4/Equation/13"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0072", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Text", "text": "where \\beta(\\eta) := 1 - \\frac{L\\eta\\gamma\\tilde{n}}{2} and \\sigma_e^2 = \\sigma_\\varepsilon^2 + tr(\\Sigma) .", "source": "marker_v2", "marker_block_id": "/page/4/Text/14"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0073", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Text", "text": "Corollary 4.3. Let \\eta = \\frac{1}{L\\gamma\\bar{n}} , then \\beta = 1/2 , which simplifies Theorem 4.2 to:", "source": "marker_v2", "marker_block_id": "/page/4/Text/15"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0074", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Equation", "text": "\\frac{1}{T} \\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\nabla \\mathcal{L}(\\theta_t)\\|^2] \\le \\frac{2L\\gamma \\tilde{n}\\Delta_0}{kT} + tr(\\Sigma) + \\frac{n}{\\tilde{n}}\\sigma_{\\varepsilon}^2, \\tag{16}", "source": "marker_v2", "marker_block_id": "/page/4/Equation/16"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0075", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Text", "text": "where \\Delta_0 := \\mathcal{L}(\\theta_0) - \\mathcal{L}^* .", "source": "marker_v2", "marker_block_id": "/page/4/Text/17"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0076", "section": "4. Theoretical Analysis", "page_start": 5, "page_end": 5, "type": "Text", "text": "According to Corollary 4.3, the convergence rate of BSZO is improved by the factor of subspace dimension k. Although \\gamma slightly reduces the convergence rate, it is crucial for training stability. We also analyze the convergence under adaptive sampling in Theorem B.7 (Appendix B.6).", "source": "marker_v2", "marker_block_id": "/page/4/Text/18"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0077", "section": "5. Experiments", "page_start": 5, "page_end": 5, "type": "Text", "text": "In this section, we evaluate the performance of BSZO and BSZO-B (Section 3.4) on various fine-tuning tasks in dif-", "source": "marker_v2", "marker_block_id": "/page/4/Text/20"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0078", "section": "5. Experiments", "page_start": 6, "page_end": 6, "type": "TableGroup", "text": "Table 3. Experiments on three different models (OPT-1.3B, Mistral-7B, OPT-13B). We show the test accuracy (%) of MeZO, MeZO-Adam, HiZOO, LOZO, BSZO, and BSZO-B on them, with the top two results highlighted in bold. BSZO-B is the baseline version of BSZO. Since fp16 can cause training crashes with Adam, we did not record the results of ZO-Adam for Mistral-7B. * indicates training collapse due to numerical overflow under fp16 precision. MODEL METHOD SST-2 RTE COPA WIC WSC TREC AVG SENTIMENT NLI REASONING TOPIC OPT-1.3B MEZO 91.74 64.98 76.0 58.78 59.62 80.6 71.95 OPT-1.3B MEZO-ADAM 93.35 60.29 75.0 56.58 62.50 79.4 71.19 OPT-1.3B HIZOO 91.51 62.09 77.0 56.58 63.46 66.2 69.48 OPT-1.3B LOZO 92.66 63.18 75.0 56.58 57.69 75.8 70.15 OPT-1.3B BSZO 92.43 66.79 79.0 59.88 64.42 87.0 74.92 OPT-1.3B BSZO-B 93.01 64.98 81.0 59.09 61.54 87.4 74.50 MISTRAL-7B MEZO 90.94 64.26 88.0 56.58 63.46 88.6 75.31 MISTRAL-7B HIZOO 93.01 63.90 90.0 55.64 63.46 * 73.20 MISTRAL-7B LOZO 92.43 61.37 86.0 57.83 63.46 * 72.22 MISTRAL-7B BSZO 94.50 75.81 87.0 60.03 59.62 90.0 77.83 MISTRAL-7B BSZO-B 94.04 78.70 87.0 59.72 60.58 91.0 78.51 OPT-13B MEZO 85.89 62.09 80.0 54.55 60.58 59.4 67.09 OPT-13B MEZO-ADAM 79.82 61.73 81.0 54.39 57.69 62.2 66.14 OPT-13B HIZOO 72.71 62.46 80.0 52.35 46.15 19.8 55.58 OPT-13B LOZO 86.12 57.04 80.0 55.96 59.62 60.4 66.52 OPT-13B BSZO 93.23 69.31 83.0 56.27 61.54 79.2 73.76 OPT-13B BSZO-B 91.86 71.84 85.0 53.14 64.42 80.8 74.51", "source": "marker_v2", "marker_block_id": "/page/5/TableGroup/608"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0079", "section": "5. Experiments", "page_start": 6, "page_end": 6, "type": "Text", "text": "ferent language models, comparing them with several baselines: MeZO (Malladi et al., 2023) , MeZO-Adam (Mal ladi et al., 2023) , HiZOO (Zhao et al., 2025) , and LOZO (Chen et al., 2024) . Our experiments show that both variants achieve excellent robustness and strong accuracy across most scenarios, requiring only the GPU memory needed for forward propagation, making them more cost-effective than HiZOO and MeZO-Adam.", "source": "marker_v2", "marker_block_id": "/page/5/Text/3"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0080", "section": "5.1. Experimental Setup", "page_start": 6, "page_end": 6, "type": "Text", "text": "Language Models. The experiments in this paper center on two categories of models: masked Language Models (mLMs) and decoder-only Large Language Models (LLMs). For mLMs, we adopt RoBERTa-large (355M) (Liu et al., 2019) as the backbone model. For decoder-only LLMs, we select OPT-1.3B and OPT-13B (Zhang et al., 2022) , as well as Mistral-7B (Jiang et al., 2023) .", "source": "marker_v2", "marker_block_id": "/page/5/Text/5"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0081", "section": "5.1. Experimental Setup", "page_start": 6, "page_end": 6, "type": "Text", "text": "Datasets. We full fine-tune the above models on tasks from the GLUE (Wang et al., 2018) , SuperGLUE (Wang et al., 2019) and TREC (Li & Roth, 2002) benchmarks, including Stanford Sentiment Treebank (SST-2), Boolean Questions (BoolQ) (Clark et al., 2019) , Recognizing Textual Entailment (RTE) (Dagan et al., 2005) , Choice of Plausible Alternatives (COPA) (Roemmele et al., 2011) , Word-in-Context (WIC) (Pilehvar & Camacho-Collados, 2019) , Winograd Schema Challenge (WSC) (Levesque et al., 2012) , CommitmentBank (CB) (De Marneffe et al.,", "source": "marker_v2", "marker_block_id": "/page/5/Text/6"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0082", "section": "5.1. Experimental Setup", "page_start": 6, "page_end": 6, "type": "Text", "text": "2019) , and TREC. Following HiZOO (Zhao et al., 2025) , we use the first n 1 samples (up to 1000) from the training set for training and the next n 2 samples for validation. The original validation set serves as the test set. See Table 6 in Appendix C for specific values of n 1 and n2.", "source": "marker_v2", "marker_block_id": "/page/5/Text/7"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0083", "section": "5.1. Experimental Setup", "page_start": 6, "page_end": 6, "type": "Text", "text": "Hyperparameters. For BSZO and BSZO-B, we set the default subspace dimension k = 2 and the number of samples m = k + 1. This results in 3 forward passes per step for BSZO (with caching) and 4 for BSZO-B (without caching). BSZO matches HiZOO's forward pass count. As discussed in Section 3.4, we report results for both BSZO and BSZO-B across all models, with particular focus on comparing them under reduced precision (Mistral-7B in fp16 and OPT-13B in bf16) to examine the caching effect. Other methods use their default hyperparameters. Given the slower convergence of zeroth-order methods, all experiments are trained for up to 20,000 steps (Zhang et al., 2024) , with early stopping applied when validation performance does not improve for 8 evaluations (4,000 steps). For every experiment, we set the perturbation scale to ε = 10 − 4 and the batch size to 16. Hyperparameters are tuned via grid search. We select the best configuration based on validation performance and report its test accuracy. Due to memory constraints, we load OPT-13B in bf16 precision and Mistral-7B in fp16 precision, while other models use fp32. All experiments are conducted on a single H200 GPU. More details are provided in Appendix C.", "source": "marker_v2", "marker_block_id": "/page/5/Text/8"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0084", "section": "5.2. Performance in Masked Language Models", "page_start": 7, "page_end": 7, "type": "Text", "text": "BSZO achieves stable and competitive performance on mLMs. As shown in Table 1, BSZO-B reaches 77.39% average accuracy on RoBERTa-large, surpassing MeZO (77.04%, +0.35%), MeZO-Adam (73.73%, +3.66%), Hi-ZOO (68.82%, +8.57%), and LOZO (74.14%, +3.25%). BSZO achieves 77.28% average accuracy, second only to BSZO-B. BSZO secures top result on SST-2 (92.66%), while BSZO-B excels on RTE (68.38%) and WIC (57.21%). Moreover, BSZO exhibits notably lower variance across tasks (see Table 11 for raw results). Both variants demonstrate strong and consistent performance across all tasks.", "source": "marker_v2", "marker_block_id": "/page/6/Text/2"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0085", "section": "5.3. Performance in decoder-only models", "page_start": 7, "page_end": 7, "type": "Text", "text": "BSZO performs well on larger LLMs. Table 3 shows that BSZO outperforms baselines on decoder-only models, with gains increasing as model size grows. BSZO-B typically maintains a small lead over BSZO.", "source": "marker_v2", "marker_block_id": "/page/6/Text/4"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0086", "section": "5.3. Performance in decoder-only models", "page_start": 7, "page_end": 7, "type": "Text", "text": "OPT-1.3B. BSZO achieves 74.92% average accuracy, the highest among all methods, beating MeZO (71.95%, +2.97%), MeZO-Adam (71.19%, +3.73%), HiZOO (69.48%, +5.44%), and LOZO (70.15%, +4.77%). BSZO-B reaches 74.50% average accuracy. BSZO secures top results on RTE (66.79%), WIC (59.88%), and WSC (64.42%), while BSZO-B excels on COPA (81.0%) and TREC (87.4%). Both variants perform well across most tasks.", "source": "marker_v2", "marker_block_id": "/page/6/Text/5"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0087", "section": "5.3. Performance in decoder-only models", "page_start": 7, "page_end": 7, "type": "Text", "text": "Mistral-7B (fp16). BSZO reaches 77.83% on average, ahead of MeZO (75.31%, +2.52%), HiZOO (73.20%, +4.63%), and LOZO (72.22%, +5.61%). It also achieves the best results on SST-2 (94.50%, +1.49% vs HiZOO) and WIC (60.03%, +3.45% vs MeZO). BSZO-B reaches 78.51% on average, excelling on RTE (78.70%) and TREC (91.0%). The small 0.68% gap shows that the two variants perform very similarly.", "source": "marker_v2", "marker_block_id": "/page/6/Text/6"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0088", "section": "5.3. Performance in decoder-only models", "page_start": 7, "page_end": 7, "type": "Text", "text": "OPT-13B (bf16). The gains grow larger here. BSZO reaches 73.76% on average, up 6.67% over MeZO (67.09%), 7.62% over MeZO-Adam (66.14%), and 7.24% over LOZO (66.52%). BSZO achieves strong results across tasks, including top performance on WIC (56.27%), with particularly notable gains on SST-2 (93.23%, +7.34% vs MeZO) and TREC (79.2%, +19.8% vs MeZO). BSZO-B reaches 74.51% on average (+7.42% vs MeZO), with stronger balance across tasks. BSZO-B maintains a slight edge with one additional forward pass, though the gap in average accuracy remains very small (0.75%).", "source": "marker_v2", "marker_block_id": "/page/6/Text/7"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0089", "section": "5.3. Performance in decoder-only models", "page_start": 7, "page_end": 7, "type": "Text", "text": "Robustness. Reduced precision exposes fragility in several baselines (Table 3) and Figure 1. HiZOO and LOZO are particularly affected: on Mistral-7B (fp16), both methods suffer from TREC training overflow (*). On OPT-13B (bf16), all baseline methods show varying degrees of performance degradation compared to OPT-1.3B, with HiZOO being especially severe—its average accuracy drops from 69.48% to 55.58%, with TREC collapsing to 19.8% and", "source": "marker_v2", "marker_block_id": "/page/6/Text/8"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0090", "section": "5.3. Performance in decoder-only models", "page_start": 7, "page_end": 7, "type": "TableGroup", "text": "Table 4. Memory usage (GB) and per-step time (ms) across different models. OPT-1.3B Mistral-7B OPT-13B Method Mem Time Mem Time Mem Time MeZO 9.1 109.7 18.3 283.9 30.0 464.2 MeZO-Adam 19.7 135.1 47.6 373.1 82.1 614.5 HiZOO 15.7 188.0 34.3 540.2 58.9 877.1 LOZO 9.3 102.0 18.3 274.2 30.0 452.0 BSZO 9.8 97.0 18.8 275.7 30.1 440.5", "source": "marker_v2", "marker_block_id": "/page/6/TableGroup/611"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0091", "section": "5.3. Performance in decoder-only models", "page_start": 7, "page_end": 7, "type": "Text", "text": "WSC to 46.15%. We suspect H200's default TF32 mode introduces errors in Hessian-based estimates. In contrast, BSZO and BSZO-B remain stable throughout all precision settings, with BSZO-B even maintaining performance (from 74.50% to 74.51%).", "source": "marker_v2", "marker_block_id": "/page/6/Text/11"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0092", "section": "5.4. Memory and Time Efficiency", "page_start": 7, "page_end": 7, "type": "Text", "text": "BSZO keeps memory usage low. As shown in Figure 2 and Table 4, BSZO's memory footprint stays close to MeZO across three model scales—ranging from 1.00× to 1.08× of MeZO's usage. In contrast, HiZOO and MeZO-Adam need 1.73×–1.96× and 2.16×–2.74× more memory because they store additional optimizer states (momentum, Hessian estimates). BSZO avoids this overhead by using only O(k 2 ) extra space for the posterior covariance and adaptive noise estimation.", "source": "marker_v2", "marker_block_id": "/page/6/Text/13"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0093", "section": "5.4. Memory and Time Efficiency", "page_start": 7, "page_end": 7, "type": "Text", "text": "BSZO runs fast. Table 4 also reports per-step time. BSZO and LOZO are the fastest—both under 100ms per step on OPT-1.3B. HiZOO is roughly 2× slower due to Hessian estimation, and MeZO-Adam incurs extra cost from momentum updates.", "source": "marker_v2", "marker_block_id": "/page/6/Text/14"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0094", "section": "5.5. Ablation Study", "page_start": 7, "page_end": 7, "type": "Text", "text": "Table 5 shows ablation results on OPT-1.3B for two design choices of BSZO: subspace dimension k and sample count m. In Table 5( a), when m = k, RTE accuracy climbs from 60.29% (k = 1) to 67.51% (k = 4), while SST-2 peaks at k = 8 (93.23%), suggesting that increasing k generally improves performance. In Table 5( b), with extra refinement (m = k + 1), RTE performance improves consistently. Comparing to Table 5( a), m = k + 1 boosts RTE by 1-2% at most k levels (e.g., from 64.26% to 66.79% at k = 2, from 66.07% to 68.59% at k = 8). This confirms that the adaptive sampling step refines the posterior estimate (see Table 12 for more details). Table 5( c) investigates adaptive noise under bf16 precision on OPT-1.3B. As k grows, the gap between w/ and w/o adaptive noise becomes more pronounced: at k = 8, the adaptive variant leads by 8.67% on RTE, indicating that adaptive noise yields substantial gains in low-precision settings. Table 5( d) validates this on", "source": "marker_v2", "marker_block_id": "/page/6/Text/16"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0095", "section": "5.5. Ablation Study", "page_start": 8, "page_end": 8, "type": "TableGroup", "text": "Table 5. Ablation studies. (a) Effect of subspace dimension k with m = k on OPT-1.3B. (b) Effect of m = k + 1 on OPT-1.3B. (c) Effect of adaptive noise on OPT-1.3B (bf16). (d) Effect of adaptive noise on OPT-13B (bf16). Best results in bold. Full results in fp32 are in Table 12. (a) Effect of k (b) Effect of m (c) Adaptive Noise k SST-2 RTE k SST-2 RTE k w/ w/o 1 92.32 60.29 1 91.74 61.37 1 54.15 55.24 2 92.78 64.26 2 92.43 66.79 2 57.76 56.32 4 92.66 67.51 4 93.58 66.43 4 61.73 56.32 8 93.23 66.07 8 93.23 68.59 8 66.43 57.76", "source": "marker_v2", "marker_block_id": "/page/7/TableGroup/581"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0096", "section": "5.5. Ablation Study", "page_start": 8, "page_end": 8, "type": "Table", "text": "(d) Adaptive Noise on OPT-13B (bf16) Method SST-2 RTE WSC COPA TREC WIC w/ adaptive w/o adaptive 93.23 91.97 69.31 63.18 61.54 58.65 83.00 85.00 79.20 75.00 56.27 54.70", "source": "marker_v2", "marker_block_id": "/page/7/Table/4"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0097", "section": "5.5. Ablation Study", "page_start": 8, "page_end": 8, "type": "Text", "text": "OPT-13B (bf16), where adaptive noise brings improvements on 5 out of 6 tasks, with RTE gaining 6.13%.", "source": "marker_v2", "marker_block_id": "/page/7/Text/5"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0098", "section": "6. Conclusion", "page_start": 8, "page_end": 8, "type": "Text", "text": "In this work, we introduce BSZO, which is the first zerothorder optimizer that applies Kalman filtering to aggregate gradient information across multiple perturbation directions for LLM fine-tuning. By treating finite-difference measurements as noisy observations of the true gradient, BSZO builds a posterior distribution over the projected gradient and refines it through Bayesian updates. We design a residual-based adaptive mechanism to adjust the perturbation scale adaptively without manual tuning. Our theoretical analysis shows that BSZO improves the convergence rate by a factor of k/γ over standard ZO methods. Experiments on RoBERTa, Mistral, and OPT show that BSZO achieves strong accuracy across various tasks, remains stable under fp16/bf16 precision where existing methods often collapse, and keeps memory usage close to inference-only baselines.", "source": "marker_v2", "marker_block_id": "/page/7/Text/7"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0099", "section": "Software and Data", "page_start": 8, "page_end": 8, "type": "Text", "text": "Our implementation is available at com/AeonianQuill/BSZO . Datasets used in this work (GLUE, SuperGLUE, TREC) are publicly accessible and should be downloaded separately. Pre-trained models can also be obtained from Hugging Face.", "source": "marker_v2", "marker_block_id": "/page/7/Text/9"}
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0100", "section": "Impact Statement", "page_start": 8, "page_end": 8, "type": "Text", "text": "This paper presents work whose goal is to advance the field of machine learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.", "source": "marker_v2", "marker_block_id": "/page/7/Text/11"}
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/marker_meta.json
ADDED
@@ -0,0 +1,2512 @@
{
  "table_of_contents": [
    {"title": "000\n001\n002\n003\n004\n005\n006\n007\n008\n009\n010\n011\n012\n013\n014\n015\n016\n017\n018\n019\n020\n021\n022\n023\n024\n025\n026\n027\n028\n029\n030\n031\n032\n033\n034\n035", "heading_level": null, "page_id": 0, "polygon": [[24.802734375, 69.16845703125], [40.49589920043945, 69.16845703125], [40.49589920043945, 497.5630798339844], [24.802734375, 497.5630798339844]]},
    {"title": "Robust and Efficient Zeroth-Order LLM Fine-Tuning via\nAdaptive Bayesian Subspace Optimizer", "heading_level": null, "page_id": 0, "polygon": [[122.822998046875, 88.9453125], [474.5390625, 88.9453125], [474.5390625, 122.191162109375], [122.822998046875, 122.191162109375]]},
    {"title": "Anonymous Authors1", "heading_level": null, "page_id": 0, "polygon": [[248.625, 158.94140625], [340.7579040527344, 158.94140625], [340.7579040527344, 170.8990478515625], [248.625, 170.8990478515625]]},
    {"title": "Abstract", "heading_level": null, "page_id": 0, "polygon": [[148.517578125, 192.97265625], [194.6832733154297, 192.97265625], [194.6832733154297, 205.94830322265625], [148.517578125, 205.94830322265625]]},
    {"title": "1. Introduction", "heading_level": null, "page_id": 0, "polygon": [[52.892578125, 545.9591522216797], [132.27606201171875, 545.9591522216797], [132.27606201171875, 557.9143524169922], [52.892578125, 557.9143524169922]]},
    {"title": "2. Related Work", "heading_level": null, "page_id": 1, "polygon": [[53.19140625, 662.6011657714844], [138.55255126953125, 662.6011657714844], [138.55255126953125, 674.5563583374023], [53.19140625, 674.5563583374023]]},
    {"title": "3. Method", "heading_level": null, "page_id": 2, "polygon": [[53.19140625, 325.23046875], [108.0, 325.23046875], [108.0, 335.25], [53.19140625, 335.25]]},
    {"title": "3.1. Preliminaries", "heading_level": null, "page_id": 2, "polygon": [[52.892578125, 394.5], [130.5, 394.5], [130.5, 403.34765625], [52.892578125, 403.34765625]]},
    {"title": "3.2. Bayesian Gradient Estimation", "heading_level": null, "page_id": 2, "polygon": [[306.0, 562.5], [453.75, 562.5], [453.75, 573.50390625], [306.0, 573.50390625]]},
    {"title": "3.3. Posterior Update In Subspace", "heading_level": null, "page_id": 3, "polygon": [[53.7890625, 362.25], [200.25, 362.25], [200.25, 373.18359375], [53.7890625, 373.18359375]]},
    {"title": "3.4. Algorithm", "heading_level": null, "page_id": 3, "polygon": [[306.0, 489.75], [369.75, 489.75], [369.75, 499.25390625], [306.0, 499.25390625]]},
    {"title": "4. Theoretical Analysis", "heading_level": null, "page_id": 4, "polygon": [[51.697265625, 675.0], [171.75, 675.0], [171.75, 684.87890625], [51.697265625, 684.87890625]]},
    {"title": "5. Experiments", "heading_level": null, "page_id": 4, "polygon": [[306.0, 675.0], [385.5, 675.0], [385.5, 684.87890625], [306.0, 684.87890625]]},
    {"title": "5.1. Experimental Setup", "heading_level": null, "page_id": 5, "polygon": [[53.490234375, 471.3285217285156], [157.83558654785156, 471.3285217285156], [157.83558654785156, 481.2911071777344], [53.490234375, 481.2911071777344]]},
    {"title": "5.2. Performance in Masked Language Models", "heading_level": null, "page_id": 6, "polygon": [[53.19140625, 69.22265625], [252.74925231933594, 69.22265625], [252.74925231933594, 79.33905029296875], [53.19140625, 79.33905029296875]]},
    {"title": "5.3. Performance in decoder-only models", "heading_level": null, "page_id": 6, "polygon": [[54.984375, 232.7144775390625], [228.39071655273438, 232.7144775390625], [228.39071655273438, 242.67706298828125], [54.984375, 242.67706298828125]]},
    {"title": "5.4. Memory and Time Efficiency", "heading_level": null, "page_id": 6, "polygon": [[304.20703125, 291.19921875], [448.9388122558594, 291.19921875], [448.9388122558594, 302.2611083984375], [304.20703125, 302.2611083984375]]},
    {"title": "5.5. Ablation Study", "heading_level": null, "page_id": 6, "polygon": [[307.44000244140625, 497.4795227050781], [389.671875, 497.4795227050781], [389.671875, 508.53515625], [307.44000244140625, 508.53515625]]},
    {"title": "6. Conclusion", "heading_level": null, "page_id": 7, "polygon": [[53.490234375, 339.15234375], [124.52911376953125, 339.15234375], [124.52911376953125, 351.5323486328125], [53.490234375, 351.5323486328125]]},
    {"title": "Software and Data", "heading_level": null, "page_id": 7, "polygon": [[52.59375, 554.8881530761719], [150.8544464111328, 554.8881530761719], [150.8544464111328, 566.8433532714844], [52.59375, 566.8433532714844]]},
    {"title": "Impact Statement", "heading_level": null, "page_id": 7, "polygon": [[51.99609375, 650.6461639404297], [146.74185180664062, 650.6461639404297], [146.74185180664062, 662.6013641357422], [51.99609375, 662.6013641357422]]},
    {"title": "References", "heading_level": null, "page_id": 7, "polygon": [[306.0, 67.67578125], [363.375, 67.67578125], [363.375, 79.80230712890625], [306.0, 79.80230712890625]]},
    {"title": "A. BSZO Basic Algorithm", "heading_level": null, "page_id": 9, "polygon": [[52.294921875, 68.0625], [188.25, 68.0625], [188.25, 78.890625], [52.294921875, 78.890625]]},
    {"title": "Algorithm 2 Bayesian Subspace Zeroth-Order Optimization (Basic Version)", "heading_level": null, "page_id": 9, "polygon": [[52.59375, 122.58984375], [361.5, 122.58984375], [361.5, 132.64453125], [52.59375, 132.64453125]]},
    {"title": "B. Theoretical Proofs", "heading_level": null, "page_id": 9, "polygon": [[53.19140625, 453.0], [163.5, 453.0], [163.5, 462.75], [53.19140625, 462.75]]},
    {"title": "B.1. Auxiliary Lemmas", "heading_level": null, "page_id": 9, "polygon": [[53.7890625, 473.25], [153.75, 473.25], [153.75, 483.78515625], [53.7890625, 483.78515625]]},
    {"title": "(a) Conditional variance derivation:", "heading_level": null, "page_id": 10, "polygon": [[51.99609375, 174.41015625], [208.5, 174.41015625], [208.5, 183.0], [51.99609375, 183.0]]},
    {"title": "(b) Unconditional variance derivation:", "heading_level": null, "page_id": 10, "polygon": [[52.59375, 354.0], [219.0, 354.0], [219.0, 363.12890625], [52.59375, 363.12890625]]},
    {"title": "Proof. (a) Norm concentration:", "heading_level": null, "page_id": 10, "polygon": [[52.294921875, 570.75], [187.5, 570.75], [187.5, 579.75], [52.294921875, 579.75]]},
    {"title": "(b) Approximate orthogonality:", "heading_level": null, "page_id": 10, "polygon": [[54.0, 706.5], [189.75, 706.5], [189.75, 715.81640625], [54.0, 715.81640625]]},
    {"title": "B.2. Noise Variance Justification", "heading_level": null, "page_id": 11, "polygon": [[52.294921875, 626.87109375], [192.465576171875, 626.87109375], [192.465576171875, 637.3851165771484], [52.294921875, 637.3851165771484]]},
    {"title": "660 Proof. Step 1: Decomposition of the observation.", "heading_level": null, "page_id": 12, "polygon": [[23.25, 69.0], [262.5, 69.0], [262.5, 78.890625], [23.25, 78.890625]]},
    {"title": "Step 2: Identifying the noise term.", "heading_level": null, "page_id": 12, "polygon": [[53.490234375, 263.25], [201.0, 263.25], [201.0, 273.796875], [53.490234375, 273.796875]]},
    {"title": "Step 3: Conditional variance (given z_i).", "heading_level": null, "page_id": 12, "polygon": [[52.59375, 318.0], [222.0, 318.0], [222.0, 327.9375], [52.59375, 327.9375]]},
    {"title": "Step 4: Unconditional variance (taking expectation over z_i).", "heading_level": null, "page_id": 12, "polygon": [[52.59375, 385.5], [309.75, 385.5], [309.75, 395.61328125], [52.59375, 395.61328125]]},
    {"title": "B.3. Proof of Main Convergence Theorem", "heading_level": null, "page_id": 12, "polygon": [[51.99609375, 526.5], [232.5, 526.5], [232.5, 536.37890625], [51.99609375, 536.37890625]]},
    {"title": "Step 2: Inner product term.", "heading_level": null, "page_id": 12, "polygon": [[52.892578125, 636.0], [173.25, 636.0], [173.25, 644.66015625], [52.892578125, 644.66015625]]},
    {"title": "Step 3: Second moment (detailed computation).", "heading_level": null, "page_id": 13, "polygon": [[51.99609375, 282.0], [257.25, 282.0], [257.25, 292.359375], [51.99609375, 292.359375]]},
    {"title": "Step 4: Combining.", "heading_level": null, "page_id": 14, "polygon": [[53.19140625, 372.0], [138.05859375, 372.0], [138.05859375, 382.078125], [53.19140625, 382.078125]]},
    {"title": "Step 5: Telescoping sum.", "heading_level": null, "page_id": 14, "polygon": [[54.0, 562.5], [160.5, 562.5], [160.5, 573.50390625], [54.0, 573.50390625]]},
    {"title": "B.4. Proof of Expected Update Direction", "heading_level": null, "page_id": 15, "polygon": [[50.80078125, 68.8359375], [226.5, 68.8359375], [226.5, 78.1171875], [50.80078125, 78.1171875]]},
    {"title": "Step 2: Conditional expectation of update.", "heading_level": null, "page_id": 15, "polygon": [[51.3984375, 282.75], [235.5, 282.75], [235.5, 292.359375], [51.3984375, 292.359375]]},
    {"title": "Step 3: Expectation over subspace basis.", "heading_level": null, "page_id": 15, "polygon": [[52.59375, 345.75], [227.25, 345.75], [227.25, 356.16796875], [52.59375, 356.16796875]]},
    {"title": "Step 4: Higher-order bias.", "heading_level": null, "page_id": 15, "polygon": [[51.697265625, 537.15234375], [167.25, 537.15234375], [167.25, 546.75], [51.697265625, 546.75]]},
    {"title": "B.5. Adaptive Sampling Analysis", "heading_level": null, "page_id": 15, "polygon": [[54.0, 639.0], [195.75, 639.0], [195.75, 648.75], [54.0, 648.75]]},
    {"title": "Proof. Step 1: Expression for the posterior mean.", "heading_level": null, "page_id": 16, "polygon": [[53.7890625, 140.37890625], [264.75, 140.37890625], [264.75, 150.43359375], [53.7890625, 150.43359375]]},
    {"title": "Step 2: Computing the conditional expectation.", "heading_level": null, "page_id": 16, "polygon": [[51.99609375, 219.75], [255.75, 219.75], [255.75, 230.87109375], [51.99609375, 230.87109375]]},
    {"title": "Step 3: Substituting into the posterior mean.", "heading_level": null, "page_id": 16, "polygon": [[53.19140625, 339.75], [243.75, 339.75], [243.75, 349.59375], [53.19140625, 349.59375]]},
    {"title": "Step 4: Simplifying the shrinkage matrix.", "heading_level": null, "page_id": 16, "polygon": [[54.0, 388.5], [232.5, 388.5], [232.5, 399.09375], [54.0, 399.09375]]},
    {"title": "B.6. Convergence Rate under Adaptive Sampling", "heading_level": null, "page_id": 16, "polygon": [[53.19140625, 591.0], [264.0, 591.0], [264.0, 601.34765625], [53.19140625, 601.34765625]]},
    {"title": "Step 1: Single-step descent.", "heading_level": null, "page_id": 17, "polygon": [[53.490234375, 252.75], [171.75, 252.75], [171.75, 263.35546875], [53.490234375, 263.35546875]]},
    {"title": "Step 2: Inner product term under adaptive sampling.", "heading_level": null, "page_id": 17, "polygon": [[52.59375, 329.25], [281.25, 329.25], [281.25, 338.765625], [52.59375, 338.765625]]},
    {"title": "Step 3: Second moment (same structure as main theorem).", "heading_level": null, "page_id": 17, "polygon": [[53.19140625, 414.75], [303.75, 414.75], [303.75, 425.00390625], [53.19140625, 425.00390625]]},
    {"title": "Step 4: Combining and bounding.", "heading_level": null, "page_id": 17, "polygon": [[54.0, 556.5], [199.5, 556.5], [199.5, 567.31640625], [54.0, 567.31640625]]},
    {"title": "Step 5: Telescoping sum.", "heading_level": null, "page_id": 17, "polygon": [[54.0, 707.25], [161.25, 707.25], [161.25, 715.81640625], [54.0, 715.81640625]]},
    {"title": "C. Experiment Details", "heading_level": null, "page_id": 18, "polygon": [[54.0, 249.75], [169.5, 249.75], [169.5, 262.58203125], [54.0, 262.58203125]]},
    {"title": "D. Raw Experimental Results", "heading_level": null, "page_id": 21, "polygon": [[52.892578125, 674.25], [208.5, 674.25], [208.5, 686.25], [52.892578125, 686.25]]}
  ],
  "page_stats": [
    {"page_id": 0, "text_extraction_method": "pdftext", "block_counts": [["Span", 333], ["Line", 141], ["Text", 8], ["SectionHeader", 5], ["Footnote", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 1, "text_extraction_method": "pdftext", "block_counts": [["Span", 439], ["Line", 178], ["Text", 9], ["ListItem", 4], ["PageHeader", 1], ["Figure", 1], ["Caption", 1], ["SectionHeader", 1], ["PageFooter", 1], ["FigureGroup", 1], ["ListGroup", 1], ["Reference", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 2, "text_extraction_method": "surya", "block_counts": [["Span", 218], ["Line", 216], ["Text", 66], ["Equation", 7], ["Reference", 4], ["SectionHeader", 3], ["PageHeader", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 3, "text_extraction_method": "surya", "block_counts": [["Line", 193], ["Span", 136], ["Text", 57], ["TableCell", 49], ["Equation", 6], ["Reference", 4], ["PageHeader", 2], ["SectionHeader", 2], ["Caption", 1], ["Table", 1], ["PageFooter", 1], ["TableGroup", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 4, "text_extraction_method": "surya", "block_counts": [["Line", 145], ["Span", 139], ["TableCell", 18], ["Text", 14], ["Reference", 4], ["Equation", 3], ["PageHeader", 2], ["Caption", 2], ["SectionHeader", 2], ["Table", 1], ["Figure", 1], ["PageFooter", 1], ["TableGroup", 1], ["FigureGroup", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 5, "text_extraction_method": "pdftext", "block_counts": [["Span", 466], ["TableCell", 170], ["Line", 121], ["Text", 16], ["PageHeader", 1], ["Caption", 1], ["Table", 1], ["SectionHeader", 1], ["PageFooter", 1], ["TableGroup", 1], ["Reference", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 6, "text_extraction_method": "pdftext", "block_counts": [["Span", 438], ["Line", 145], ["TableCell", 47], ["Text", 20], ["SectionHeader", 4], ["Reference", 2], ["PageHeader", 1], ["Caption", 1], ["Table", 1], ["PageFooter", 1], ["TableGroup", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 7, "text_extraction_method": "pdftext", "block_counts": [["Span", 407], ["Line", 145], ["TableCell", 65], ["Reference", 12], ["ListItem", 11], ["Text", 8], ["SectionHeader", 4], ["PageHeader", 2], ["Table", 2], ["Caption", 1], ["PageFooter", 1], ["TableGroup", 1], ["ListGroup", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 8, "text_extraction_method": "pdftext", "block_counts": [["Span", 316], ["Line", 137], ["ListItem", 20], ["Reference", 20], ["ListGroup", 3], ["PageHeader", 1], ["PageFooter", 1], ["Text", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 9, "text_extraction_method": "surya", "block_counts": [["Line", 158], ["Span", 146], ["Text", 45], ["SectionHeader", 4], ["Equation", 4], ["Reference", 4], ["PageHeader", 1], ["Code", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 10, "text_extraction_method": "surya", "block_counts": [["Line", 150], ["Span", 107], ["Text", 41], ["Equation", 8], ["ListItem", 6], ["SectionHeader", 4], ["ListGroup", 2], ["PageHeader", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 11, "text_extraction_method": "pdftext", "block_counts": [["Span", 748], ["Line", 158], ["Text", 14], ["Equation", 12], ["Reference", 3], ["PageHeader", 1], ["SectionHeader", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 12, "text_extraction_method": "surya", "block_counts": [["Line", 158], ["Span", 130], ["Text", 44], ["Equation", 7], ["SectionHeader", 6], ["ListItem", 3], ["PageHeader", 1], ["PageFooter", 1], ["ListGroup", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 13, "text_extraction_method": "surya", "block_counts": [["Line", 152], ["Span", 115], ["Text", 45], ["Equation", 10], ["PageHeader", 1], ["SectionHeader", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 14, "text_extraction_method": "surya", "block_counts": [["Line", 153], ["Span", 85], ["Text", 41], ["Equation", 11], ["SectionHeader", 2], ["PageHeader", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 15, "text_extraction_method": "surya", "block_counts": [["Line", 172], ["Span", 126], ["Text", 49], ["Equation", 10], ["SectionHeader", 5], ["Reference", 2], ["PageHeader", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 16, "text_extraction_method": "surya", "block_counts": [["Line", 172], ["Span", 122], ["Text", 49], ["Equation", 10], ["SectionHeader", 5], ["Reference", 2], ["PageHeader", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 17, "text_extraction_method": "surya", "block_counts": [["Line", 157], ["Span", 127], ["Text", 43], ["Equation", 6], ["ListItem", 5], ["SectionHeader", 5], ["ListGroup", 2], ["PageHeader", 1], ["PageFooter", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 18, "text_extraction_method": "surya", "block_counts": [["Line", 113], ["Span", 71], ["TableCell", 54], ["Text", 19], ["Equation", 2], ["Caption", 2], ["Table", 2], ["Reference", 2], ["PageHeader", 1], ["SectionHeader", 1], ["PageFooter", 1], ["TableGroup", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 19, "text_extraction_method": "surya", "block_counts": [["Line", 64], ["Span", 58], ["TableCell", 24], ["PageHeader", 2], ["Caption", 1], ["Table", 1], ["PageFooter", 1], ["Text", 1], ["TableGroup", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 20, "text_extraction_method": "surya", "block_counts": [["Line", 59], ["Span", 58], ["TableCell", 21], ["PageHeader", 1], ["Caption", 1], ["Table", 1], ["PageFooter", 1], ["Text", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 21, "text_extraction_method": "surya", "block_counts": [["Line", 68], ["Span", 61], ["TableCell", 24], ["Text", 5], ["PageHeader", 1], ["Caption", 1], ["Table", 1], ["SectionHeader", 1], ["PageFooter", 1], ["TableGroup", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}},
    {"page_id": 22, "text_extraction_method": "pdftext", "block_counts": [["Span", 338], ["TableCell", 308], ["Line", 101], ["Caption", 2], ["Table", 2], ["Text", 2], ["Reference", 2], ["PageHeader", 1], ["PageFooter", 1], ["TableGroup", 1]], "block_metadata": {"llm_request_count": 0, "llm_error_count": 0, "llm_tokens_used": 0, "previous_text": "", "previous_type": "", "previous_order": 0}}
  ],
  "debug_data_path": "debug_data/9506ea3e-e66f-4fdc-be2e-f42de95f2875"
}
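
For orientation, marker_meta.json pairs each heading Marker detected with its page and a 4-point bounding polygon in PDF coordinates, and page_stats records per-page block counts plus the extraction backend (pdftext vs. surya). Here is a minimal sketch of how one might inspect it with the standard json module; the relative path simply mirrors this commit's layout and assumes a local checkout:

```python
import json

# Load the parser metadata added above (adjust the path to your checkout).
path = "icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/marker_meta.json"
with open(path) as f:
    meta = json.load(f)

# Each table_of_contents entry: heading title, page, and bounding polygon.
for entry in meta["table_of_contents"]:
    x0, y0 = entry["polygon"][0]
    title = entry["title"].replace("\n", " ")[:60]
    print(f"p.{entry['page_id']:>2} ({x0:6.1f}, {y0:6.1f})  {title}")

# Per-page summary: extraction backend and selected block-type counts.
for page in meta["page_stats"]:
    counts = dict(page["block_counts"])
    print(page["page_id"], page["text_extraction_method"],
          counts.get("Equation", 0), "equations,", counts.get("Table", 0), "tables")
```
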
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/model_text_v3.txt
ADDED
@@ -0,0 +1,302 @@
[p. 1 | section: Abstract | type: Text]
Fine-tuning large language models (LLMs) with zeroth-order (ZO) optimization reduces memory by approximating gradients through function evaluations. However, existing methods essentially perform updates in a one-dimensional space, and suffer from collapse or substantial performance degradation under low-precision training. We introduce BSZO, an adaptive Bayesian Subspace Zeroth-Order Optimizer, which applies Kalman filtering to combine finite-difference information across multiple perturbation directions within a subspace. By treating each finite-difference measurement as a noisy observation, BSZO builds a posterior distribution over the subspace-projected gradient and updates it through Bayesian inference, with a residual-based adaptive mechanism to adapt to noise variations. Theoretical analysis shows that BSZO improves the convergence rate by a factor of k/γ compared to standard ZO methods. Experiments on RoBERTa, Mistral, and OPT models show that BSZO outperforms the baselines across various tasks, achieving up to 6.67% absolute average improvement on OPT-13B while remaining robust under fp16/bf16 precision and keeping memory usage close to inference-only baselines (1.00×–1.08× of MeZO).

[p. 1 | section: 1. Introduction | type: Text]
Large language models (LLMs) play an increasingly important role in natural language understanding and generation (Devlin et al., 2019; Brown et al., 2020; Touvron et al., 2023). However, adapting these models to downstream tasks through fine-tuning remains challenging due to their large scale. The standard approach, using first-order optimizers like Adam, requires a large amount of GPU

[p. 1 | section: 1. Introduction | type: Text]
memory. For a 13B-parameter model, this translates to over 100GB of GPU memory, roughly 10× the cost of inference alone (Malladi et al., 2023). Such requirements put full fine-tuning out of reach for most practitioners, in academia and industry alike.

[p. 1 | section: 1. Introduction | type: Text]
Several strategies have been proposed to reduce the memory burden. Parameter-efficient fine-tuning (PEFT) methods, including LoRA (Hu et al., 2022) and Adapters (Houlsby et al., 2019), freeze the base model and update only a small set of additional parameters. But these methods still rely on backpropagation and may underperform full fine-tuning on difficult tasks. An alternative direction is zeroth-order (ZO) optimization, which estimates gradients using only forward passes. MeZO (Malladi et al., 2023) demonstrated that this approach can match the memory footprint of inference while achieving reasonable accuracy. The catch: ZO methods converge slowly and require significantly more iterations than their first-order counterparts, due to the high variance inherent in finite-difference gradient estimates.
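
As background for how such forward-only methods work, here is a minimal sketch of the classic two-point (SPSA-style) estimator that MeZO-type optimizers build on; this is illustrative rather than the paper's implementation, and `loss_fn`, `eps`, and `lr` are placeholder names:

```python
import numpy as np

def zo_step(loss_fn, theta, eps=1e-3, lr=1e-2, rng=np.random.default_rng(0)):
    """One SPSA-style zeroth-order update: two forward passes, no backprop.

    The scalar (L(theta + eps*z) - L(theta - eps*z)) / (2*eps) estimates the
    directional derivative z^T grad, so scaling z by it yields the
    single-random-direction update that the paper critiques.
    """
    z = rng.standard_normal(theta.shape)   # random perturbation direction
    g_proj = (loss_fn(theta + eps * z) - loss_fn(theta - eps * z)) / (2 * eps)
    return theta - lr * g_proj * z

# Toy usage: minimize a quadratic using forward evaluations only.
theta = np.zeros(5)
quadratic = lambda t: float(np.sum((t - 1.0) ** 2))
for _ in range(500):
    theta = zo_step(quadratic, theta)
```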
|
| 12 |
+
|
| 13 |
+
[p. 1 | section: 1. Introduction | type: Text]
|
| 14 |
+
This raises a question: how can we achieve a better tradeoff between convergence speed and memory usage? We observe that existing ZO methods have three main weaknesses. First, most existing ZO optimizers essentially perform updates along a single random direction within each batch. Even with increased forward passes and perturbation directions, they process each perturbation in isolation, simply averaging them or using them independently, discarding information about how these measurements relate to each other. Second, the noise level in ZO estimates varies significantly during training, yet most methods do not account for this effect. This rigidity leads to poor adaptation: updates may oscillate wildly around local minima, jump out of the basin, and ultimately cause training collapse. Third, reduced-precision training (fp16/bf16) can cause these methods to collapse or suffer substantial performance degradation, as we show in Figure 1 and Table 3.
|
| 15 |
+
|
| 16 |
+
[p. 1 | section: 1. Introduction | type: Text]
|
| 17 |
+
We propose Bayesian Subspace Zeroth-Order Optimization (BSZO) to address these limitations. The main idea is to treat gradient estimation as an inference problem. At each step, we sample k random directions to form a low-dimensional subspace (Zhang, 2025) and model the projected gradient as a latent variable. Instead of treating
|
| 18 |
+
|
| 19 |
+
[p. 2 | section: 1. Introduction | type: FigureGroup]
|
| 20 |
+
Figure 1. Training loss on SST-2 with OPT-13B under bf16 precision. (a)–(b) Existing ZO methods exhibit erratic loss curves across different learning rates, with some runs failing to converge or even diverging. BSZO achieves smooth and steady convergence. (c) Comparison under each method's best-tuned learning rate.
|
| 21 |
+
|
| 22 |
+
[p. 2 | section: 1. Introduction | type: Text]
|
| 23 |
+
each finite-difference query as providing an independent estimate, we use Kalman filtering to aggregate observations—essentially asking: given what we have measured so far, what is our best guess of the true gradient? This Bayesian formulation accounts for measurement noise and produces more accurate estimates from the same number of forward passes. We further introduce an adaptive mechanism that tracks prediction residuals and adjusts the noise variance on the fly, allowing the algorithm to respond to changing curvature conditions during training.
|
| 24 |
+
|
| 25 |
+
[p. 2 | section: 1. Introduction | type: Text]
|
| 26 |
+
Our contributions can be summarized as follows:
|
| 27 |
+
|
| 28 |
+
[p. 2 | section: 1. Introduction | type: ListGroup]
|
| 29 |
+
1. We propose BSZO, a zeroth-order optimizer that uses Bayesian inference to aggregate gradient information across multiple perturbation directions within a subspace. To our knowledge, this is the first application of Bayesian inference and Kalman filtering to ZO optimization for LLMs.
2. We design a residual-based adaptive scheme that enables BSZO to adjust the parameter update scale without manual tuning.
3. We analyze the convergence of BSZO and show that the rate improves by a factor of k/γ compared to standard ZO methods.
4. Experiments on multiple LLMs and benchmarks show that BSZO achieves strong performance across diverse tasks while remaining robust under low-precision training and maintaining memory consumption comparable to MeZO.
|
| 30 |
+
|
| 31 |
+
[p. 2 | section: 2. Related Work | type: Text]
|
| 32 |
+
Zeroth-Order Optimization for LLMs. Classical derivative-free methods achieve strong sample efficiency via surrogate modeling, but their per-iteration cost grows
|
| 33 |
+
|
| 34 |
+
[p. 2 | section: 2. Related Work | type: Text]
|
| 35 |
+
rapidly with dimension, making them impractical at LLM scale (Zhang, 2025) . The SPSA estimator (Spall, 1992) offers a scalable alternative by approximating gradients through random perturbations. Building on this, MeZO (Malladi et al., 2023) introduced memory-efficient ZO fine-tuning for LLMs, matching inference-time memory by regenerating perturbations from random seeds. Follow-up methods target different bottlenecks: Sparse-MeZO (Liu et al., 2024) restricts updates to influential parameters, HiZOO (Zhao et al., 2025) leverages diagonal Hessian estimates for adaptive preconditioning, LOZO (Chen et al., 2024) exploits low-rank gradient structure, and TeZO (Sun et al., 2025) captures temporal correlations across iterations. Despite these advances, most methods adhere to the "one batch, one update" paradigm, overlooking the possibility that multiple function evaluations within a batch could support multiple parameter updates. Moreover, some of these methods incur substantial memory overhead; while still lower than full fine-tuning, this conflicts with the original motivation of ZO optimization—minimizing memory consumption. Since low-precision fine-tuning is essential in memory-constrained scenarios, the robustness of these methods also warrants further evaluation.
|
| 36 |
+
|
| 37 |
+
[p. 2 | section: 2. Related Work | type: Text]
|
| 38 |
+
Population-Based Gradient Estimation. An alternative strategy evaluates multiple perturbations per iteration and aggregates them into a single update. Evolution Strategies (Salimans et al., 2017) and Augmented Random Search (Mania et al., 2018) popularized this paradigm in reinforcement learning. However, these methods typically require a large number of function evaluations per batch to obtain reliable gradient estimates. Given that each forward pass through an LLM is already computationally expensive, such sample-intensive approaches become impractical for language model fine-tuning. This raises a natural question: how can we extract more information from a limited number of function evaluations? Our work addresses this by treating finite-difference measurements as noisy linear observations
|
| 39 |
+
|
| 40 |
+
[p. 3 | section: 2. Related Work | type: Text]
|
| 41 |
+
of the underlying gradient and applying Bayesian inference to fuse information across directions.
|
| 42 |
+
|
| 43 |
+
[p. 3 | section: 2. Related Work | type: Text]
|
| 44 |
+
Bayesian Inference for Optimization. Bayesian methods provide a principled way to integrate observations with prior knowledge while quantifying uncertainty. Kalman filtering (Kalman, 1960) is the canonical example: it sequentially updates a Gaussian belief over a hidden state as new measurements arrive. Gaussian processes extend this idea to function-space modeling and underpin Bayesian optimization (Shahriari et al., 2016; Rasmussen & Williams, 2006). Our work adapts the Kalman perspective to ZO gradient estimation: we model the projected gradient as a hidden state, interpret each perturbation query as a noisy linear measurement, and update a posterior that pools information across all sampled directions within an iteration. Leveraging the flexibility of the Bayesian framework, we further design an adaptive residual mechanism that effectively fuses both historical and current-batch information. This yields improved gradient estimates without additional memory overhead.
|
| 45 |
+
|
| 58 |
+
[p. 3 | section: 3. Method | type: Text]
|
| 59 |
+
In this section, we present the Bayesian Subspace Zeroth-Order Optimization (BSZO) algorithm, which controls the step size within the subspace via a Bayesian update.
|
| 60 |
+
|
| 61 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 62 |
+
We consider the stochastic optimization problem:
|
| 63 |
+
|
| 64 |
+
[p. 3 | section: 3.1. Preliminaries | type: Equation]
|
| 65 |
+
\min_{\theta \in \mathbb{R}^n} \mathcal{L}(\theta) := \mathbb{E}_{\xi \sim \mathcal{D}}[\mathcal{L}(\theta; \xi)], \tag{1}
|
| 66 |
+
|
| 67 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 68 |
+
where \theta \in \mathbb{R}^n denotes the model parameters, \mathcal{D} is the training dataset, and \mathcal{L}(\theta;\xi) is the loss on a minibatch \xi . We denote the optimal value by \mathcal{L}^* := \min_{\theta} \mathcal{L}(\theta) .
|
| 69 |
+
|
| 70 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 71 |
+
Assumption 3.1. The function \mathcal{L} is L-smooth, i.e., there exists L > 0 such that for all \theta, \theta' \in \mathbb{R}^n ,
|
| 72 |
+
|
| 73 |
+
[p. 3 | section: 3.1. Preliminaries | type: Equation]
|
| 74 |
+
\|\nabla \mathcal{L}(\theta) - \nabla \mathcal{L}(\theta')\| \le L\|\theta - \theta'\|. \tag{2}
|
| 75 |
+
|
| 76 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 77 |
+
Equivalently,
|
| 78 |
+
|
| 79 |
+
[p. 3 | section: 3.1. Preliminaries | type: Equation]
|
| 80 |
+
\mathcal{L}(\theta) \le \mathcal{L}(\theta') + \nabla \mathcal{L}(\theta')^{\top} (\theta - \theta') + \frac{L}{2} \|\theta - \theta'\|^{2}. (3)
|
| 81 |
+
|
| 82 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 83 |
+
Assumption 3.2. The stochastic gradient \nabla \mathcal{L}(\theta, \xi) has bounded variance, i.e., there exists \sigma_g^2 \geq 0 such that:
|
| 84 |
+
|
| 85 |
+
[p. 3 | section: 3.1. Preliminaries | type: Equation]
|
| 86 |
+
\mathbb{E}_{\xi}[\|\nabla \mathcal{L}(\theta;\xi) - \nabla \mathcal{L}(\theta)\|^{2}] \le \sigma_{g}^{2}, \quad \forall \theta \in \mathbb{R}^{n}. \tag{4}
|
| 87 |
+
|
| 88 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 89 |
+
Definition 3.3. Given a set of k perturbation vectors \{z_1, z_2, \ldots, z_k\}, where each z_i \in \mathbb{R}^n is drawn from the Gaussian distribution \mathcal{N}(0, I_n), define the subspace basis matrix B = [z_1, z_2, \ldots, z_k] \in \mathbb{R}^{n \times k}.
|
| 90 |
+
|
| 91 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 92 |
+
Algorithm 1 Bayesian Subspace Zeroth-Order Optimization (BSZO)
|
| 93 |
+
|
| 94 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 95 |
+
Input: parameters \theta, learning rate \eta, perturbation scale \varepsilon, subspace dimension k, sampling steps m, prior variance \sigma_p^2, noise variance \sigma_e^2, smoothing factor \alpha, max steps T
for t = 1 to T do
    Sample k random seeds \{s_i\}_{i=1}^k
    Initialize \mu \leftarrow \mathbf{0}_k, \Sigma \leftarrow \sigma_p^2 I_k, f_0 \leftarrow \mathcal{L}(\theta), cache Y \leftarrow \{\}
    for \tau = 1 to m do
        if \tau \le k then
            d \leftarrow e_\tau
            \theta \leftarrow \theta + \varepsilon \cdot \text{RANDN}(n, s_\tau)
            y \leftarrow (\mathcal{L}(\theta) - f_0)/\varepsilon
            \theta \leftarrow \theta - \varepsilon \cdot \text{RANDN}(n, s_\tau)
            Y[\tau] \leftarrow y
        else
            j \leftarrow \arg\max_i \Sigma_{ii}    ▷ find the max-uncertainty axis
            d \leftarrow e_j, \quad y \leftarrow Y[j]    ▷ reuse cached value
        end if
        r \leftarrow (y - d^\top \mu)/\|d\|, \quad \sigma_e^2 \leftarrow (1 - \alpha)\sigma_e^2 + \alpha r^2
        K \leftarrow \Sigma d/(d^\top \Sigma d + \sigma_e^2)
        \mu \leftarrow \mu + K(y - d^\top \mu), \quad \Sigma \leftarrow \Sigma - K d^\top \Sigma
    end for
    for i = 1 to k do
        \theta \leftarrow \theta - \eta \cdot \mu_i \cdot \text{RANDN}(n, s_i)
    end for
end for
return \theta
RANDN(n, s): returns an n-dim Gaussian vector seeded by s
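To make the seed trick concrete, here is a minimal PyTorch sketch of the RANDN(n, s) primitive used above; the helper name randn_from_seed is ours, not from the paper's released code.

import torch

def randn_from_seed(n: int, seed: int, device: str = "cpu") -> torch.Tensor:
    # Regenerate the same n-dim Gaussian perturbation from an integer seed,
    # so perturbation vectors never need to be stored explicitly.
    gen = torch.Generator(device=device).manual_seed(seed)
    return torch.randn(n, generator=gen, device=device)

z1 = randn_from_seed(4, seed=123)
z2 = randn_from_seed(4, seed=123)
assert torch.equal(z1, z2)  # identical seed, identical perturbation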
|
| 96 |
+
|
| 97 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 98 |
+
Definition 3.4. The one-sided difference of \mathcal{L} along the displacement d \in \mathbb{R}^k in subspace B on minibatch \xi is defined as follows:
|
| 99 |
+
|
| 100 |
+
[p. 3 | section: 3.1. Preliminaries | type: Equation]
|
| 101 |
+
\hat{y}(\theta; \xi, d) = \frac{\mathcal{L}(\theta + \varepsilon B d; \xi) - \mathcal{L}(\theta; \xi)}{\varepsilon}, \tag{5}
|
| 102 |
+
|
| 103 |
+
[p. 3 | section: 3.1. Preliminaries | type: Text]
|
| 104 |
+
where \varepsilon > 0 is a small constant.
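A toy NumPy sketch of the one-sided difference on a synthetic quadratic loss may help fix notation; the loss and all names here are illustrative stand-ins, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)
n, k, eps = 10, 2, 1e-4
theta = rng.standard_normal(n)
B = rng.standard_normal((n, k))          # subspace basis [z_1, ..., z_k]

def loss(th):                            # stand-in for L(theta; xi)
    return 0.5 * float(th @ th)

def one_sided_diff(th, d):
    # y_hat(theta; xi, d) = (L(theta + eps * B d) - L(theta)) / eps, Eq. (5)
    return (loss(th + eps * (B @ d)) - loss(th)) / eps

d = np.array([1.0, 0.0])                 # coordinate axis e_1
y = one_sided_diff(theta, d)
# For small eps, y approximates d^T B^T grad L, i.e. z_1 . theta here (Lemma 3.5).
print(y, float(B[:, 0] @ theta))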
|
| 105 |
+
|
| 106 |
+
[p. 3 | section: 3.2. Bayesian Gradient Estimation | type: Text]
|
| 107 |
+
For \mathcal{L}(\theta+\varepsilon Bd), the subspace gradient at d = 0 can be obtained through the chain rule:
|
| 108 |
+
|
| 109 |
+
[p. 3 | section: 3.2. Bayesian Gradient Estimation | type: Equation]
|
| 110 |
+
g_d := \nabla_d \mathcal{L}(\theta + \varepsilon B d)\big|_{d = 0} = \varepsilon B^{\top} g, \tag{6}
|
| 111 |
+
|
| 112 |
+
[p. 3 | section: 3.2. Bayesian Gradient Estimation | type: Text]
|
| 113 |
+
where g := \nabla\mathcal{L} is the true gradient of \mathcal{L}. In order to keep numerical accuracy controllable, we introduce the normalized subspace gradient \tilde{g} := B^\top g = \frac{g_d}{\varepsilon} \in \mathbb{R}^k.
|
| 114 |
+
|
| 115 |
+
[p. 3 | section: 3.2. Bayesian Gradient Estimation | type: Text]
|
| 116 |
+
Lemma 3.5. For any direction d \in \mathbb{R}^k of subspace B, the expectation of the one-sided difference \hat{y}(d) satisfies:
|
| 117 |
+
|
| 118 |
+
[p. 3 | section: 3.2. Bayesian Gradient Estimation | type: Equation]
|
| 119 |
+
\mathbb{E}[\hat{y}(d)] = d^{\top} B^{\top} g + O(\varepsilon L) \approx d^{\top} \tilde{g}. \tag{7}
|
| 120 |
+
|
| 121 |
+
[p. 4 | section: 3.2. Bayesian Gradient Estimation | type: TableGroup]
|
| 122 |
+
Table 1. Test accuracy (%) on RoBERTa-large (355M). We report the mean±std over 5 runs. The top two results are highlighted in bold. BSZO-B is the baseline version of BSZO without caching optimization.

Method     | SST-2         | RTE           | CB            | WIC           | TREC          | Avg
MeZO       | 92.22 (±0.42) | 66.35 (±3.06) | 86.07 (±5.56) | 55.20 (±3.73) | 85.36 (±2.33) | 77.04
MeZO-Adam  | 92.34 (±0.50) | 63.61 (±1.41) | 81.07 (±2.71) | 52.85 (±4.19) | 78.80 (±5.76) | 73.73
HiZOO      | 91.44 (±0.45) | 59.21 (±2.46) | 76.43 (±1.96) | 53.60 (±2.93) | 63.44 (±2.61) | 68.82
LOZO       | 91.83 (±0.30) | 62.60 (±2.31) | 84.29 (±3.87) | 54.20 (±1.32) | 77.76 (±2.15) | 74.14
BSZO       | 92.66 (±0.21) | 67.80 (±1.52) | 85.71 (±1.79) | 56.05 (±1.47) | 84.16 (±0.54) | 77.28
BSZO-B     | 92.27 (±0.41) | 68.38 (±1.94) | 84.29 (±1.49) | 57.21 (±0.98) | 84.80 (±1.57) | 77.39
|
| 123 |
+
|
| 124 |
+
[p. 4 | section: 3.2. Bayesian Gradient Estimation | type: Text]
|
| 125 |
+
Based on Lemma 3.5, we can model the one-sided difference \hat{y}(d) as a linear observation of the normalized subspace gradient \tilde{g} with Gaussian noise:
|
| 126 |
+
|
| 127 |
+
[p. 4 | section: 3.2. Bayesian Gradient Estimation | type: Equation]
|
| 128 |
+
\hat{y}(d) = d^{\top} \tilde{g} + \nu, \quad \nu \sim \mathcal{N}(0, \sigma_e^2 \|d\|^2), \tag{8}
|
| 129 |
+
|
| 130 |
+
[p. 4 | section: 3.2. Bayesian Gradient Estimation | type: Text]
|
| 131 |
+
where \nu represents an aggregate noise term. The justification of the variance definition is provided in Appendix B.2. Then, we adopt a Bayesian approach by placing a Gaussian prior on \tilde{g}, i.e., \tilde{g} \sim \mathcal{N}(0, \sigma_p^2 I_k), which makes the posterior computable in closed form (Kalman, 1960).
|
| 132 |
+
|
| 133 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 134 |
+
According to the standard Bayesian linear regression theory (Rasmussen & Williams, 2006), after m perturbations and observations (d^{(1)}, \hat{y}^{(1)}), \ldots, (d^{(m)}, \hat{y}^{(m)}) , the posterior \tilde{g}|Y \sim \mathcal{N}(\mu^{(m)}, \Sigma^{(m)}) is also a Gaussian distribution, where
|
| 135 |
+
|
| 136 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Equation]
|
| 137 |
+
\Sigma^{(m)} = \left(\sigma_p^{-2} I_k + D^{\top} R^{-1} D\right)^{-1}, \quad \mu^{(m)} = \Sigma^{(m)} D^{\top} R^{-1} Y. \tag{9}
|
| 138 |
+
|
| 139 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 140 |
+
Here, D = [d^{(1)}, \dots, d^{(m)}]^{\top} \in \mathbb{R}^{m \times k} is the design matrix, Y = [\hat{y}^{(1)}, \dots, \hat{y}^{(m)}]^{\top} \in \mathbb{R}^m is the observation vector, and R = \operatorname{diag}(\sigma_e^2 \|d^{(1)}\|^2, \dots, \sigma_e^2 \|d^{(m)}\|^2) is the noise covariance matrix. When m > k or \Sigma is already full-rank, we set the new sampling direction to the principal eigenvector of the covariance matrix, i.e., d^{(j)} = v_{\max}(\Sigma^{(j-1)}) .
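For reference, here is a direct NumPy transcription of the closed-form posterior in Eq. (9) and of the principal-eigenvector rule above; the variable names mirror the text, but the functions are our own sketch under those definitions.

import numpy as np

def posterior(D, Y, sigma_p2, sigma_e2):
    # Closed-form posterior N(mu, Sigma) of the normalized subspace gradient,
    # Eq. (9). D: (m, k) design matrix; Y: (m,) one-sided differences.
    m, k = D.shape
    R_inv = np.diag(1.0 / (sigma_e2 * np.sum(D**2, axis=1)))   # R^{-1}
    Sigma = np.linalg.inv(np.eye(k) / sigma_p2 + D.T @ R_inv @ D)
    mu = Sigma @ D.T @ R_inv @ Y
    return mu, Sigma

def next_direction(Sigma):
    # For m > k: sample along the principal eigenvector of the covariance.
    eigvals, eigvecs = np.linalg.eigh(Sigma)   # ascending eigenvalues
    return eigvecs[:, -1]                      # largest-eigenvalue direction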
|
| 141 |
+
|
| 142 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 143 |
+
After obtaining the posterior mean \mu^{(m)}, we can use it as the final displacement in subspace B, which means the parameters are updated by:
|
| 144 |
+
|
| 145 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Equation]
|
| 146 |
+
\Delta \theta = -\eta B \mu^{(k)},\tag{10}
|
| 147 |
+
|
| 148 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 149 |
+
where \eta>0 is the learning rate. In this way, we can use the finite k forward passes to update the parameters k times, with \mu^{(k)} controlling the step size in the subspace. This means that, for the same batch, the parameters move along a "diagonal" direction rather than a single direction.
|
| 150 |
+
|
| 151 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 152 |
+
Corollary 3.6. Under coordinate-axis sampling, i.e., m = k and d^{(i)} = e_i (the i-th standard basis vector), the
|
| 153 |
+
|
| 154 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 155 |
+
posterior mean and covariance reduce to:
|
| 156 |
+
|
| 157 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Equation]
|
| 158 |
+
\Sigma^{(k)} = \gamma I_k, \quad \mu^{(k)} = \gamma Y, \tag{11}
|
| 159 |
+
|
| 160 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 161 |
+
where \gamma:=\frac{\sigma_p^2}{\sigma_p^2+\sigma_e^2}\in(0,1) is the shrinkage factor.
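A quick numerical check of the posterior mean in Corollary 3.6 (our own verification, not from the paper): with d^{(i)} = e_i the batch posterior of Eq. (9) decouples per axis and the mean shrinks each observation by \gamma. Note that in this parameterization the covariance evaluates to \gamma \sigma_e^2 I_k, which matches \gamma I_k when \sigma_e^2 = 1.

import numpy as np

k, sigma_p2, sigma_e2 = 3, 1.0, 0.25
gamma = sigma_p2 / (sigma_p2 + sigma_e2)

D = np.eye(k)                        # coordinate-axis sampling, m = k
Y = np.array([0.7, -1.2, 0.4])       # illustrative one-sided differences

R_inv = np.eye(k) / sigma_e2         # ||e_i|| = 1, so R = sigma_e2 * I
Sigma = np.linalg.inv(np.eye(k) / sigma_p2 + D.T @ R_inv @ D)
mu = Sigma @ D.T @ R_inv @ Y

assert np.allclose(mu, gamma * Y)    # posterior mean shrinks Y by gamma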
|
| 162 |
+
|
| 163 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 164 |
+
Corollary 3.6 simplifies the form of the posterior distribution, thereby making the analysis and update easier. Thus, we adopt coordinate-axis sampling as the default sampling strategy in BSZO (for the first k sampling directions).
|
| 165 |
+
|
| 166 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 167 |
+
Theorem 3.7. Let \Delta \theta = -\eta B \mu^{(k)}. Under Assumptions 3.1 and 3.2, we have:
|
| 168 |
+
|
| 169 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Equation]
|
| 170 |
+
\mathbb{E}[\Delta \theta] = -\eta \gamma k \cdot \nabla \mathcal{L}(\theta) + O(\varepsilon^3) \tag{12}
|
| 171 |
+
|
| 172 |
+
[p. 4 | section: 3.3. Posterior Update In Subspace | type: Text]
|
| 173 |
+
The above theorem shows that the expected update direction aligns with the negative gradient under coordinate-axis sampling. Furthermore, the analysis of the expected direction under adaptive sampling is provided in Theorem B.6 (Appendix B.5).
|
| 174 |
+
|
| 175 |
+
[p. 4 | section: 3.4. Algorithm | type: Text]
|
| 176 |
+
Clearly, the choice of \gamma is crucial. We observe that the norm of the projected gradient estimated via finite differences remains stable during the early and middle stages of optimization, but tends to grow in later stages due to numerical precision limitations, which restricts the achievable convergence accuracy. To this end, we design a residual-based mechanism that adaptively adjusts \sigma_e after the \tau -th sample:
|
| 177 |
+
|
| 178 |
+
[p. 4 | section: 3.4. Algorithm | type: Equation]
|
| 179 |
+
r_{\tau} := \frac{\hat{y}^{(\tau)} - d^{(\tau)\top} \mu^{(\tau - 1)}}{\|d^{(\tau)}\|}, \quad (\sigma_e^{(\tau)})^2 = (1 - \alpha)(\sigma_e^{(\tau - 1)})^2 + \alpha r_{\tau}^2, \tag{13}
|
| 180 |
+
|
| 181 |
+
[p. 4 | section: 3.4. Algorithm | type: Text]
|
| 182 |
+
where \alpha \in (0,1) is the smoothing factor.
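In code, Eq. (13) is a one-line exponential moving average over squared residuals; the sketch below (illustrative names, NumPy) shows the update for a single sample.

import numpy as np

def update_noise(sigma_e2: float, mu: np.ndarray, d: np.ndarray,
                 y: float, alpha: float = 0.1) -> float:
    # Normalized prediction residual of Eq. (13), then the EWMA update.
    r = (y - d @ mu) / np.linalg.norm(d)
    return (1.0 - alpha) * sigma_e2 + alpha * r**2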
|
| 183 |
+
|
| 184 |
+
[p. 4 | section: 3.4. Algorithm | type: Text]
|
| 185 |
+
Corollary 3.6 shows that under coordinate-axis sampling, the posterior covariance \Sigma degenerates into a diagonal matrix with a single distinct eigenvalue, implying that any axis-aligned direction may serve as the adaptive sampling
|
| 186 |
+
|
| 190 |
+
[p. 5 | section: 3.4. Algorithm | type: TableGroup]
|
| 191 |
+
Table 2. Memory complexity comparison of different methods.

Method      | Memory | Additional Space
MeZO-SGD    | O(n)   | O(1)
MeZO-Adam   | O(n)   | O(n)
HiZOO       | O(n)   | O(n)
LOZO        | O(n)   | O(1)
BSZO (Ours) | O(n)   | O(k^2)
|
| 192 |
+
|
| 193 |
+
[p. 5 | section: 3.4. Algorithm | type: Text]
|
| 194 |
+
direction when j>k. The residual-based adaptation breaks this degeneracy by differentiating the diagonal entries of \Sigma , thereby producing a meaningful adaptive sampling direction. However, the diagonal structure implies that the adaptive sampling direction always coincides with one of the coordinate axes, which can lead to redundant computation. To address this, we cache the (d,y) pairs from the first k samples within each batch. When j>k, we directly reuse the cached pair corresponding to the largest diagonal entry of \Sigma , eliminating the need for an additional forward pass. This extra sample leverages the updated residual to more precisely correct the step size along the direction of greatest uncertainty. In practice, we set m=k+1 by default.
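Putting the pieces together, here is an end-to-end toy sketch of the BSZO loop in NumPy: seed-regenerated perturbations, per-sample residual and Kalman updates, cached (d, y) reuse for \tau > k, and k seed-wise parameter updates. The quadratic loss and all hyperparameter values are illustrative assumptions, not the released implementation.

import numpy as np

n, k, eps, eta, alpha = 50, 2, 1e-4, 1e-2, 0.1
sigma_p2 = 1.0
sigma_e2 = 1.0

def loss(th):  # stand-in for a minibatch loss L(theta; xi)
    return 0.5 * float(th @ th)

def randn(dim, seed):
    # RANDN(dim, s): regenerate the same Gaussian vector from a seed.
    return np.random.default_rng(seed).standard_normal(dim)

def bszo_step(theta, step_id, m=k + 1):
    global sigma_e2
    seeds = [1000 * step_id + i for i in range(k)]
    mu, Sigma = np.zeros(k), sigma_p2 * np.eye(k)
    f0, cache = loss(theta), {}
    for tau in range(1, m + 1):
        if tau <= k:  # probe a new coordinate axis with one forward pass
            j = tau - 1
            z = randn(n, seeds[j])
            y = (loss(theta + eps * z) - f0) / eps
            cache[j] = y
        else:  # reuse the cached value on the most uncertain axis
            j = int(np.argmax(np.diag(Sigma)))
            y = cache[j]
        d = np.eye(k)[j]
        r = y - d @ mu  # residual; ||e_j|| = 1
        sigma_e2 = (1 - alpha) * sigma_e2 + alpha * r**2  # Eq. (13)
        K = Sigma @ d / (d @ Sigma @ d + sigma_e2)  # Kalman gain
        mu = mu + K * (y - d @ mu)
        Sigma = Sigma - np.outer(K, d) @ Sigma
    for i in range(k):  # k seed-wise parameter updates, Eq. (10)
        theta = theta - eta * mu[i] * randn(n, seeds[i])
    return theta

theta = np.random.default_rng(0).standard_normal(n)
print(loss(theta))  # initial loss
for t in range(200):
    theta = bszo_step(theta, t)
print(loss(theta))  # the loss typically decreases substantially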
|
| 195 |
+
|
| 196 |
+
[p. 5 | section: 3.4. Algorithm | type: Text]
|
| 197 |
+
The main procedure of BSZO is summarized in Algorithm 1. Following MeZO (Malladi et al., 2023), we store perturbation vectors via random seeds rather than explicitly, requiring only O(k^2) additional space. A basic version without caching is provided in Algorithm 2 (Appendix A), which supports arbitrary initial sampling directions and additional adaptive sampling steps. In this version, the adaptive sampling performs extra forward passes to obtain new function values. Typically, the result of this forward pass coincides with the cached value. However, under reduced precision (fp16 or bf16), certain GPU operations use nondeterministic algorithms (PyTorch Team, 2024), causing function evaluations to differ across calls even with identical inputs and random seeds. Moreover, due to numerical errors, parameters do not fully recover after perturbation and restoration. As a result, the extra forward pass in the basic version yields a value different from the cached one, better reflecting the local landscape at the perturbed point and leading to improved performance (as confirmed in Section 5.3). To examine this effect, we include the coordinate-axis sampling variant of Algorithm 2 as an experimental baseline (denoted as BSZO-B). Table 2 compares the memory complexity of different methods, showing that BSZO is also memory-efficient. We analyze the convergence properties of BSZO in the next section.
|
| 198 |
+
|
| 199 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Text]
|
| 200 |
+
Definition 4.1. Let \Sigma = \operatorname{Cov}(\zeta) be the covariance matrix of the gradient noise. The effective noise \sigma_e^2 can be decomposed
|
| 201 |
+
|
| 202 |
+
[p. 5 | section: 4. Theoretical Analysis | type: FigureGroup]
|
| 203 |
+
Figure 2. GPU memory usage comparison across different models. BSZO maintains memory consumption comparable to MeZO, while MeZO-Adam and HiZOO require significantly more memory due to storing optimizer states or Hessian estimates.
|
| 204 |
+
|
| 205 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Equation]
|
| 206 |
+
as: \sigma_e^2 = \sigma_{\varepsilon}^2 + \operatorname{tr}(\Sigma), \tag{14}
|
| 207 |
+
|
| 208 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Text]
|
| 209 |
+
where \operatorname{tr}(\Sigma) \leq \sigma_g^2 (Assumption 3.2). The justification for this definition is provided by Lemma B.5 in Appendix B.2. For analytical tractability, we assume that \sigma_e is fixed (taking the worst-case noise across batches gives the same result). The convergence of BSZO is characterized by the following theorem:
|
| 210 |
+
|
| 211 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Text]
|
| 212 |
+
Theorem 4.2. Under Assumptions 3.1 and 3.2, let \tilde{n} = n + k + 1 be the effective dimension. Suppose m = k and \eta < \frac{2}{L\gamma\tilde{n}}. Then, after T iterations, the following inequality holds:
|
| 213 |
+
|
| 214 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Equation]
|
| 215 |
+
\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{\mathcal{L}(\theta_0) - \mathcal{L}^*}{\beta(\eta)\eta\gamma kT} + \frac{L\eta\gamma(\tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2)}{2\beta(\eta)}, \tag{15}
|
| 216 |
+
|
| 217 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Text]
|
| 218 |
+
where \beta(\eta) := 1 - \frac{L\eta\gamma\tilde{n}}{2} and \sigma_e^2 = \sigma_\varepsilon^2 + tr(\Sigma) .
|
| 219 |
+
|
| 220 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Text]
|
| 221 |
+
Corollary 4.3. Let \eta = \frac{1}{L\gamma\tilde{n}}, so that \beta = 1/2, which simplifies Theorem 4.2 to:
|
| 222 |
+
|
| 223 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Equation]
|
| 224 |
+
\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{2L\gamma \tilde{n}\Delta_0}{kT} + \operatorname{tr}(\Sigma) + \frac{n}{\tilde{n}}\sigma_{\varepsilon}^2, \tag{16}
|
| 225 |
+
|
| 226 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Text]
|
| 227 |
+
where \Delta_0 := \mathcal{L}(\theta_0) - \mathcal{L}^* .
|
| 228 |
+
|
| 229 |
+
[p. 5 | section: 4. Theoretical Analysis | type: Text]
|
| 230 |
+
According to Corollary 4.3, the convergence rate of BSZO is improved by a factor of the subspace dimension k. Although \gamma slightly reduces the convergence rate, it is crucial for training stability. We also analyze the convergence under adaptive sampling in Theorem B.7 (Appendix B.6).
|
| 231 |
+
|
| 232 |
+
[p. 5 | section: 5. Experiments | type: Text]
|
| 233 |
+
In this section, we evaluate the performance of BSZO and BSZO-B (Section 3.4) on various fine-tuning tasks with different
|
| 234 |
+
|
| 235 |
+
[p. 6 | section: 5. Experiments | type: TableGroup]
|
| 236 |
+
Table 3. Experiments on three different models (OPT-1.3B, Mistral-7B, OPT-13B). We show the test accuracy (%) of MeZO, MeZO-Adam, HiZOO, LOZO, BSZO, and BSZO-B, with the top two results highlighted in bold. BSZO-B is the baseline version of BSZO. Since fp16 can cause training crashes with Adam, we did not record the results of MeZO-Adam for Mistral-7B. * indicates training collapse due to numerical overflow under fp16 precision. (Task types: SST-2 = sentiment; RTE = NLI; COPA, WIC, WSC = reasoning; TREC = topic.)

Model      | Method    | SST-2 | RTE   | COPA | WIC   | WSC   | TREC | Avg
OPT-1.3B   | MeZO      | 91.74 | 64.98 | 76.0 | 58.78 | 59.62 | 80.6 | 71.95
OPT-1.3B   | MeZO-Adam | 93.35 | 60.29 | 75.0 | 56.58 | 62.50 | 79.4 | 71.19
OPT-1.3B   | HiZOO     | 91.51 | 62.09 | 77.0 | 56.58 | 63.46 | 66.2 | 69.48
OPT-1.3B   | LOZO      | 92.66 | 63.18 | 75.0 | 56.58 | 57.69 | 75.8 | 70.15
OPT-1.3B   | BSZO      | 92.43 | 66.79 | 79.0 | 59.88 | 64.42 | 87.0 | 74.92
OPT-1.3B   | BSZO-B    | 93.01 | 64.98 | 81.0 | 59.09 | 61.54 | 87.4 | 74.50
Mistral-7B | MeZO      | 90.94 | 64.26 | 88.0 | 56.58 | 63.46 | 88.6 | 75.31
Mistral-7B | HiZOO     | 93.01 | 63.90 | 90.0 | 55.64 | 63.46 | *    | 73.20
Mistral-7B | LOZO      | 92.43 | 61.37 | 86.0 | 57.83 | 63.46 | *    | 72.22
Mistral-7B | BSZO      | 94.50 | 75.81 | 87.0 | 60.03 | 59.62 | 90.0 | 77.83
Mistral-7B | BSZO-B    | 94.04 | 78.70 | 87.0 | 59.72 | 60.58 | 91.0 | 78.51
OPT-13B    | MeZO      | 85.89 | 62.09 | 80.0 | 54.55 | 60.58 | 59.4 | 67.09
OPT-13B    | MeZO-Adam | 79.82 | 61.73 | 81.0 | 54.39 | 57.69 | 62.2 | 66.14
OPT-13B    | HiZOO     | 72.71 | 62.46 | 80.0 | 52.35 | 46.15 | 19.8 | 55.58
OPT-13B    | LOZO      | 86.12 | 57.04 | 80.0 | 55.96 | 59.62 | 60.4 | 66.52
OPT-13B    | BSZO      | 93.23 | 69.31 | 83.0 | 56.27 | 61.54 | 79.2 | 73.76
OPT-13B    | BSZO-B    | 91.86 | 71.84 | 85.0 | 53.14 | 64.42 | 80.8 | 74.51
|
| 237 |
+
|
| 238 |
+
[p. 6 | section: 5. Experiments | type: Text]
|
| 239 |
+
language models, comparing them with several baselines: MeZO (Malladi et al., 2023), MeZO-Adam (Malladi et al., 2023), HiZOO (Zhao et al., 2025), and LOZO (Chen et al., 2024). Our experiments show that both variants achieve excellent robustness and strong accuracy across most scenarios, requiring only the GPU memory needed for forward propagation, which makes them more cost-effective than HiZOO and MeZO-Adam.
|
| 240 |
+
|
| 241 |
+
[p. 6 | section: 5.1. Experimental Setup | type: Text]
|
| 242 |
+
Language Models. The experiments in this paper center on two categories of models: masked language models (mLMs) and decoder-only large language models (LLMs). For mLMs, we adopt RoBERTa-large (355M) (Liu et al., 2019) as the backbone model. For decoder-only LLMs, we select OPT-1.3B and OPT-13B (Zhang et al., 2022), as well as Mistral-7B (Jiang et al., 2023).
|
| 243 |
+
|
| 244 |
+
[p. 6 | section: 5.1. Experimental Setup | type: Text]
|
| 245 |
+
Datasets. We fully fine-tune the above models on tasks from the GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019), and TREC (Li & Roth, 2002) benchmarks, including the Stanford Sentiment Treebank (SST-2), Boolean Questions (BoolQ) (Clark et al., 2019), Recognizing Textual Entailment (RTE) (Dagan et al., 2005), Choice of Plausible Alternatives (COPA) (Roemmele et al., 2011), Word-in-Context (WIC) (Pilehvar & Camacho-Collados, 2019), Winograd Schema Challenge (WSC) (Levesque et al., 2012), CommitmentBank (CB) (De Marneffe et al.,
|
| 246 |
+
|
| 247 |
+
[p. 6 | section: 5.1. Experimental Setup | type: Text]
|
| 248 |
+
2019), and TREC. Following HiZOO (Zhao et al., 2025), we use the first n_1 samples (up to 1000) from the training set for training and the next n_2 samples for validation. The original validation set serves as the test set. See Table 6 in Appendix C for the specific values of n_1 and n_2.
|
| 249 |
+
|
| 250 |
+
[p. 6 | section: 5.1. Experimental Setup | type: Text]
|
| 251 |
+
Hyperparameters. For BSZO and BSZO-B, we set the default subspace dimension k = 2 and the number of samples m = k + 1. This results in 3 forward passes per step for BSZO (with caching) and 4 for BSZO-B (without caching). BSZO matches HiZOO's forward pass count. As discussed in Section 3.4, we report results for both BSZO and BSZO-B across all models, with particular focus on comparing them under reduced precision (Mistral-7B in fp16 and OPT-13B in bf16) to examine the caching effect. Other methods use their default hyperparameters. Given the slower convergence of zeroth-order methods, all experiments are trained for up to 20,000 steps (Zhang et al., 2024), with early stopping applied when validation performance does not improve for 8 evaluations (4,000 steps). For every experiment, we set the perturbation scale to \varepsilon = 10^{-4} and the batch size to 16. Hyperparameters are tuned via grid search. We select the best configuration based on validation performance and report its test accuracy. Due to memory constraints, we load OPT-13B in bf16 precision and Mistral-7B in fp16 precision, while other models use fp32. All experiments are conducted on a single H200 GPU. More details are provided in Appendix C.
|
| 252 |
+
|
| 253 |
+
[p. 7 | section: 5.2. Performance in Masked Language Models | type: Text]
|
| 254 |
+
BSZO achieves stable and competitive performance on mLMs. As shown in Table 1, BSZO-B reaches 77.39% average accuracy on RoBERTa-large, surpassing MeZO (77.04%, +0.35%), MeZO-Adam (73.73%, +3.66%), HiZOO (68.82%, +8.57%), and LOZO (74.14%, +3.25%). BSZO achieves 77.28% average accuracy, second only to BSZO-B. BSZO secures the top result on SST-2 (92.66%), while BSZO-B excels on RTE (68.38%) and WIC (57.21%). Moreover, BSZO exhibits notably lower variance across tasks (see Table 11 for raw results). Both variants demonstrate strong and consistent performance across all tasks.
|
| 255 |
+
|
| 256 |
+
[p. 7 | section: 5.3. Performance in decoder-only models | type: Text]
|
| 257 |
+
BSZO performs well on larger LLMs. Table 3 shows that BSZO outperforms baselines on decoder-only models, with gains increasing as model size grows. BSZO-B typically maintains a small lead over BSZO.
|
| 258 |
+
|
| 259 |
+
[p. 7 | section: 5.3. Performance in decoder-only models | type: Text]
|
| 260 |
+
OPT-1.3B. BSZO achieves 74.92% average accuracy, the highest among all methods, beating MeZO (71.95%, +2.97%), MeZO-Adam (71.19%, +3.73%), HiZOO (69.48%, +5.44%), and LOZO (70.15%, +4.77%). BSZO-B reaches 74.50% average accuracy. BSZO secures top results on RTE (66.79%), WIC (59.88%), and WSC (64.42%), while BSZO-B excels on COPA (81.0%) and TREC (87.4%). Both variants perform well across most tasks.
|
| 261 |
+
|
| 262 |
+
[p. 7 | section: 5.3. Performance in decoder-only models | type: Text]
|
| 263 |
+
Mistral-7B (fp16). BSZO reaches 77.83% on average, ahead of MeZO (75.31%, +2.52%), HiZOO (73.20%, +4.63%), and LOZO (72.22%, +5.61%). It also achieves the best results on SST-2 (94.50%, +1.49% vs HiZOO) and WIC (60.03%, +3.45% vs MeZO). BSZO-B reaches 78.51% on average, excelling on RTE (78.70%) and TREC (91.0%). The small 0.68% gap shows that the two variants perform very similarly.
|
| 264 |
+
|
| 265 |
+
[p. 7 | section: 5.3. Performance in decoder-only models | type: Text]
|
| 266 |
+
OPT-13B (bf16). The gains grow larger here. BSZO reaches 73.76% on average, up 6.67% over MeZO (67.09%), 7.62% over MeZO-Adam (66.14%), and 7.24% over LOZO (66.52%). BSZO achieves strong results across tasks, including top performance on WIC (56.27%), with particularly notable gains on SST-2 (93.23%, +7.34% vs MeZO) and TREC (79.2%, +19.8% vs MeZO). BSZO-B reaches 74.51% on average (+7.42% vs MeZO), with stronger balance across tasks. BSZO-B maintains a slight edge with one additional forward pass, though the gap in average accuracy remains very small (0.75%).
|
| 267 |
+
|
| 268 |
+
[p. 7 | section: 5.3. Performance in decoder-only models | type: Text]
|
| 269 |
+
Robustness. Reduced precision exposes fragility in several baselines (Table 3 and Figure 1). HiZOO and LOZO are particularly affected: on Mistral-7B (fp16), both methods suffer from TREC training overflow (*). On OPT-13B (bf16), all baseline methods show varying degrees of performance degradation compared to OPT-1.3B, with HiZOO being especially severe: its average accuracy drops from 69.48% to 55.58%, with TREC collapsing to 19.8% and
|
| 270 |
+
|
| 271 |
+
[p. 7 | section: 5.3. Performance in decoder-only models | type: TableGroup]
|
| 272 |
+
Table 4. Memory usage (GB) and per-step time (ms) across different models.

           | OPT-1.3B     | Mistral-7B   | OPT-13B
Method     | Mem  | Time  | Mem  | Time  | Mem  | Time
MeZO       | 9.1  | 109.7 | 18.3 | 283.9 | 30.0 | 464.2
MeZO-Adam  | 19.7 | 135.1 | 47.6 | 373.1 | 82.1 | 614.5
HiZOO      | 15.7 | 188.0 | 34.3 | 540.2 | 58.9 | 877.1
LOZO       | 9.3  | 102.0 | 18.3 | 274.2 | 30.0 | 452.0
BSZO       | 9.8  | 97.0  | 18.8 | 275.7 | 30.1 | 440.5
|
| 273 |
+
|
| 274 |
+
[p. 7 | section: 5.3. Performance in decoder-only models | type: Text]
|
| 275 |
+
WSC to 46.15%. We suspect H200's default TF32 mode introduces errors in Hessian-based estimates. In contrast, BSZO and BSZO-B remain stable throughout all precision settings, with BSZO-B even maintaining performance (from 74.50% to 74.51%).
|
| 276 |
+
|
| 277 |
+
[p. 7 | section: 5.4. Memory and Time Efficiency | type: Text]
|
| 278 |
+
BSZO keeps memory usage low. As shown in Figure 2 and Table 4, BSZO's memory footprint stays close to MeZO across three model scales, ranging from 1.00× to 1.08× of MeZO's usage. In contrast, HiZOO and MeZO-Adam need 1.73×–1.96× and 2.16×–2.74× as much memory because they store additional optimizer states (momentum, Hessian estimates). BSZO avoids this overhead by using only O(k^2) extra space for the posterior covariance and adaptive noise estimation.
|
| 279 |
+
|
| 280 |
+
[p. 7 | section: 5.4. Memory and Time Efficiency | type: Text]
|
| 281 |
+
BSZO runs fast. Table 4 also reports per-step time. BSZO and LOZO are the fastest, both around 100 ms per step on OPT-1.3B. HiZOO is roughly 2× slower due to Hessian estimation, and MeZO-Adam incurs extra cost from momentum updates.
|
| 282 |
+
|
| 283 |
+
[p. 7 | section: 5.5. Ablation Study | type: Text]
|
| 284 |
+
Table 5 shows ablation results on OPT-1.3B for two design choices of BSZO: subspace dimension k and sample count m. In Table 5(a), when m = k, RTE accuracy climbs from 60.29% (k = 1) to 67.51% (k = 4), while SST-2 peaks at k = 8 (93.23%), suggesting that increasing k generally improves performance. In Table 5(b), with extra refinement (m = k + 1), RTE performance improves consistently. Compared to Table 5(a), m = k + 1 boosts RTE by 1–2% at most k levels (e.g., from 64.26% to 66.79% at k = 2, and from 66.07% to 68.59% at k = 8). This confirms that the adaptive sampling step refines the posterior estimate (see Table 12 for more details). Table 5(c) investigates adaptive noise under bf16 precision on OPT-1.3B. As k grows, the gap between w/ and w/o adaptive noise becomes more pronounced: at k = 8, the adaptive variant leads by 8.67% on RTE, indicating that adaptive noise yields substantial gains in low-precision settings. Table 5(d) validates this on
|
| 285 |
+
|
| 286 |
+
[p. 8 | section: 5.5. Ablation Study | type: TableGroup]
|
| 287 |
+
Table 5. Ablation studies. (a) Effect of subspace dimension k with m = k on OPT-1.3B. (b) Effect of m = k + 1 on OPT-1.3B. (c) Effect of adaptive noise on OPT-1.3B (bf16). (d) Effect of adaptive noise on OPT-13B (bf16). Best results in bold. Full results in fp32 are in Table 12.

(a) Effect of k       (b) Effect of m       (c) Adaptive noise (RTE)
k | SST-2 | RTE       k | SST-2 | RTE       k | w/    | w/o
1 | 92.32 | 60.29     1 | 91.74 | 61.37     1 | 54.15 | 55.24
2 | 92.78 | 64.26     2 | 92.43 | 66.79     2 | 57.76 | 56.32
4 | 92.66 | 67.51     4 | 93.58 | 66.43     4 | 61.73 | 56.32
8 | 93.23 | 66.07     8 | 93.23 | 68.59     8 | 66.43 | 57.76
|
| 288 |
+
|
| 289 |
+
[p. 8 | section: 5.5. Ablation Study | type: Table]
|
| 290 |
+
(d) Adaptive noise on OPT-13B (bf16)
Method       | SST-2 | RTE   | WSC   | COPA  | TREC  | WIC
w/ adaptive  | 93.23 | 69.31 | 61.54 | 83.00 | 79.20 | 56.27
w/o adaptive | 91.97 | 63.18 | 58.65 | 85.00 | 75.00 | 54.70
|
| 291 |
+
|
| 292 |
+
[p. 8 | section: 5.5. Ablation Study | type: Text]
|
| 293 |
+
OPT-13B (bf16), where adaptive noise brings improvements on 5 out of 6 tasks, with RTE gaining 6.13%.
|
| 294 |
+
|
| 295 |
+
[p. 8 | section: 6. Conclusion | type: Text]
|
| 296 |
+
In this work, we introduce BSZO, the first zeroth-order optimizer that applies Kalman filtering to aggregate gradient information across multiple perturbation directions for LLM fine-tuning. By treating finite-difference measurements as noisy observations of the true gradient, BSZO builds a posterior distribution over the projected gradient and refines it through Bayesian updates. We design a residual-based mechanism that adjusts the perturbation scale adaptively without manual tuning. Our theoretical analysis shows that BSZO improves the convergence rate by a factor of k/γ over standard ZO methods. Experiments on RoBERTa, Mistral, and OPT show that BSZO achieves strong accuracy across various tasks, remains stable under fp16/bf16 precision where existing methods often collapse, and keeps memory usage close to inference-only baselines.
|
| 297 |
+
|
| 298 |
+
[p. 8 | section: Software and Data | type: Text]
|
| 299 |
+
Our implementation is available at com/AeonianQuill/BSZO . Datasets used in this work (GLUE, SuperGLUE, TREC) are publicly accessible and should be downloaded separately. Pre-trained models can also be obtained from Hugging Face.
|
| 300 |
+
|
| 301 |
+
[p. 8 | section: Impact Statement | type: Text]
|
| 302 |
+
This paper presents work whose goal is to advance the field of machine learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/paper.blocks.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/paper.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/parse_report.json
ADDED
|
@@ -0,0 +1,76 @@
| 1 |
+
{
|
| 2 |
+
"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875",
|
| 3 |
+
"pipeline": "marker_non_llm_v3",
|
| 4 |
+
"parser": "marker_single",
|
| 5 |
+
"formats": [
|
| 6 |
+
"markdown",
|
| 7 |
+
"chunks"
|
| 8 |
+
],
|
| 9 |
+
"llm_enabled": false,
|
| 10 |
+
"pdf_path": "/network/scratch/j/jianan.zhao/ReviewAgent/data/processed_papers/icml26_20260429_1952_duequeue/raw/9506ea3e-e66f-4fdc-be2e-f42de95f2875.pdf",
|
| 11 |
+
"pdf_sha256": "bae05633cd7b3542933c5ea974960e2abfe2f4b8b623f08a7f2b0ca9d45c993a",
|
| 12 |
+
"bytes": 523247,
|
| 13 |
+
"source": "https://koala.science/storage/pdfs/9506ea3e-e66f-4fdc-be2e-f42de95f2875.pdf",
|
| 14 |
+
"page_count": 23,
|
| 15 |
+
"ok": true,
|
| 16 |
+
"elapsed_seconds": 187.61,
|
| 17 |
+
"paper2markdown_v3": {
|
| 18 |
+
"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875",
|
| 19 |
+
"pipeline": "Paper2Markdown-V3",
|
| 20 |
+
"ok": true,
|
| 21 |
+
"page_count": 23,
|
| 22 |
+
"chunk_count": 313,
|
| 23 |
+
"main_body_chunk_count": 101,
|
| 24 |
+
"appendix_chunk_count": 208,
|
| 25 |
+
"reference_chunk_count": 4,
|
| 26 |
+
"model_text_chars": 38870,
|
| 27 |
+
"raw_markdown_chars": 136249,
|
| 28 |
+
"sanitized_chars": 79443,
|
| 29 |
+
"page_provenance": {
|
| 30 |
+
"min_page": 1,
|
| 31 |
+
"max_page": 23,
|
| 32 |
+
"invalid_count": 0
|
| 33 |
+
},
|
| 34 |
+
"marker_block_type_counts": {
|
| 35 |
+
"Caption": 2,
|
| 36 |
+
"Code": 1,
|
| 37 |
+
"Equation": 96,
|
| 38 |
+
"FigureGroup": 2,
|
| 39 |
+
"Footnote": 1,
|
| 40 |
+
"ListGroup": 10,
|
| 41 |
+
"PageFooter": 23,
|
| 42 |
+
"PageHeader": 26,
|
| 43 |
+
"SectionHeader": 57,
|
| 44 |
+
"Table": 4,
|
| 45 |
+
"TableGroup": 9,
|
| 46 |
+
"Text": 598
|
| 47 |
+
},
|
| 48 |
+
"asset_count_raw": 2,
|
| 49 |
+
"asset_count_model_kept": 2,
|
| 50 |
+
"asset_count_rejected": 0,
|
| 51 |
+
"asset_reject_reasons": {
|
| 52 |
+
"kept": 2
|
| 53 |
+
},
|
| 54 |
+
"artifact_leak_audit": {
|
| 55 |
+
"ok": true,
|
| 56 |
+
"hits": {
|
| 57 |
+
"Anonymous Authors": [],
|
| 58 |
+
"ACKNOWLEDGMENT": [],
|
| 59 |
+
"OpenReview": [],
|
| 60 |
+
"\"accept_label\"": [],
|
| 61 |
+
"\"decision\"": [],
|
| 62 |
+
"\"decision_tier\"": [],
|
| 63 |
+
"\"source_status\"": [],
|
| 64 |
+
"Meta-review": [],
|
| 65 |
+
"Official Review": [],
|
| 66 |
+
"official_reviews": [],
|
| 67 |
+
"meta_reviews": [],
|
| 68 |
+
"suggested_verdict_score": []
|
| 69 |
+
},
|
| 70 |
+
"artifact_count": 2
|
| 71 |
+
},
|
| 72 |
+
"default_model_input": "model_text_v3.txt",
|
| 73 |
+
"appendix_input": "appendix_text_v3.txt",
|
| 74 |
+
"reference_input": "reference_text_v3.txt"
|
| 75 |
+
}
|
| 76 |
+
}
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/reference_chunks.jsonl
ADDED
|
@@ -0,0 +1,4 @@
| 1 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0101", "section": "References", "page_start": 8, "page_end": 8, "type": "ListGroup", "text": "Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in Neural Information Processing Systems , 33: 1877–1901, 2020. Chen, Y., Zhang, Y., Cao, L., Yuan, K., and Wen, Z. Enhancing zeroth-order fine-tuning for language models with low-rank structures. arXiv preprint arXiv:2410.07698 , 2024. Clark, C., Lee, K., Chang, M.-W., Kwiatkowski, T., Collins, M., and Toutanova, K. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of NAACL-HLT , pp. 2924–2936, 2019. Dagan, I., Glickman, O., and Magnini, B. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges Workshop , pp. 177–190. Springer, 2005. De Marneffe, M.-C., Simons, M., and Tonhauser, J. The CommitmentBank: Investigating projection in naturally occurring discourse. In Proceedings of Sinn und Bedeu tung , volume 23, pp. 107–124, 2019. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT , pp. 4171–4186, 2019. Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning , pp. 2790–2799, 2019. Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations , 2022. Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de Las Casas, D., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.-A., Stock, P., Le Scao, T., Lavril, T., Wang, T., Lacroix, T., and El Sayed, W. Mistral 7b. arXiv preprint arXiv:2310.06825 , 2023. Kalman, R. E. A new approach to linear filtering and prediction problems. Journal of Basic Engineering , 82(1): 35–45, 1960. Levesque, H. J., Davis, E., and Morgenstern, L. The Winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning , 2012.", "source": "marker_v2", "marker_block_id": "/page/7/ListGroup/582"}
|
| 2 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0102", "section": "References", "page_start": 9, "page_end": 9, "type": "ListGroup", "text": "440 441 442 Li, X. and Roth, D. Learning question classifiers. In COL- ING 2002: The 19th International Conference on Com putational Linguistics , 2002. 443 444 445 446 447 Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 , 2019.", "source": "marker_v2", "marker_block_id": "/page/8/ListGroup/476"}
|
| 3 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0103", "section": "References", "page_start": 9, "page_end": 9, "type": "ListGroup", "text": "Liu, Y., Zhu, Z., Gong, C., Cheng, M., Hsieh, C.-J., and You, Y. Sparse MeZO: Less parameters for better performance in zeroth-order LLM fine-tuning. arXiv preprint arXiv:2402.15751 , 2024. Malladi, S., Gao, T., Nichani, E., Damian, A., Lee, J. D., Chen, D., and Arora, S. Fine-tuning language models with just forward passes. Advances in Neural Information Processing Systems , 36:53038–53075, 2023. Mania, H., Guy, A., and Recht, B. Simple random search of static linear policies is competitive for reinforcement learning. In Advances in Neural Information Processing Systems , volume 31, 2018. Pilehvar, M. T. and Camacho-Collados, J. WiC: the wordin-context dataset for evaluating context-sensitive meaning representations. In Proceedings of NAACL-HLT , pp. 1267–1273, 2019. PyTorch Team. Reproducibility. org/docs/stable/notes/randomness.html , 2024. Accessed: 2026-01-02. Rasmussen, C. E. and Williams, C. K. I. Gaussian Processes for Machine Learning . MIT Press, 2006. Roemmele, M., Bejan, C. A., and Gordon, A. S. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning , 2011. Salimans, T., Ho, J., Chen, X., Sidor, S., and Sutskever, I. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864 , 2017. Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., and De Freitas, N. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE , 104(1):148–175, 2016. Spall, J. C. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control , 37(3):332–341, 1992. Sun, Y., Huang, T., Ding, L., Shen, L., and Tao, D. TeZO: Empowering the low-rankness on the temporal dimension in the zeroth-order optimization for fine-tuning LLMs. arXiv preprint arXiv:2501.19057 , 2025.", "source": "marker_v2", "marker_block_id": "/page/8/ListGroup/477"}
|
| 4 |
+
{"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875", "chunk_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875:0104", "section": "References", "page_start": 9, "page_end": 9, "type": "ListGroup", "text": "Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Roziere, B., Goyal, N., Hambro, E., ` Azhar, F., et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceed ings of the 2018 EMNLP Workshop BlackboxNLP , pp. 353–355, 2018. Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems , volume 32, 2019. Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura, P. S., Sridhar, A., Wang, T., and Zettlemoyer, L. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068 , 2022. Zhang, Y., Li, P., Hong, J., Li, J., Zhang, Y., Zheng, W., Chen, P.-Y., Lee, J. D., Yin, W., Hong, M., Wang, Z., Liu, S., and Chen, T. Revisiting zeroth-order optimization for memory-efficient LLM fine-tuning: A benchmark. In International Conference on Machine Learning , 2024. Zhang, Z. Scalable derivative-free optimization algorithms with low-dimensional subspace techniques. arXiv preprint arXiv:2501.04536 , 2025. Zhao, Y., Dang, S., Ye, H., Dai, G., Qian, Y., and Tsang, I. W. Second-order fine-tuning without pain for LLMs: A Hessian informed zeroth-order optimizer. In International Conference on Learning Representations , 2025.", "source": "marker_v2", "marker_block_id": "/page/8/ListGroup/478"}
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/reference_text_v3.txt
ADDED
|
@@ -0,0 +1,11 @@
| 1 |
+
[p. 8 | section: References | type: ListGroup]
|
| 2 |
+
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020. Chen, Y., Zhang, Y., Cao, L., Yuan, K., and Wen, Z. Enhancing zeroth-order fine-tuning for language models with low-rank structures. arXiv preprint arXiv:2410.07698, 2024. Clark, C., Lee, K., Chang, M.-W., Kwiatkowski, T., Collins, M., and Toutanova, K. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of NAACL-HLT, pp. 2924–2936, 2019. Dagan, I., Glickman, O., and Magnini, B. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges Workshop, pp. 177–190. Springer, 2005. De Marneffe, M.-C., Simons, M., and Tonhauser, J. The CommitmentBank: Investigating projection in naturally occurring discourse. In Proceedings of Sinn und Bedeutung, volume 23, pp. 107–124, 2019. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT, pp. 4171–4186, 2019. Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, pp. 2790–2799, 2019. Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de Las Casas, D., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.-A., Stock, P., Le Scao, T., Lavril, T., Wang, T., Lacroix, T., and El Sayed, W. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023. Kalman, R. E. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35–45, 1960. Levesque, H. J., Davis, E., and Morgenstern, L. The Winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, 2012.
|
| 3 |
+
|
| 4 |
+
[p. 9 | section: References | type: ListGroup]
|
| 5 |
+
Li, X. and Roth, D. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics, 2002. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
|
| 6 |
+
|
| 7 |
+
[p. 9 | section: References | type: ListGroup]
|
| 8 |
+
Liu, Y., Zhu, Z., Gong, C., Cheng, M., Hsieh, C.-J., and You, Y. Sparse MeZO: Less parameters for better performance in zeroth-order LLM fine-tuning. arXiv preprint arXiv:2402.15751, 2024. Malladi, S., Gao, T., Nichani, E., Damian, A., Lee, J. D., Chen, D., and Arora, S. Fine-tuning language models with just forward passes. Advances in Neural Information Processing Systems, 36:53038–53075, 2023. Mania, H., Guy, A., and Recht, B. Simple random search of static linear policies is competitive for reinforcement learning. In Advances in Neural Information Processing Systems, volume 31, 2018. Pilehvar, M. T. and Camacho-Collados, J. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of NAACL-HLT, pp. 1267–1273, 2019. PyTorch Team. Reproducibility. org/docs/stable/notes/randomness.html, 2024. Accessed: 2026-01-02. Rasmussen, C. E. and Williams, C. K. I. Gaussian Processes for Machine Learning. MIT Press, 2006. Roemmele, M., Bejan, C. A., and Gordon, A. S. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning, 2011. Salimans, T., Ho, J., Chen, X., Sidor, S., and Sutskever, I. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017. Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., and De Freitas, N. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016. Spall, J. C. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37(3):332–341, 1992. Sun, Y., Huang, T., Ding, L., Shen, L., and Tao, D. TeZO: Empowering the low-rankness on the temporal dimension in the zeroth-order optimization for fine-tuning LLMs. arXiv preprint arXiv:2501.19057, 2025.
|
| 9 |
+
|
| 10 |
+
[p. 9 | section: References | type: ListGroup]
|
| 11 |
+
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP, pp. 353–355, 2018. Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, volume 32, 2019. Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura, P. S., Sridhar, A., Wang, T., and Zettlemoyer, L. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022. Zhang, Y., Li, P., Hong, J., Li, J., Zhang, Y., Zheng, W., Chen, P.-Y., Lee, J. D., Yin, W., Hong, M., Wang, Z., Liu, S., and Chen, T. Revisiting zeroth-order optimization for memory-efficient LLM fine-tuning: A benchmark. In International Conference on Machine Learning, 2024. Zhang, Z. Scalable derivative-free optimization algorithms with low-dimensional subspace techniques. arXiv preprint arXiv:2501.04536, 2025. Zhao, Y., Dang, S., Ye, H., Dai, G., Qian, Y., and Tsang, I. W. Second-order fine-tuning without pain for LLMs: A Hessian informed zeroth-order optimizer. In International Conference on Learning Representations, 2025.
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/sanitization_report.json
ADDED
|
@@ -0,0 +1,59 @@
|
| 1 |
+
{
|
| 2 |
+
"paper_id": "9506ea3e-e66f-4fdc-be2e-f42de95f2875",
|
| 3 |
+
"pipeline": "Paper2Markdown-V3",
|
| 4 |
+
"ok": true,
|
| 5 |
+
"page_count": 23,
|
| 6 |
+
"chunk_count": 313,
|
| 7 |
+
"main_body_chunk_count": 101,
|
| 8 |
+
"appendix_chunk_count": 208,
|
| 9 |
+
"reference_chunk_count": 4,
|
| 10 |
+
"model_text_chars": 38870,
|
| 11 |
+
"raw_markdown_chars": 136249,
|
| 12 |
+
"sanitized_chars": 79443,
|
| 13 |
+
"page_provenance": {
|
| 14 |
+
"min_page": 1,
|
| 15 |
+
"max_page": 23,
|
| 16 |
+
"invalid_count": 0
|
| 17 |
+
},
|
| 18 |
+
"marker_block_type_counts": {
|
| 19 |
+
"Caption": 2,
|
| 20 |
+
"Code": 1,
|
| 21 |
+
"Equation": 96,
|
| 22 |
+
"FigureGroup": 2,
|
| 23 |
+
"Footnote": 1,
|
| 24 |
+
"ListGroup": 10,
|
| 25 |
+
"PageFooter": 23,
|
| 26 |
+
"PageHeader": 26,
|
| 27 |
+
"SectionHeader": 57,
|
| 28 |
+
"Table": 4,
|
| 29 |
+
"TableGroup": 9,
|
| 30 |
+
"Text": 598
|
| 31 |
+
},
|
| 32 |
+
"asset_count_raw": 2,
|
| 33 |
+
"asset_count_model_kept": 2,
|
| 34 |
+
"asset_count_rejected": 0,
|
| 35 |
+
"asset_reject_reasons": {
|
| 36 |
+
"kept": 2
|
| 37 |
+
},
|
| 38 |
+
"artifact_leak_audit": {
|
| 39 |
+
"ok": true,
|
| 40 |
+
"hits": {
|
| 41 |
+
"Anonymous Authors": [],
|
| 42 |
+
"ACKNOWLEDGMENT": [],
|
| 43 |
+
"OpenReview": [],
|
| 44 |
+
"\"accept_label\"": [],
|
| 45 |
+
"\"decision\"": [],
|
| 46 |
+
"\"decision_tier\"": [],
|
| 47 |
+
"\"source_status\"": [],
|
| 48 |
+
"Meta-review": [],
|
| 49 |
+
"Official Review": [],
|
| 50 |
+
"official_reviews": [],
|
| 51 |
+
"meta_reviews": [],
|
| 52 |
+
"suggested_verdict_score": []
|
| 53 |
+
},
|
| 54 |
+
"artifact_count": 2
|
| 55 |
+
},
|
| 56 |
+
"default_model_input": "model_text_v3.txt",
|
| 57 |
+
"appendix_input": "appendix_text_v3.txt",
|
| 58 |
+
"reference_input": "reference_text_v3.txt"
|
| 59 |
+
}
|
icml26/9506ea3e-e66f-4fdc-be2e-f42de95f2875/sanitized_v3.txt
ADDED
|
@@ -0,0 +1,658 @@
|
| 1 |
+
{0}
|
| 2 |
+
# Abstract
|
| 3 |
+
Fine-tuning large language models (LLMs) with zeroth-order (ZO) optimization reduces memory by approximating gradients through function evaluations. However, existing methods essentially perform updates in a one-dimensional space, and suffer from collapse or substantial performance degradation under low-precision training. We introduce BSZO, an adaptive Bayesian Subspace Zeroth-Order Optimizer, which applies Kalman filtering to combine finite-difference information across multiple perturbation directions within a subspace. By treating each finite-difference measurement as a noisy observation, BSZO builds a posterior distribution over the subspace-projected gradient and updates it through Bayesian inference, with a residual-based adaptive mechanism to adapt to noise variations. Theoretical analysis shows that BSZO improves the convergence rate by a factor of k/γ compared to standard ZO methods. Experiments on RoBERTa, Mistral, and OPT models show that BSZO outperforms the baselines across various tasks, achieving up to 6.67% absolute average improvement on OPT-13B while remaining robust under fp16/bf16 precision and keeping memory usage close to inference-only baselines (1.00×–1.08× of MeZO).
|
| 4 |
+
# 1. Introduction
|
| 5 |
+
Large language models (LLMs) are becoming increasingly central to natural language understanding and generation [\(Devlin et al.,](#page-7-0) [2019;](#page-7-0) [Brown et al.,](#page-7-1) [2020;](#page-7-1) [Touvron et al.,](#page-8-0) [2023\)](#page-8-0). However, adapting these models to downstream tasks through fine-tuning remains challenging due to their large scale. The standard approach, using first-order optimizers like Adam, consumes a large amount of GPU
|
| 6 |
+
memory. For a 13B-parameter model, this translates to over 100GB of GPU memory, roughly 10× the cost of inference alone [\(Malladi et al.,](#page-8-1) [2023\)](#page-8-1). Such requirements put full fine-tuning out of reach for most practitioners, in academia and industry alike.
|
| 7 |
+
Several strategies have been proposed to reduce memory burden. Parameter-efficient fine-tuning (PEFT) methods, including LoRA [\(Hu et al.,](#page-7-2) [2022\)](#page-7-2) and Adapters [\(Houlsby](#page-7-3) [et al.,](#page-7-3) [2019\)](#page-7-3), freeze the base model and only update a small set of additional parameters. But these methods still rely on backpropagation and may underperform full fine-tuning on difficult tasks. An alternative direction is zeroth-order (ZO) optimization, which estimates gradients using only forward passes. MeZO [\(Malladi et al.,](#page-8-1) [2023\)](#page-8-1) demonstrated that this approach can match the memory footprint of inference, while achieving reasonable accuracy. The catch? ZO methods converge slowly and require significantly more iterations than their first-order counterparts, due to the high variance inherent in finite-difference gradient estimates.
|
| 8 |
+
This raises a question: how can we achieve a better tradeoff between convergence speed and memory usage? We observe that the existing ZO methods have three main weaknesses. First, most existing ZO optimizers essentially perform updates along a single random direction within each batch. Even with increased forward passes and perturbation directions, they process each perturbation in isolation, simply averaging or using them independently—throwing away information about how these measurements relate to each other. Second, the noise level in ZO estimates varies significantly during training, yet most methods do not account for this effect. This rigidity leads to poor adaptation: updates may oscillate wildly around local minima, jump out of the basin, and finally cause training collapse. Moreover, reduced-precision training (fp16/bf16) can cause these methods to collapse or suffer substantial performance degradation, as we show in Figure [1](#page-1-0) and Table [3.](#page-5-0)
|
| 9 |
+
We propose Bayesian Subspace Zeroth-order Optimization (BSZO) to address these limitations. The main idea is to treat gradient estimation as an inference problem. At each step, we sample k random directions to form a low-dimensional subspace [\(Zhang,](#page-8-2) [2025\)](#page-8-2) and model the projected gradient as a latent variable. Instead of treating
|
| 10 |
+
{1}------------------------------------------------
|
| 11 |
+
<span id="page-1-0"></span>
|
| 12 |
+
*Figure 1.* Training loss on SST-2 with OPT-13B under bf16 precision. (a)–(b) Existing ZO methods exhibit erratic loss curves across different learning rates, with some runs failing to converge or even diverging. BSZO achieves smooth and steady convergence. (c) Comparison under each method's best-tuned learning rate.
|
| 13 |
+
each finite-difference query as providing an independent estimate, we use Kalman filtering to aggregate observations—essentially asking: given what we have measured so far, what is our best guess of the true gradient? This Bayesian formulation accounts for measurement noise and produces more accurate estimates from the same number of forward passes. We further introduce an adaptive mechanism that tracks prediction residuals and adjusts the noise variance on the fly, allowing the algorithm to respond to changing curvature conditions during training.
|
| 14 |
+
Our contributions can be summarized as follows:
|
| 15 |
+
- 1. We propose BSZO, a zeroth-order optimizer that uses Bayesian inference to aggregate gradient information across multiple perturbation directions within a subspace. To our knowledge, this is the first application of Bayesian inference and Kalman filtering to ZO optimization for LLMs.
|
| 16 |
+
- 2. We design a residual-based adaptive scheme that enables BSZO to adjust the parameter update scale adaptively without manual tuning.
|
| 17 |
+
- 3. We analyze the convergence of BSZO and show that the rate improves by a factor of k/γ compared to standard ZO methods.
|
| 18 |
+
- 4. Experiments on multiple LLMs and benchmarks show that BSZO achieves strong performance across diverse tasks while remaining robust under low-precision training and maintaining memory consumption comparable to MeZO.
|
| 19 |
+
# 2. Related Work
|
| 20 |
+
Zeroth-Order Optimization for LLMs. Classical derivative-free methods achieve strong sample efficiency via surrogate modeling, but their per-iteration cost grows
|
| 21 |
+
rapidly with dimension, making them impractical at LLM scale [\(Zhang,](#page-8-2) [2025\)](#page-8-2). The SPSA estimator [\(Spall,](#page-8-3) [1992\)](#page-8-3) offers a scalable alternative by approximating gradients through random perturbations. Building on this, MeZO [\(Malladi et al.,](#page-8-1) [2023\)](#page-8-1) introduced memory-efficient ZO fine-tuning for LLMs, matching inference-time memory by regenerating perturbations from random seeds. Follow-up methods target different bottlenecks: Sparse-MeZO [\(Liu](#page-8-4) [et al.,](#page-8-4) [2024\)](#page-8-4) restricts updates to influential parameters, HiZOO [\(Zhao et al.,](#page-8-5) [2025\)](#page-8-5) leverages diagonal Hessian estimates for adaptive preconditioning, LOZO [\(Chen et al.,](#page-7-4) [2024\)](#page-7-4) exploits low-rank gradient structure, and TeZO [\(Sun et al.,](#page-8-6) [2025\)](#page-8-6) captures temporal correlations across iterations. Despite these advances, most methods adhere to the "one batch, one update" paradigm, overlooking the possibility that multiple function evaluations within a batch could support multiple parameter updates. Moreover, some of these methods incur substantial memory overhead; while still lower than full fine-tuning, this conflicts with the original motivation of ZO optimization—minimizing memory consumption. Since low-precision fine-tuning is essential in memory-constrained scenarios, the robustness of these methods also warrants further evaluation.
|
| 22 |
+
Population-Based Gradient Estimation. An alternative strategy evaluates multiple perturbations per iteration and aggregates them into a single update. Evolution Strategies [\(Salimans et al.,](#page-8-7) [2017\)](#page-8-7) and Augmented Random Search [\(Mania et al.,](#page-8-8) [2018\)](#page-8-8) popularized this paradigm in reinforcement learning. However, these methods typically require a large number of function evaluations per batch to obtain reliable gradient estimates. Given that each forward pass through an LLM is already computationally expensive, such sample-intensive approaches become impractical for language model fine-tuning. This raises a natural question: how can we extract more information from a limited number of function evaluations? Our work addresses this by treating finite-difference measurements as noisy linear
|
| 23 |
+
{2}------------------------------------------------
|
| 24 |
+
observations of the underlying gradient and applying Bayesian inference to fuse information across directions.
|
| 25 |
+
Bayesian Inference for Optimization. Bayesian methods provide a principled way to integrate observations with prior knowledge while quantifying uncertainty. Kalman filtering (Kalman, 1960) is the canonical example: it sequentially updates a Gaussian belief over a hidden state as new measurements arrive. Gaussian processes extend this idea to function-space modeling and underpin Bayesian optimization (Shahriari et al., 2016; Rasmussen & Williams, 2006). Our work adapts the Kalman perspective to ZO gradient estimation: we model the projected gradient as a hidden state, interpret each perturbation query as a noisy linear measurement, and update a posterior that pools information across all sampled directions within an iteration. Leveraging the flexibility of the Bayesian framework, we further design an adaptive residual mechanism that effectively fuses both historical and current-batch information. This yields improved gradient estimates without additional memory overhead.
|
| 26 |
+
#### 3. Method
|
| 31 |
+
In this section, we present the Bayesian Subspace Zeroth-order Optimization (BSZO) algorithm, which controls the subspace step size via Bayesian inference.
|
| 32 |
+
#### 3.1. Preliminaries
|
| 33 |
+
We consider the stochastic optimization problem:
|
| 34 |
+
$$\min_{\theta \in \mathbb{R}^n} \mathcal{L}(\theta) := \mathbb{E}_{\xi \sim \mathcal{D}}[\mathcal{L}(\theta; \xi)], \tag{1}$$
|
| 35 |
+
where $\theta \in \mathbb{R}^n$ denotes the model parameters, $\mathcal{D}$ is the training dataset, and $\mathcal{L}(\theta;\xi)$ is the loss on a minibatch $\xi$ . We denote the optimal value by $\mathcal{L}^* := \min_{\theta} \mathcal{L}(\theta)$ .
|
| 36 |
+
<span id="page-2-1"></span>**Assumption 3.1.** The function $\mathcal{L}$ is L-smooth, i.e., there exists L > 0 such that for all $\theta, \theta' \in \mathbb{R}^n$ ,
|
| 37 |
+
$$\|\nabla \mathcal{L}(\theta) - \nabla \mathcal{L}(\theta')\| \le L\|\theta - \theta'\|. \tag{2}$$
|
| 38 |
+
Equivalently,
|
| 39 |
+
$$\mathcal{L}(\theta) \le \mathcal{L}(\theta') + \nabla \mathcal{L}(\theta')^{\top} (\theta - \theta') + \frac{L}{2} \|\theta - \theta'\|^{2}. \tag{3}$$
|
| 41 |
+
<span id="page-2-2"></span>**Assumption 3.2.** The stochastic gradient $\nabla \mathcal{L}(\theta, \xi)$ has bounded variance, i.e., there exists $\sigma_g^2 \geq 0$ such that:
|
| 42 |
+
$$\mathbb{E}_{\xi}[\|\nabla \mathcal{L}(\theta;\xi) - \nabla \mathcal{L}(\theta)\|^{2}] \le \sigma_{g}^{2}, \quad \forall \theta \in \mathbb{R}^{n}. \tag{4}$$
|
| 44 |
+
**Definition 3.3.** Given a set of k perturbation vectors $\{z_1, z_2, \ldots, z_k\}$ , where each $z_i \in \mathbb{R}^n$ is drawn from the Gaussian distribution $\mathcal{N}(0, I_n)$ , define the subspace basis matrix $B = [z_1, z_2, \ldots, z_k] \in \mathbb{R}^{n \times k}$ .
|
| 45 |
+
<span id="page-2-3"></span>**Algorithm 1** Bayesian Subspace Zeroth-Order Optimization (BSZO)
|
| 46 |
+
**Input:** parameters $\theta$, learning rate $\eta$, perturbation scale $\varepsilon$, subspace dimension k, sampling steps m, prior variance $\sigma_p^2$, noise variance $\sigma_e^2$, smoothing factor $\alpha$, max step T
for t = 1 to T do
  Sample k random seeds $\{s_i\}_{i=1}^k$
  Initialize $\mu \leftarrow \mathbf{0}_k$, $\Sigma \leftarrow \sigma_p^2 I_k$, $f_0 \leftarrow \mathcal{L}(\theta)$, cache $Y \leftarrow \{\}$
  for $\tau = 1$ to m do
    if $\tau \leq k$ then
      $d \leftarrow e_{\tau}$
      $\theta \leftarrow \theta + \varepsilon \cdot \text{RANDN}(n, s_{\tau})$
      $y \leftarrow (\mathcal{L}(\theta) - f_0)/\varepsilon$
      $\theta \leftarrow \theta - \varepsilon \cdot \text{RANDN}(n, s_{\tau})$
      $Y[\tau] \leftarrow y$
    else
      $j \leftarrow \arg\max_i \Sigma_{ii}$ ▷ find the max-uncertainty axis
      $d \leftarrow e_j$, $y \leftarrow Y[j]$ ▷ reuse cached value
    end if
    $r \leftarrow (y - d^\top \mu)/\|d\|$, $\sigma_e^2 \leftarrow (1 - \alpha)\sigma_e^2 + \alpha r^2$
    $K \leftarrow \Sigma d / (d^{\top} \Sigma d + \sigma_e^2)$
    $\mu \leftarrow \mu + K(y - d^{\top}\mu)$, $\Sigma \leftarrow \Sigma - K d^{\top}\Sigma$
  end for
  for i = 1 to k do
    $\theta \leftarrow \theta - \eta \cdot \mu_i \cdot \text{RANDN}(n, s_i)$
  end for
end for
return $\theta$
RANDN(n, s): returns an n-dim Gaussian vector seeded by s
|
| 49 |
+
**Definition 3.4.** The one-side difference of $\mathcal{L}$ along the displacement $d \in \mathbb{R}^k$ in subspace B on minibatch $\xi$ is defined as follows:
|
| 50 |
+
$$\hat{y}(\theta; \xi, d) = \frac{\mathcal{L}(\theta + \varepsilon B d; \xi) - \mathcal{L}(\theta; \xi)}{\varepsilon}, \tag{5}$$
|
| 51 |
+
where $\varepsilon > 0$ is a small constant.
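To make the definition concrete, the following minimal sketch evaluates the one-side difference of a toy quadratic loss along a subspace displacement; the `loss` function, dimensions, and random basis are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Definition 3.4 on a toy objective (assumed, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
n, k, eps = 1000, 2, 1e-4

def loss(theta):
    return 0.5 * float(theta @ theta)   # toy L-smooth loss, gradient = theta

theta = rng.normal(size=n)
B = rng.normal(size=(n, k))             # basis of k Gaussian perturbation directions
d = np.eye(k)[0]                        # coordinate-axis displacement e_1

y_hat = (loss(theta + eps * B @ d) - loss(theta)) / eps
print(y_hat, float(d @ B.T @ theta))    # Lemma 3.5: y_hat ≈ d^T B^T grad
```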
|
| 52 |
+
### 3.2. Bayesian Gradient Estimation
|
| 53 |
+
For the $\mathcal{L}(\theta+\varepsilon Bd)$ , the subspace gradient can be obtained through the chain rule:
|
| 54 |
+
$$g_d := \nabla_d \mathcal{L}(\theta + \varepsilon B d)\big|_{d=0} = \varepsilon B^{\top} g, \tag{6}$$
|
| 56 |
+
where $g:=\nabla\mathcal{L}$ is the true gradient of $\mathcal{L}$ . In order to keep numerical accuracy controllable, we introduce the normalized subspace gradient $\tilde{g}:=B^\top g=\frac{g_d}{\varepsilon}\in\mathbb{R}^k$ .
|
| 57 |
+
<span id="page-2-0"></span>**Lemma 3.5.** For any direction $d \in \mathbb{R}^k$ of subspace B, the expectation of one-side difference $\hat{y}(d)$ satisfies:
|
| 58 |
+
$$\mathbb{E}[\hat{y}(d)] = d^{\top} B^{\top} g + O(\varepsilon L) \approx d^{\top} \tilde{g}. \tag{7}$$
|
| 59 |
+
{3}------------------------------------------------
|
| 60 |
+
<span id="page-3-2"></span>*Table 1.* Test accuracy (%) on RoBERTa-large (355M). We report the mean±std over 5 runs. The top two results are highlighted in **bold**. BSZO-B is the baseline version of BSZO without caching optimization.
|
| 61 |
+
| Method | SST-2 | RTE | CB | WIC | TREC | Avg |
|
| 62 |
+
|-----------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|-------|
|
| 63 |
+
| MEZO | $92.22 (\pm 0.42)$ | 66.35 (±3.06) | <b>86.07</b> (±5.56) | 55.20 (±3.73) | <b>85.36</b> (±2.33) | 77.04 |
|
| 64 |
+
| MEZO-ADAM | <b>92.34</b> ( $\pm 0.50$ ) | 63.61 (± <b>1.41</b> ) | $81.07 (\pm 2.71)$ | $52.85 (\pm 4.19)$ | $78.80 \ (\pm 5.76)$ | 73.73 |
|
| 65 |
+
| HıZOO | $91.44 (\pm 0.45)$ | $59.21 (\pm 2.46)$ | $76.43 \ (\pm 1.96)$ | $53.60 (\pm 2.93)$ | $63.44 (\pm 2.61)$ | 68.82 |
|
| 66 |
+
| LOZO | 91.83 ( $\pm$ <b>0.30</b> ) | $62.60 (\pm 2.31)$ | $84.29 (\pm 3.87)$ | $54.20~(\pm 1.32)$ | 77.76 ( $\pm 2.15$ ) | 74.14 |
|
| 67 |
+
| BSZO | 92.66 $(\pm 0.21)$ | 67.80 $(\pm 1.52)$ | 85.71 $(\pm 1.79)$ | <b>56.05</b> ( $\pm 1.47$ ) | 84.16 ( $\pm$ <b>0.54</b> ) | 77.28 |
|
| 68 |
+
| BSZO-B | $92.27\ (\pm0.41)$ | <b>68.38</b> ( $\pm 1.94$ ) | 84.29 ( $\pm$ <b>1.49</b> ) | <b>57.21</b> $(\pm 0.98)$ | 84.80 $(\pm 1.57)$ | 77.39 |
|
| 69 |
+
Based on Lemma 3.5, we can model the one-side difference $\hat{y}(d)$ as a linear observation of the normalized subspace gradient $\tilde{g}$ with Gaussian noise:
|
| 70 |
+
$$\hat{y}(d) = d^{\top} \tilde{g} + \nu, \quad \nu \sim \mathcal{N}(0, \sigma_e^2 \|d\|^2), \tag{8}$$
|
| 72 |
+
where $\nu$ is an aggregate noise term. The justification of the variance definition is provided in Appendix B.2. Then, we adopt a Bayesian approach by placing a Gaussian prior on $\tilde{g}$ , i.e., $\tilde{g} \sim \mathcal{N}(0, \sigma_p^2 I_k)$ , which makes the posterior computable in closed form (Kalman, 1960).
|
| 73 |
+
### 3.3. Posterior Update In Subspace
|
| 74 |
+
According to the standard Bayesian linear regression theory (Rasmussen & Williams, 2006), after m perturbations and observations $(d^{(1)}, \hat{y}^{(1)}), \ldots, (d^{(m)}, \hat{y}^{(m)})$ , the posterior $\tilde{g}|Y \sim \mathcal{N}(\mu^{(m)}, \Sigma^{(m)})$ is also a Gaussian distribution, where
|
| 75 |
+
$$\Sigma^{(m)} = \left(\sigma_p^{-2} I_k + D^{\top} R^{-1} D\right)^{-1}, \qquad \mu^{(m)} = \Sigma^{(m)} D^{\top} R^{-1} Y. \tag{9}$$
|
| 78 |
+
Here, $D = [d^{(1)}, \dots, d^{(m)}]^{\top} \in \mathbb{R}^{m \times k}$ is the design matrix, $Y = [\hat{y}^{(1)}, \dots, \hat{y}^{(m)}]^{\top} \in \mathbb{R}^m$ is the observation vector, and $R = \operatorname{diag}(\sigma_e^2 \|d^{(1)}\|^2, \dots, \sigma_e^2 \|d^{(m)}\|^2)$ is the noise covariance matrix. When m > k or $\Sigma$ is already full-rank, we set the new sampling direction to the principal eigenvector of the covariance matrix, i.e., $d^{(j)} = v_{\max}(\Sigma^{(j-1)})$ .
|
| 79 |
+
After obtaining the posterior mean $\mu^{(m)}$ , we use it as the final displacement in subspace B, so the parameters are updated by:
|
| 80 |
+
$$\Delta \theta = -\eta B \mu^{(k)},\tag{10}$$
|
| 81 |
+
where $\eta>0$ is the learning rate. In this way, k forward passes suffice to update the parameters along all k directions at once, with $\mu^{(k)}$ controlling the step size within the subspace. For the same batch, the parameters therefore move along a "diagonal" combination of directions rather than a single direction.
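For intuition, a minimal sketch of this posterior computation is shown below; it applies Eq. (9) one observation at a time as rank-one Kalman-style updates rather than forming the batch matrix inverse. All directions, observations, and variances are assumed toy values.

```python
# Sequential (Kalman-style) form of the posterior update in Eq. (9).
# Directions, observations, and variances are illustrative assumptions.
import numpy as np

def sequential_posterior(ds, ys, sigma_p2, sigma_e2):
    k = ds[0].shape[0]
    mu = np.zeros(k)                      # prior mean over g_tilde
    Sigma = sigma_p2 * np.eye(k)          # prior covariance
    for d, y in zip(ds, ys):
        noise = sigma_e2 * (d @ d)        # R_tau = sigma_e^2 * ||d||^2
        K = Sigma @ d / (d @ Sigma @ d + noise)   # Kalman gain
        mu = mu + K * (y - d @ mu)                # innovation update
        Sigma = Sigma - np.outer(K, d @ Sigma)    # covariance shrink
    return mu, Sigma

rng = np.random.default_rng(0)
k, m = 3, 4
ds = [rng.normal(size=k) for _ in range(m)]   # arbitrary sampling directions
ys = [0.5, -1.2, 2.0, 0.3]                    # finite-difference observations
mu, Sigma = sequential_posterior(ds, ys, sigma_p2=1.0, sigma_e2=0.25)
print(mu)   # used as the subspace displacement: theta <- theta - eta * B @ mu
```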
|
| 82 |
+
<span id="page-3-0"></span>**Corollary 3.6.** Under coordinate-axis sampling, i.e., m = k and $d^{(i)} = e_i$ (the *i*-th standard basis vector), then the
|
| 83 |
+
posterior mean and covariance reduce to:
|
| 84 |
+
$$\Sigma^{(k)} = \gamma I_k, \qquad \mu^{(k)} = \gamma Y, \tag{11}$$
|
| 87 |
+
where $\gamma:=\frac{\sigma_p^2}{\sigma_p^2+\sigma_e^2}\in(0,1)$ is the shrinkage factor.
|
| 88 |
+
Corollary 3.6 simplifies the form of the posterior distribution, thereby making the analysis and update easier. Thus, we adopt coordinate-axis sampling as the default sampling strategy in BSZO (for the first k sampling directions).
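A quick numeric check of Corollary 3.6 under assumed toy variances: each coordinate axis is an independent conjugate Gaussian update, so the posterior mean collapses to $\mu = \gamma Y$.

```python
# Numeric check of Corollary 3.6 with assumed toy values: under
# coordinate-axis sampling each axis is an independent conjugate
# Gaussian update, so the posterior mean is gamma * Y.
import numpy as np

sigma_p2, sigma_e2 = 1.0, 0.25
gamma = sigma_p2 / (sigma_p2 + sigma_e2)            # shrinkage factor
Y = np.array([0.3, -0.7, 1.1, 0.05])                # one observation per axis

post_var = 1.0 / (1.0 / sigma_p2 + 1.0 / sigma_e2)  # precisions add
mu = post_var * Y / sigma_e2                        # precision-weighted mean

assert np.allclose(mu, gamma * Y)
print(gamma, mu)
```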
|
| 89 |
+
<span id="page-3-3"></span>**Theorem 3.7.** Let $\Delta \theta = -\eta B \mu^{(k)}$ . Under Assumptions 3.1 and Assumption3.2, we have:
|
| 90 |
+
$$\mathbb{E}[\Delta \theta] = -\eta \gamma k \cdot \nabla \mathcal{L}(\theta) + O(\varepsilon^3) \tag{12}$$
|
| 91 |
+
The above theorem shows that the expected update direction aligns with the negative gradient under coordinate-axis sampling. Furthermore, the analysis of the expected direction under adaptive sampling is provided in Theorem B.6 (Appendix B.5).
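Theorem 3.7 can be sanity-checked by Monte Carlo on a toy gradient: in the noiseless $\varepsilon \to 0$ limit, $B\mu^{(k)} = \gamma B B^{\top} g$ averages to $\gamma k\, g$ since $\mathbb{E}[BB^{\top}] = k I_n$. The sketch below uses assumed dimensions and values.

```python
# Monte Carlo check of Theorem 3.7 (noiseless limit, toy sizes assumed):
# with coordinate-axis sampling, E[B @ (gamma * B.T @ g)] = gamma * k * g.
import numpy as np

rng = np.random.default_rng(3)
n, k, gamma, trials = 100, 4, 0.8, 20000
g = rng.normal(size=n)                  # stand-in for the true gradient

acc = np.zeros(n)
for _ in range(trials):
    B = rng.normal(size=(n, k))         # fresh subspace basis each step
    acc += B @ (gamma * (B.T @ g))      # expected update direction (Eq. 12)
rel_err = np.linalg.norm(acc / trials - gamma * k * g) / np.linalg.norm(gamma * k * g)
print(rel_err)                          # shrinks toward 0 as trials grow
```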
|
| 92 |
+
#### <span id="page-3-1"></span>3.4. Algorithm
|
| 93 |
+
Clearly, the choice of $\gamma$ is crucial. We observe that the norm of the projected gradient estimated via finite differences remains stable during the early and middle stages of optimization, but tends to grow in later stages due to numerical precision limitations, which restricts the achievable convergence accuracy. To this end, we design a residual-based mechanism that adaptively adjusts $\sigma_e$ after the $\tau$ -th sample:
|
| 94 |
+
$$r_{\tau} := \frac{\hat{y}^{(\tau)} - d^{(\tau)\top} \mu^{(\tau - 1)}}{\|d^{(\tau)}\|}, \qquad (\sigma_e^{(\tau)})^2 = (1 - \alpha)(\sigma_e^{(\tau - 1)})^2 + \alpha r_{\tau}^2, \tag{13}$$
|
| 97 |
+
where $\alpha \in (0,1)$ is the smoothing factor.
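A minimal sketch of this exponential-moving-average adaptation follows; the stream of (d, y) pairs and the value of α are assumptions for illustration.

```python
# Sketch of the residual-based noise adaptation of Eq. (13).
# The observation stream and alpha are illustrative assumptions.
import numpy as np

def update_noise(sigma_e2, mu, d, y, alpha=0.1):
    r = (y - d @ mu) / np.linalg.norm(d)        # normalized prediction residual
    return (1 - alpha) * sigma_e2 + alpha * r**2

sigma_e2, mu = 1e-2, np.zeros(2)
for d, y in [(np.array([1.0, 0.0]), 0.4), (np.array([0.0, 1.0]), -0.9)]:
    sigma_e2 = update_noise(sigma_e2, mu, d, y)
    print(sigma_e2)   # grows when residuals exceed the current noise estimate
```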
|
| 98 |
+
Corollary 3.6 shows that under coordinate-axis sampling, the posterior covariance $\Sigma$ degenerates into a diagonal matrix with a single distinct eigenvalue, implying that any axis-aligned direction may serve as the adaptive sampling
|
| 99 |
+
{4}------------------------------------------------
|
| 101 |
+
<span id="page-4-0"></span>Table 2. Memory complexity comparison of different methods.
|
| 102 |
+
| Method | Memory | Additional Space |
|
| 103 |
+
|-------------|--------|------------------|
|
| 104 |
+
| MeZO-SGD | O(n) | O(1) |
|
| 105 |
+
| MeZO-Adam | O(n) | O(n) |
|
| 106 |
+
| HiZOO | O(n) | O(n) |
|
| 107 |
+
| LOZO | O(n) | O(1) |
|
| 108 |
+
| BSZO (Ours) | O(n) | $O(k^2)$ |
|
| 109 |
+
direction when j>k. The residual-based adaptation breaks this degeneracy by differentiating the diagonal entries of $\Sigma$ , thereby producing a meaningful adaptive sampling direction. However, the diagonal structure implies that the adaptive sampling direction always coincides with one of the coordinate axes, which can lead to redundant computation. To address this, we cache the (d,y) pairs from the first k samples within each batch. When j>k, we directly reuse the cached pair corresponding to the largest diagonal entry of $\Sigma$ , eliminating the need for an additional forward pass. This extra sample leverages the updated residual to more precisely correct the step size along the direction of greatest uncertainty. In practice, we set m=k+1 by default.
|
| 110 |
+
The main procedure of BSZO is summarized in Algorithm 1. Following MeZO (Malladi et al., 2023), we store perturbation vectors via random seeds rather than explicitly, requiring only $O(k^2)$ additional space. A basic version without caching is provided in Algorithm 2 (Appendix A), which supports arbitrary initial sampling directions and additional adaptive sampling steps. In this version, the adaptive sampling performs extra forward passes to obtain new function values. Typically, the result of this forward pass coincides with the cached value. However, under reduced precision (fp16 or bf16), certain GPU operations use nondeterministic algorithms (PyTorch Team, 2024), causing function evaluations to differ across calls even with identical inputs and random seeds. Moreover, due to numerical errors, parameters do not fully recover after perturbation and restoration. As a result, the extra forward pass in the basic version yields a value different from the cached one, better reflecting the local landscape at the perturbed point and leading to improved performance (as confirmed in Section 5.3). To examine this effect, we include the coordinate-axis sampling variant of Algorithm 2 as an experimental baseline (denoted as BSZO-B). Table 2 compares the memory complexity of different methods, showing that BSZO is also memory-efficient. We analyze the convergence properties of BSZO in the next section.
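The seed trick inherited from MeZO can be sketched as follows: every perturbation vector is regenerated on demand from its seed, so only the k seeds plus the k×k posterior state persist between forward passes. This is a toy illustration with assumed names and values, not the released implementation.

```python
# Toy sketch of seed-based perturbation regeneration (MeZO-style storage).
# Names and values are assumptions; the forward pass is elided.
import torch

def randn_like_seeded(param, seed):
    g = torch.Generator(device=param.device).manual_seed(seed)
    return torch.randn(param.shape, generator=g, device=param.device)

param = torch.zeros(10)
seeds = [11, 42]                          # k = 2 sampled seeds for this step
eps, eta, mu = 1e-4, 1e-3, [0.4, -0.96]   # mu from the posterior update

# perturb along z_1, evaluate, then restore exactly by regeneration
param += eps * randn_like_seeded(param, seeds[0])
# ... forward pass to get L(theta + eps * z_1) here ...
param -= eps * randn_like_seeded(param, seeds[0])

# final update: theta <- theta - eta * sum_i mu_i * z_i  (Eq. 10, B = [z_1..z_k])
for s, m in zip(seeds, mu):
    param -= eta * m * randn_like_seeded(param, s)
```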
|
| 111 |
+
#### 4. Theoretical Analysis
|
| 112 |
+
**Definition 4.1.** Let $\Sigma = \operatorname{Cov}(\zeta)$ be the covariance matrix of the gradient noise, the effective noise $\sigma_e^2$ can be decomposed
|
| 113 |
+
<span id="page-4-3"></span>
|
| 114 |
+
Figure 2. GPU memory usage comparison across different models. BSZO maintains memory consumption comparable to MeZO, while MeZO-Adam and HiZOO require significantly more memory due to storing optimizer states or Hessian estimates.
|
| 115 |
+
as:
|
| 116 |
+
$$\sigma_e^2 = \sigma_\varepsilon^2 + \operatorname{tr}(\Sigma), \tag{14}$$
|
| 117 |
+
where $\operatorname{tr}(\Sigma) \leq \sigma_g^2$ (Assumption 3.2). The justification for this definition is provided by Lemma B.5 in Appendix B.2. For analytical tractability, we assume that $\sigma_e$ is fixed (taking the worst-case noise across batches gives the same result). The convergence of BSZO is characterized by the following theorem:
|
| 118 |
+
<span id="page-4-1"></span>**Theorem 4.2.** Under Assumptions 3.1 and 3.2, let $\tilde{n} = n + k + 1$ be effective dimension. Suppose m = k and $\eta < \frac{2}{L\gamma\tilde{n}}$ . Then, after T iterations, the following inequality holds:
|
| 119 |
+
$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{\mathcal{L}(\theta_0) - \mathcal{L}^*}{\beta(\eta)\eta\gamma kT} + \frac{L\eta\gamma(\tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2)}{2\beta(\eta)}, \tag{15}$$
|
| 121 |
+
where $\beta(\eta) := 1 - \frac{L\eta\gamma\tilde{n}}{2}$ and $\sigma_e^2 = \sigma_\varepsilon^2 + tr(\Sigma)$ .
|
| 122 |
+
<span id="page-4-2"></span>**Corollary 4.3.** Let $\eta = \frac{1}{L\gamma\bar{n}}$ , then $\beta = 1/2$ , which simplifies Theorem 4.2 to:
|
| 123 |
+
$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{2L\gamma \tilde{n}\Delta_0}{kT} + tr(\Sigma) + \frac{n}{\tilde{n}}\sigma_{\varepsilon}^2, \tag{16}$$
|
| 124 |
+
where $\Delta_0 := \mathcal{L}(\theta_0) - \mathcal{L}^*$ .
|
| 125 |
+
According to Corollary 4.3, the convergence rate of BSZO is improved by the factor of subspace dimension k. Although $\gamma$ slightly reduces the convergence rate, it is crucial for training stability. We also analyze the convergence under adaptive sampling in Theorem B.7 (Appendix B.6).
|
| 126 |
+
#### 5. Experiments
|
| 127 |
+
In this section, we evaluate the performance of BSZO and BSZO-B (Section 3.4) on various fine-tuning tasks in
|
| 128 |
+
{5}------------------------------------------------
|
| 129 |
+
<span id="page-5-0"></span>*Table 3.* Experiments on three different models (OPT-1.3B, Mistral-7B, OPT-13B). We show the test accuracy (%) of MeZO, MeZO-Adam, HiZOO, LOZO, BSZO, and BSZO-B on them, with the top two results highlighted in bold. BSZO-B is the baseline version of BSZO. Since fp16 can cause training crashes with Adam, we did not record the results of ZO-Adam for Mistral-7B. \* indicates training collapse due to numerical overflow under fp16 precision.
|
| 130 |
+
| MODEL | METHOD | SST-2 | RTE | COPA | WIC | WSC | TREC | AVG |
|
| 131 |
+
|------------|-----------|-----------|-------|-----------|-------|-------|-------|-------|
|
| 132 |
+
| | | SENTIMENT | NLI | REASONING | | | TOPIC | |
|
| 133 |
+
| OPT-1.3B | MEZO | 91.74 | 64.98 | 76.0 | 58.78 | 59.62 | 80.6 | 71.95 |
|
| 134 |
+
| OPT-1.3B | MEZO-ADAM | 93.35 | 60.29 | 75.0 | 56.58 | 62.50 | 79.4 | 71.19 |
|
| 135 |
+
| OPT-1.3B | HIZOO | 91.51 | 62.09 | 77.0 | 56.58 | 63.46 | 66.2 | 69.48 |
|
| 136 |
+
| OPT-1.3B | LOZO | 92.66 | 63.18 | 75.0 | 56.58 | 57.69 | 75.8 | 70.15 |
|
| 137 |
+
| OPT-1.3B | BSZO | 92.43 | 66.79 | 79.0 | 59.88 | 64.42 | 87.0 | 74.92 |
|
| 138 |
+
| OPT-1.3B | BSZO-B | 93.01 | 64.98 | 81.0 | 59.09 | 61.54 | 87.4 | 74.50 |
|
| 139 |
+
| MISTRAL-7B | MEZO | 90.94 | 64.26 | 88.0 | 56.58 | 63.46 | 88.6 | 75.31 |
|
| 140 |
+
| MISTRAL-7B | HIZOO | 93.01 | 63.90 | 90.0 | 55.64 | 63.46 | * | 73.20 |
|
| 141 |
+
| MISTRAL-7B | LOZO | 92.43 | 61.37 | 86.0 | 57.83 | 63.46 | * | 72.22 |
|
| 142 |
+
| MISTRAL-7B | BSZO | 94.50 | 75.81 | 87.0 | 60.03 | 59.62 | 90.0 | 77.83 |
|
| 143 |
+
| MISTRAL-7B | BSZO-B | 94.04 | 78.70 | 87.0 | 59.72 | 60.58 | 91.0 | 78.51 |
|
| 144 |
+
| OPT-13B | MEZO | 85.89 | 62.09 | 80.0 | 54.55 | 60.58 | 59.4 | 67.09 |
|
| 145 |
+
| OPT-13B | MEZO-ADAM | 79.82 | 61.73 | 81.0 | 54.39 | 57.69 | 62.2 | 66.14 |
|
| 146 |
+
| OPT-13B | HIZOO | 72.71 | 62.46 | 80.0 | 52.35 | 46.15 | 19.8 | 55.58 |
|
| 147 |
+
| OPT-13B | LOZO | 86.12 | 57.04 | 80.0 | 55.96 | 59.62 | 60.4 | 66.52 |
|
| 148 |
+
| OPT-13B | BSZO | 93.23 | 69.31 | 83.0 | 56.27 | 61.54 | 79.2 | 73.76 |
|
| 149 |
+
| OPT-13B | BSZO-B | 91.86 | 71.84 | 85.0 | 53.14 | 64.42 | 80.8 | 74.51 |
|
| 150 |
+
different language models, comparing them with several baselines: MeZO [\(Malladi et al.,](#page-8-1) [2023\)](#page-8-1), MeZO-Adam [\(Mal](#page-8-1)[ladi et al.,](#page-8-1) [2023\)](#page-8-1), HiZOO [\(Zhao et al.,](#page-8-5) [2025\)](#page-8-5), and LOZO [\(Chen et al.,](#page-7-4) [2024\)](#page-7-4). Our experiments show that both variants achieve excellent robustness and strong accuracy across most scenarios, requiring only the GPU memory needed for forward propagation, making them more cost-effective than HiZOO and MeZO-Adam.
|
| 151 |
+
#### 5.1. Experimental Setup
|
| 152 |
+
Language Models. The experiments in this paper center on two categories of models: masked Language Models (mLMs) and decoder-only Large Language Models (LLMs). For mLMs, we adopt RoBERTa-large (355M) [\(Liu et al.,](#page-8-12) [2019\)](#page-8-12) as the backbone model. For decoder-only LLMs, we select OPT-1.3B and OPT-13B [\(Zhang et al.,](#page-8-13) [2022\)](#page-8-13), as well as Mistral-7B [\(Jiang et al.,](#page-7-6) [2023\)](#page-7-6).
|
| 153 |
+
Datasets. We fully fine-tune the above models on tasks from the GLUE [\(Wang et al.,](#page-8-14) [2018\)](#page-8-14), SuperGLUE [\(Wang](#page-8-15) [et al.,](#page-8-15) [2019\)](#page-8-15) and TREC [\(Li & Roth,](#page-8-16) [2002\)](#page-8-16) benchmarks, including Stanford Sentiment Treebank (SST-2), Boolean Questions (BoolQ) [\(Clark et al.,](#page-7-7) [2019\)](#page-7-7), Recognizing Textual Entailment (RTE) [\(Dagan et al.,](#page-7-8) [2005\)](#page-7-8), Choice of Plausible Alternatives (COPA) [\(Roemmele et al.,](#page-8-17) [2011\)](#page-8-17), Word-in-Context (WIC) [\(Pilehvar & Camacho-Collados,](#page-8-18) [2019\)](#page-8-18), Winograd Schema Challenge (WSC) [\(Levesque](#page-7-9) [et al.,](#page-7-9) [2012\)](#page-7-9), CommitmentBank (CB) [\(De Marneffe et al.,](#page-7-10) [2019\)](#page-7-10), and TREC. Following HiZOO [\(Zhao et al.,](#page-8-5) [2025\)](#page-8-5), we use the first n<sub>1</sub> samples (up to 1000) from the training set for training and the next n<sub>2</sub> samples for validation. The original validation set serves as the test set. See Table [6](#page-18-0) in Appendix [C](#page-18-1) for specific values of n<sub>1</sub> and n<sub>2</sub>.
|
| 154 |
+
Hyperparameters. For BSZO and BSZO-B, we set the default subspace dimension k = 2 and the number of samples m = k + 1. This results in 3 forward passes per step for BSZO (with caching) and 4 for BSZO-B (without caching). BSZO matches HiZOO's forward pass count. As discussed in Section [3.4,](#page-3-1) we report results for both BSZO and BSZO-B across all models, with particular focus on comparing them under reduced precision (Mistral-7B in fp16 and OPT-13B in bf16) to examine the caching effect. Other methods use their default hyperparameters. Given the slower convergence of zeroth-order methods, all experiments are trained for up to 20,000 steps [\(Zhang et al.,](#page-8-19) [2024\)](#page-8-19), with early stopping applied when validation performance does not improve for 8 evaluations (4,000 steps). For every experiment, we set the perturbation scale to ε = 10<sup>−4</sup> and the batch size to 16. Hyperparameters are tuned via grid search. We select the best configuration based on validation performance and report its test accuracy. Due to memory constraints, we load OPT-13B in bf16 precision and Mistral-7B in fp16 precision, while other models use fp32. All experiments are conducted on a single H200 GPU. More details are provided in Appendix [C.](#page-18-1)
|
| 155 |
+
{6}------------------------------------------------
|
| 156 |
+
#### 5.2. Performance in Masked Language Models
|
| 157 |
+
BSZO achieves stable and competitive performance on mLMs. As shown in Table [1,](#page-3-2) BSZO-B reaches 77.39% average accuracy on RoBERTa-large, surpassing MeZO (77.04%, +0.35%), MeZO-Adam (73.73%, +3.66%), Hi-ZOO (68.82%, +8.57%), and LOZO (74.14%, +3.25%). BSZO achieves 77.28% average accuracy, second only to BSZO-B. BSZO secures top result on SST-2 (92.66%), while BSZO-B excels on RTE (68.38%) and WIC (57.21%). Moreover, BSZO exhibits notably lower variance across tasks (see Table [11](#page-22-0) for raw results). Both variants demonstrate strong and consistent performance across all tasks.
|
| 158 |
+
#### <span id="page-6-0"></span>5.3. Performance in decoder-only models
|
| 159 |
+
BSZO performs well on larger LLMs. Table [3](#page-5-0) shows that BSZO outperforms baselines on decoder-only models, with gains increasing as model size grows. BSZO-B typically maintains a small lead over BSZO.
|
| 160 |
+
OPT-1.3B. BSZO achieves 74.92% average accuracy, the highest among all methods, beating MeZO (71.95%, +2.97%), MeZO-Adam (71.19%, +3.73%), HiZOO (69.48%, +5.44%), and LOZO (70.15%, +4.77%). BSZO-B reaches 74.50% average accuracy. BSZO secures top results on RTE (66.79%), WIC (59.88%), and WSC (64.42%), while BSZO-B excels on COPA (81.0%) and TREC (87.4%). Both variants perform well across most tasks.
|
| 161 |
+
Mistral-7B (fp16). BSZO reaches 77.83% on average, ahead of MeZO (75.31%, +2.52%), HiZOO (73.20%, +4.63%), and LOZO (72.22%, +5.61%). It also achieves the best results on SST-2 (94.50%, +1.49% vs HiZOO) and WIC (60.03%, +3.45% vs MeZO). BSZO-B reaches 78.51% on average, excelling on RTE (78.70%) and TREC (91.0%). The small 0.68% gap shows that the two variants perform very similarly.
|
| 162 |
+
OPT-13B (bf16). The gains grow larger here. BSZO reaches 73.76% on average, up 6.67% over MeZO (67.09%), 7.62% over MeZO-Adam (66.14%), and 7.24% over LOZO (66.52%). BSZO achieves strong results across tasks, including top performance on WIC (56.27%), with particularly notable gains on SST-2 (93.23%, +7.34% vs MeZO) and TREC (79.2%, +19.8% vs MeZO). BSZO-B reaches 74.51% on average (+7.42% vs MeZO), with stronger balance across tasks. BSZO-B maintains a slight edge with one additional forward pass, though the gap in average accuracy remains very small (0.75%).
|
| 163 |
+
Robustness. Reduced precision exposes fragility in several baselines (Table [3\)](#page-5-0) and Figure [1.](#page-1-0) HiZOO and LOZO are particularly affected: on Mistral-7B (fp16), both methods suffer from TREC training overflow (\*). On OPT-13B (bf16), all baseline methods show varying degrees of performance degradation compared to OPT-1.3B, with HiZOO being especially severe—its average accuracy drops from 69.48% to 55.58%, with TREC collapsing to 19.8% and
|
| 164 |
+
<span id="page-6-1"></span>*Table 4.* Memory usage (GB) and per-step time (ms) across different models.
|
| 165 |
+
| | OPT-1.3B | | | Mistral-7B | OPT-13B | | |
|
| 166 |
+
|-----------|----------|-------|------|------------|---------|-------|--|
|
| 167 |
+
| Method | Mem | Time | Mem | Time | Mem | Time | |
|
| 168 |
+
| MeZO | 9.1 | 109.7 | 18.3 | 283.9 | 30.0 | 464.2 | |
|
| 169 |
+
| MeZO-Adam | 19.7 | 135.1 | 47.6 | 373.1 | 82.1 | 614.5 | |
|
| 170 |
+
| HiZOO | 15.7 | 188.0 | 34.3 | 540.2 | 58.9 | 877.1 | |
|
| 171 |
+
| LOZO | 9.3 | 102.0 | 18.3 | 274.2 | 30.0 | 452.0 | |
|
| 172 |
+
| BSZO | 9.8 | 97.0 | 18.8 | 275.7 | 30.1 | 440.5 | |
|
| 173 |
+
WSC to 46.15%. We suspect H200's default TF32 mode introduces errors in Hessian-based estimates. In contrast, BSZO and BSZO-B remain stable throughout all precision settings, with BSZO-B even maintaining performance (from 74.50% to 74.51%).
|
| 174 |
+
## 5.4. Memory and Time Efficiency
|
| 175 |
+
BSZO keeps memory usage low. As shown in Figure [2](#page-4-3) and Table [4,](#page-6-1) BSZO's memory footprint stays close to MeZO across three model scales—ranging from 1.00× to 1.08× of MeZO's usage. In contrast, HiZOO and MeZO-Adam need 1.73×–1.96× and 2.16×–2.74× more memory because they store additional optimizer states (momentum, Hessian estimates). BSZO avoids this overhead by using only $O(k^2)$ extra space for the posterior covariance and adaptive noise estimation.
|
| 176 |
+
BSZO runs fast. Table [4](#page-6-1) also reports per-step time. BSZO and LOZO are the fastest—both under 100ms per step on OPT-1.3B. HiZOO is roughly 2× slower due to Hessian estimation, and MeZO-Adam incurs extra cost from momentum updates.
|
| 177 |
+
## 5.5. Ablation Study
|
| 178 |
+
Table [5](#page-7-11) shows ablation results on OPT-1.3B for two design choices of BSZO: subspace dimension k and sample count m. In Table [5\(](#page-7-11)a), when m = k, RTE accuracy climbs from 60.29% (k = 1) to 67.51% (k = 4), while SST-2 peaks at k = 8 (93.23%), suggesting that increasing k generally improves performance. In Table [5\(](#page-7-11)b), with extra refinement (m = k + 1), RTE performance improves consistently. Comparing to Table [5\(](#page-7-11)a), m = k + 1 boosts RTE by 1-2% at most k levels (e.g., from 64.26% to 66.79% at k = 2, from 66.07% to 68.59% at k = 8). This confirms that the adaptive sampling step refines the posterior estimate (see Table [12](#page-22-1) for more details). Table [5\(](#page-7-11)c) investigates adaptive noise under bf16 precision on OPT-1.3B. As k grows, the gap between w/ and w/o adaptive noise becomes more pronounced: at k = 8, the adaptive variant leads by 8.67% on RTE, indicating that adaptive noise yields substantial gains in low-precision settings. Table [5\(](#page-7-11)d) validates this on
|
| 179 |
+
{7}------------------------------------------------
|
| 180 |
+
<span id="page-7-11"></span>*Table 5.* Ablation studies. (a) Effect of subspace dimension k with m = k on OPT-1.3B. (b) Effect of m = k + 1 on OPT-1.3B. (c) Effect of adaptive noise on OPT-1.3B (bf16). (d) Effect of adaptive noise on OPT-13B (bf16). Best results in bold. Full results in fp32 are in Table [12.](#page-22-1)
|
| 181 |
+
| (a) Effect of k | | | (b) Effect of m | | | (c) Adaptive Noise | | |
|
| 182 |
+
|---|-------|------------------------------------|---|-------|-------|--------------------|-------|-------|--|
|
| 183 |
+
| k | SST-2 | RTE | k | SST-2 | RTE | k | w/ | w/o | |
|
| 184 |
+
| 1 | 92.32 | 60.29 | 1 | 91.74 | 61.37 | 1 | 54.15 | 55.24 | |
|
| 185 |
+
| 2 | 92.78 | 64.26 | 2 | 92.43 | 66.79 | 2 | 57.76 | 56.32 | |
|
| 186 |
+
| 4 | 92.66 | 67.51 | 4 | 93.58 | 66.43 | 4 | 61.73 | 56.32 | |
|
| 187 |
+
| 8 | 93.23 | 66.07 | 8 | 93.23 | 68.59 | 8 | 66.43 | 57.76 | |
|
| 188 |
+
| (d) Adaptive Noise on OPT-13B (bf16) | | | | | | | | |
|
| 189 |
+
|--------------------------------------|----------------|----------------|----------------|----------------|----------------|----------------|--|--|
|
| 190 |
+
| Method | SST-2 | RTE | WSC | COPA | TREC | WIC | | |
|
| 191 |
+
| w/ adaptive<br>w/o adaptive | 93.23<br>91.97 | 69.31<br>63.18 | 61.54<br>58.65 | 83.00<br>85.00 | 79.20<br>75.00 | 56.27<br>54.70 | | |
|
| 192 |
+
OPT-13B (bf16), where adaptive noise brings improvements on 5 out of 6 tasks, with RTE gaining 6.13%.
|
| 193 |
+
# 6. Conclusion
|
| 194 |
+
In this work, we introduce BSZO, which is the first zeroth-order optimizer that applies Kalman filtering to aggregate gradient information across multiple perturbation directions for LLM fine-tuning. By treating finite-difference measurements as noisy observations of the true gradient, BSZO builds a posterior distribution over the projected gradient and refines it through Bayesian updates. We design a residual-based adaptive mechanism to adjust the perturbation scale adaptively without manual tuning. Our theoretical analysis shows that BSZO improves the convergence rate by a factor of k/γ over standard ZO methods. Experiments on RoBERTa, Mistral, and OPT show that BSZO achieves strong accuracy across various tasks, remains stable under fp16/bf16 precision where existing methods often collapse, and keeps memory usage close to inference-only baselines.
|
| 195 |
+
# Software and Data
|
| 196 |
+
Our implementation is available at github.com/AeonianQuill/BSZO. Datasets used in this work (GLUE, SuperGLUE, TREC) are publicly accessible and should be downloaded separately. Pre-trained models can also be obtained from Hugging Face.
|
| 197 |
+
# Impact Statement
|
| 198 |
+
This paper presents work whose goal is to advance the field of machine learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
|
| 199 |
+
# References
|
| 200 |
+
- <span id="page-7-1"></span>Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33: 1877–1901, 2020.
|
| 201 |
+
- <span id="page-7-4"></span>Chen, Y., Zhang, Y., Cao, L., Yuan, K., and Wen, Z. Enhancing zeroth-order fine-tuning for language models with low-rank structures. *arXiv preprint arXiv:2410.07698*, 2024.
|
| 202 |
+
- <span id="page-7-7"></span>Clark, C., Lee, K., Chang, M.-W., Kwiatkowski, T., Collins, M., and Toutanova, K. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In *Proceedings of NAACL-HLT*, pp. 2924–2936, 2019.
|
| 203 |
+
- <span id="page-7-8"></span>Dagan, I., Glickman, O., and Magnini, B. The PASCAL recognising textual entailment challenge. In *Machine Learning Challenges Workshop*, pp. 177–190. Springer, 2005.
|
| 204 |
+
- <span id="page-7-10"></span>De Marneffe, M.-C., Simons, M., and Tonhauser, J. The CommitmentBank: Investigating projection in naturally occurring discourse. In *Proceedings of Sinn und Bedeutung*, volume 23, pp. 107–124, 2019.
|
| 205 |
+
- <span id="page-7-0"></span>Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. *Proceedings of NAACL-HLT*, pp. 4171–4186, 2019.
|
| 206 |
+
- <span id="page-7-3"></span>Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. Parameter-efficient transfer learning for NLP. In *International Conference on Machine Learning*, pp. 2790–2799, 2019.
|
| 207 |
+
- <span id="page-7-2"></span>Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. LoRA: Low-rank adaptation of large language models. In *International Conference on Learning Representations*, 2022.
|
| 208 |
+
- <span id="page-7-6"></span>Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de Las Casas, D., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.-A., Stock, P., Le Scao, T., Lavril, T., Wang, T., Lacroix, T., and El Sayed, W. Mistral 7b. *arXiv preprint arXiv:2310.06825*, 2023.
|
| 209 |
+
- <span id="page-7-5"></span>Kalman, R. E. A new approach to linear filtering and prediction problems. *Journal of Basic Engineering*, 82(1): 35–45, 1960.
|
| 210 |
+
- <span id="page-7-9"></span>Levesque, H. J., Davis, E., and Morgenstern, L. The Winograd schema challenge. In *Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning*, 2012.
|
| 211 |
+
{8}------------------------------------------------
|
| 212 |
+
- <span id="page-8-16"></span>440 441 442 Li, X. and Roth, D. Learning question classifiers. In *COL-ING 2002: The 19th International Conference on Computational Linguistics*, 2002.
|
| 213 |
+
- <span id="page-8-12"></span>443 444 445 446 447 Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*, 2019.
|
| 214 |
+
- <span id="page-8-4"></span>Liu, Y., Zhu, Z., Gong, C., Cheng, M., Hsieh, C.-J., and You, Y. Sparse MeZO: Less parameters for better performance in zeroth-order LLM fine-tuning. *arXiv preprint arXiv:2402.15751*, 2024.
|
| 215 |
+
- <span id="page-8-1"></span>Malladi, S., Gao, T., Nichani, E., Damian, A., Lee, J. D., Chen, D., and Arora, S. Fine-tuning language models with just forward passes. *Advances in Neural Information Processing Systems*, 36:53038–53075, 2023.
|
| 216 |
+
- <span id="page-8-8"></span>Mania, H., Guy, A., and Recht, B. Simple random search of static linear policies is competitive for reinforcement learning. In *Advances in Neural Information Processing Systems*, volume 31, 2018.
|
| 217 |
+
- <span id="page-8-18"></span>Pilehvar, M. T. and Camacho-Collados, J. WiC: the wordin-context dataset for evaluating context-sensitive meaning representations. In *Proceedings of NAACL-HLT*, pp. 1267–1273, 2019.
|
| 218 |
+
- <span id="page-8-11"></span>PyTorch Team. Reproducibility. [ [org/docs/stable/notes/randomness.html]( 2024. Accessed: 2026-01-02.
|
| 219 |
+
- <span id="page-8-10"></span>Rasmussen, C. E. and Williams, C. K. I. *Gaussian Processes for Machine Learning*. MIT Press, 2006.
|
| 220 |
+
- <span id="page-8-17"></span>Roemmele, M., Bejan, C. A., and Gordon, A. S. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In *AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning*, 2011.
|
| 221 |
+
- <span id="page-8-7"></span>Salimans, T., Ho, J., Chen, X., Sidor, S., and Sutskever, I. Evolution strategies as a scalable alternative to reinforcement learning. *arXiv preprint arXiv:1703.03864*, 2017.
|
| 222 |
+
- <span id="page-8-9"></span>Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., and De Freitas, N. Taking the human out of the loop: A review of Bayesian optimization. *Proceedings of the IEEE*, 104(1):148–175, 2016.
|
| 223 |
+
- <span id="page-8-3"></span>Spall, J. C. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. *IEEE Transactions on Automatic Control*, 37(3):332–341, 1992.
|
| 224 |
+
- <span id="page-8-6"></span>Sun, Y., Huang, T., Ding, L., Shen, L., and Tao, D. TeZO: Empowering the low-rankness on the temporal dimension in the zeroth-order optimization for fine-tuning LLMs. *arXiv preprint arXiv:2501.19057*, 2025.
|
| 225 |
+
- <span id="page-8-0"></span>Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Roziere, B., Goyal, N., Hambro, E., ` Azhar, F., et al. LLaMA: Open and efficient foundation language models. *arXiv preprint arXiv:2302.13971*, 2023.
|
| 226 |
+
- <span id="page-8-14"></span>Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *Proceedings of the 2018 EMNLP Workshop BlackboxNLP*, pp. 353–355, 2018.
|
| 227 |
+
- <span id="page-8-15"></span>Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In *Advances in Neural Information Processing Systems*, volume 32, 2019.
|
| 228 |
+
- <span id="page-8-13"></span>Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura, P. S., Sridhar, A., Wang, T., and Zettlemoyer, L. OPT: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*, 2022.
|
| 229 |
+
- <span id="page-8-19"></span>Zhang, Y., Li, P., Hong, J., Li, J., Zhang, Y., Zheng, W., Chen, P.-Y., Lee, J. D., Yin, W., Hong, M., Wang, Z., Liu, S., and Chen, T. Revisiting zeroth-order optimization for memory-efficient LLM fine-tuning: A benchmark. In *International Conference on Machine Learning*, 2024.
|
| 230 |
+
- <span id="page-8-2"></span>Zhang, Z. Scalable derivative-free optimization algorithms with low-dimensional subspace techniques. *arXiv preprint arXiv:2501.04536*, 2025.
|
| 231 |
+
- <span id="page-8-5"></span>Zhao, Y., Dang, S., Ye, H., Dai, G., Qian, Y., and Tsang, I. W. Second-order fine-tuning without pain for LLMs: A Hessian informed zeroth-order optimizer. In *International Conference on Learning Representations*, 2025.
|
| 232 |
+
{9}------------------------------------------------
|
| 233 |
+
### <span id="page-9-1"></span>A. BSZO Basic Algorithm
|
| 236 |
+
We provide a basic version of BSZO without caching optimization, which supports arbitrary sampling directions when m > k.
|
| 237 |
+
#### <span id="page-9-0"></span>Algorithm 2 Bayesian Subspace Zeroth-Order Optimization (Basic Version)
|
| 238 |
+
```
|
| 239 |
+
Input: parameters \theta, learning rate \eta, perturbation scale \varepsilon, subspace dimension k, sampling steps m, prior variance \sigma_p^2,
|
| 240 |
+
noise variance \sigma_e^2, smoothing factor \alpha, max step T
|
| 241 |
+
for t = 1 to T do
|
| 242 |
+
Sample k random seeds \{s_i\}_{i=1}^k
|
| 243 |
+
Initialize \mu \leftarrow \mathbf{0}_k, \Sigma \leftarrow \sigma_p^2 I_k, f_0 \leftarrow \mathcal{L}(\theta)
|
| 244 |
+
for \tau = 1 to m do
|
| 245 |
+
d \leftarrow d_{\tau} \text{ if } \tau \leq k, \text{ else } d \leftarrow \arg \max_{\|v\|=1} v^{\top} \Sigma v
|
| 246 |
+
for i = 1 to k do
|
| 247 |
+
\theta \leftarrow \theta + \varepsilon \cdot d_i \cdot \text{RANDN}(n, s_i) \text{ if } d_i > 10^{-10}
|
| 248 |
+
end for
|
| 249 |
+
y \leftarrow (\mathcal{L}(\theta) - f_0)/\varepsilon
|
| 250 |
+
for i = 1 to k do
|
| 251 |
+
\theta \leftarrow \theta - \varepsilon \cdot d_i \cdot \text{RANDN}(n, s_i) \text{ if } d_i > 10^{-10}
end for
|
| 252 |
+
\begin{aligned} r &\leftarrow (y - d^{\top} \mu) / \|d\|, \quad \sigma_e^2 \leftarrow (1 - \alpha) \sigma_e^2 + \alpha r^2 \\ K &\leftarrow \Sigma d / (d^{\top} \Sigma d + \sigma_e^2) \end{aligned}
|
| 253 |
+
\mu \leftarrow \mu + K(y - d^{\mathsf{T}}\mu), \quad \Sigma \leftarrow \Sigma - Kd^{\mathsf{T}}\Sigma
|
| 254 |
+
end for
|
| 255 |
+
for i = 1 to k do
|
| 256 |
+
\theta \leftarrow \theta - \eta \cdot \mu_i \cdot \text{RANDN}(n, s_i)
|
| 257 |
+
end for
|
| 258 |
+
end for
|
| 259 |
+
return \theta
|
| 260 |
+
RANDN(n, s): returns n-dim Gaussian vector seeded by s
|
| 261 |
+
```
|
| 262 |
+
#### **B.** Theoretical Proofs
|
| 263 |
+
### **B.1. Auxiliary Lemmas**
|
| 264 |
+
<span id="page-9-3"></span>**Lemma B.1** (Expectation of Direction Derivative). *Under Assumption 3.1, the one-sided difference satisfies:*
|
| 265 |
+
$$\mathbb{E}[\hat{y}(d)] = d^{\mathsf{T}}B^{\mathsf{T}}g + O(\varepsilon L) \tag{17}$$
|
| 266 |
+
*Proof.* By $\mathbb{E}[\mathcal{L}(\theta;\xi)] = \mathcal{L}(\theta)$ :
|
| 267 |
+
$$\mathbb{E}[\hat{y}(d)] = \frac{\mathcal{L}(\theta_0 + \varepsilon B d) - \mathcal{L}(\theta_0)}{\varepsilon}. \tag{18}$$
|
| 269 |
+
By Taylor expansion at $\theta_0$ :
|
| 270 |
+
$$\mathcal{L}(\theta_0 + \varepsilon Bd) = \mathcal{L}(\theta_0) + \varepsilon \langle \nabla \mathcal{L}(\theta_0), Bd \rangle + \frac{\varepsilon^2}{2} (Bd)^\top H(Bd) + O(\varepsilon^3), \tag{19}$$
|
| 272 |
+
where $H = \nabla^2 \mathcal{L}(\theta_0)$ is the Hessian.
|
| 273 |
+
Substituting:
|
| 274 |
+
$$\mathbb{E}[\hat{y}(d)] = \frac{\varepsilon d^{\top} B^{\top} g + \frac{\varepsilon^2}{2} d^{\top} B^{\top} H B d + O(\varepsilon^3)}{\varepsilon} = d^{\top} B^{\top} g + \frac{\varepsilon}{2} d^{\top} B^{\top} H B d + O(\varepsilon^2) \tag{20}$$
|
| 275 |
+
<span id="page-9-2"></span>By Assumption 3.1, $||H|| \le L$ , so $|d^{\top}B^{\top}HBd| \le L||Bd||^2$ . Thus the bias is $O(\varepsilon L)$ .
|
| 276 |
+
{10}------------------------------------------------
|
| 277 |
+
- Lemma B.2 (Variance of Direction Derivative). Let $\Sigma = Cov(\zeta)$ where $\zeta = \nabla \mathcal{L}(\theta; \xi) \nabla \mathcal{L}(\theta)$ . When using the same mini-batch $\xi$ for both function evaluations:
|
| 278 |
+
- (a) Conditional variance: $Var(\hat{y}(d)|B) = (Bd)^{\top}\Sigma(Bd) + O(\varepsilon^4)$
|
| 279 |
+
- (b) Unconditional variance: $\mathbb{E}_B[Var(\hat{y}(d)|B)] = tr(\Sigma) + O(\varepsilon^4)$
|
| 280 |
+
- *Proof.* Key insight: Using the same mini-batch $\xi$ for both evaluations causes the noise to be correlated, not independent.
|
| 281 |
+
#### (a) Conditional variance derivation:
|
| 282 |
+
For fixed $\xi$ , Taylor expand the random loss $\mathcal{L}(\theta; \xi)$ at $\theta_0$ :
|
| 283 |
+
$$\mathcal{L}(\theta_0 + \varepsilon Bd; \xi) = \mathcal{L}(\theta_0; \xi) + \varepsilon \langle \nabla \mathcal{L}(\theta_0; \xi), Bd \rangle + O(\varepsilon^2)$$
|
| 284 |
+
(21)
|
| 285 |
+
Since both evaluations use the same $\xi$ , the base term $\mathcal{L}(\theta_0; \xi)$ cancels:
|
| 286 |
+
$$\hat{y}(d) = \frac{\mathcal{L}(\theta_0 + \varepsilon Bd; \xi) - \mathcal{L}(\theta_0; \xi)}{\varepsilon} = \langle \nabla \mathcal{L}(\theta_0; \xi), Bd \rangle + O(\varepsilon)$$
|
| 287 |
+
(22)
|
| 288 |
+
Let $\nabla \mathcal{L}(\theta_0; \xi) = \nabla \mathcal{L}(\theta_0) + \zeta$ where $\zeta$ is zero-mean noise with $Cov(\zeta) = \Sigma$ . Given B:
|
| 289 |
+
$$\operatorname{Var}(\hat{y}(d)|B) = \operatorname{Var}(\langle \zeta, Bd \rangle | B) = (Bd)^{\top} \operatorname{Cov}(\zeta)(Bd) = (Bd)^{\top} \Sigma(Bd) + O(\varepsilon^{2})$$
|
| 290 |
+
(23)
|
| 291 |
+
#### (b) Unconditional variance derivation:
|
| 292 |
+
For coordinate-axis sampling $d = e_i$ , we have $Bd = z_i \sim \mathcal{N}(0, I_n)$ .
|
| 293 |
+
Taking expectation over B:
|
| 294 |
+
$$\mathbb{E}_B[(Bd)^{\top}\Sigma(Bd)] = \mathbb{E}[z_i^{\top}\Sigma z_i]$$
|
| 295 |
+
(24)
|
| 296 |
+
By the trace trick:
|
| 297 |
+
$$\mathbb{E}[z_i^{\top} \Sigma z_i] = \mathbb{E}[\operatorname{tr}(\Sigma z_i z_i^{\top})] = \operatorname{tr}(\Sigma \cdot \mathbb{E}[z_i z_i^{\top}]) = \operatorname{tr}(\Sigma \cdot I_n) = \operatorname{tr}(\Sigma)$$
|
| 298 |
+
(25)
|
| 299 |
+
By Assumption 3.2,
|
| 300 |
+
$$\operatorname{tr}(\Sigma) = \mathbb{E}[\|\zeta\|^2] \le \sigma_q^2$$
|
| 301 |
+
.
|
| 302 |
+
**Lemma B.3** (High-Dimensional Approximate Orthogonality). Let $z_1, \ldots, z_k \stackrel{iid}{\sim} \mathcal{N}(0, I_n)$ . When $n \gg k^2$ :
|
| 303 |
+
- (a) $||z_i||^2 = n \pm O(\sqrt{n})$
|
| 304 |
+
- (b) For $i \neq j$ : $\frac{z_i^\top z_j}{\|z_i\| \|z_j\|} = O(1/\sqrt{n})$
|
| 305 |
+
#### *Proof.* (a) Norm concentration:
|
| 306 |
+
Since $||z_i||^2 = \sum_{j=1}^n z_{ij}^2 \sim \chi^2(n)$ , we have:
|
| 307 |
+
$$\mathbb{E}[\|z_i\|^2] = n, \quad \text{Var}(\|z_i\|^2) = 2n \tag{26}$$
|
| 308 |
+
By Chebyshev inequality or sub-Gaussian concentration:
|
| 309 |
+
$$\mathbb{P}\left(\left|\|z_i\|^2 - n\right| > t\sqrt{n}\right) \le 2e^{-ct^2} \tag{27}$$
|
| 310 |
+
Thus $||z_i||^2 = n \pm O(\sqrt{n})$ with high probability.
|
| 311 |
+
#### (b) Approximate orthogonality:
|
| 312 |
+
{11}------------------------------------------------
|
| 313 |
+
605 For independent z<sup>i</sup> , z<sup>j</sup> ∼ N (0, In), the inner product z ⊤ i z<sup>j</sup> = P<sup>n</sup> <sup>l</sup>=1 zilzjl is a sum of n independent random variables with:
|
| 314 |
+
$$\mathbb{E}[z_i^\top z_j] = 0, \quad \operatorname{Var}(z_i^\top z_j) = \sum_{l=1}^n \operatorname{Var}(z_{il} z_{jl}) = n$$
|
| 315 |
+
(28)
|
| 316 |
+
Thus z ⊤ i z<sup>j</sup> = O( √ n) with high probability. Since ∥zi∥∥zj∥ = O(n):
|
| 317 |
+
$$\cos \theta_{ij} = \frac{z_i^\top z_j}{\|z_i\| \|z_j\|} = O\left(\frac{\sqrt{n}}{n}\right) = O\left(\frac{1}{\sqrt{n}}\right) \to 0$$
|
| 318 |
+
(29)
|
| 319 |
+
This shows that random Gaussian vectors are approximately orthogonal in high dimensions.
|
| 320 |
+
<span id="page-11-2"></span>Lemma B.4 (Isserlis' Theorem Application). *For* z ∼ N (0, In) *and symmetric matrices* A, B*:*
|
| 321 |
+
$$\mathbb{E}[(z^{\top}Az)(z^{\top}Bz)] = tr(A)tr(B) + 2tr(AB)$$
|
| 322 |
+
(30)
|
| 323 |
+
*In particular, for* A = I<sup>n</sup> *and* B = Σ*:*
|
| 324 |
+
$$\mathbb{E}[\|z\|^2 \cdot z^{\mathsf{T}} \Sigma z] = (n+2) tr(\Sigma) \tag{31}$$
|
| 325 |
+
*Proof.* By Isserlis' theorem (Wick's theorem), for z ∼ N (0, In):
|
| 326 |
+
$$\mathbb{E}[z_i z_j z_k z_l] = \delta_{ij} \delta_{kl} + \delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk}$$
|
| 327 |
+
(32)
|
| 328 |
+
Expanding the quadratic forms:
|
| 329 |
+
$$(z^{\top}Az)(z^{\top}Bz) = \sum_{i,j,k,l} A_{ij}B_{kl}z_iz_jz_kz_l$$
|
| 330 |
+
(33)
|
| 331 |
+
Taking expectation:
|
| 332 |
+
$$\mathbb{E}[(z^{\top}Az)(z^{\top}Bz)] = \sum_{i,j,k,l} A_{ij}B_{kl}(\delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})$$
|
| 333 |
+
(34)
|
| 334 |
+
$$= \sum_{i,k} A_{ii} B_{kk} + \sum_{i,j} A_{ij} B_{ij} + \sum_{i,j} A_{ij} B_{ji}$$
|
| 335 |
+
(35)
|
| 336 |
+
$$= \operatorname{tr}(A)\operatorname{tr}(B) + \operatorname{tr}(AB) + \operatorname{tr}(AB^{\top})$$
|
| 337 |
+
(36)
|
| 338 |
+
For symmetric A, B: tr(AB<sup>⊤</sup>) = tr(AB), thus:
|
| 339 |
+
$$\mathbb{E}[(z^{\top}Az)(z^{\top}Bz)] = \operatorname{tr}(A)\operatorname{tr}(B) + 2\operatorname{tr}(AB)$$
|
| 340 |
+
(37)
|
| 341 |
+
Setting A = In, B = Σ:
|
| 342 |
+
$$\mathbb{E}[\|z\|^2 \cdot z^{\top} \Sigma z] = n \cdot \operatorname{tr}(\Sigma) + 2\operatorname{tr}(\Sigma) = (n+2)\operatorname{tr}(\Sigma)$$
|
| 343 |
+
(38)
|
| 344 |
+
### <span id="page-11-0"></span>B.2. Noise Variance Justification
|
| 345 |
+
The observation model yˆ(d) = d <sup>⊤</sup>g˜ + ν with ν ∼ N (0, σ<sup>2</sup> <sup>e</sup> ∥d∥ 2 ) is justified as follows.
|
| 346 |
+
<span id="page-11-1"></span>Lemma B.5 (Effective Noise Decomposition). *The effective noise variance decomposes as:*
|
| 347 |
+
$$\sigma_e^2 = \sigma_\varepsilon^2 + tr(\Sigma) \tag{39}$$
|
| 348 |
+
*where* σ 2 ε *is the finite-difference approximation error and tr*(Σ) *is the gradient noise variance.*
|
| 349 |
+
{12}------------------------------------------------
|
| 350 |
+
#### *Proof.* Step 1: Decomposition of the observation.
|
| 351 |
+
For coordinate-axis sampling with $d = e_i$ , the direction in parameter space is $z_i = Bd = Be_i$ (the *i*-th column of *B*), where $z_i \sim \mathcal{N}(0, I_n)$ .
|
| 352 |
+
The observation can be decomposed as:
|
| 353 |
+
$$y_i = \underbrace{z_i^{\mathsf{T}} g}_{\text{true signal gradient noise}} + \underbrace{z_i^{\mathsf{T}} \zeta}_{\text{finite-diff error}} + \underbrace{\epsilon_i}_{\text{finite-diff error}}$$
|
| 354 |
+
$$\tag{40}$$
|
| 355 |
+
where:
|
| 356 |
+
- $q = \nabla \mathcal{L}(\theta)$ is the true gradient
|
| 357 |
+
- $\zeta = \nabla \mathcal{L}(\theta; \xi) \nabla \mathcal{L}(\theta)$ is the stochastic gradient noise with $\mathbb{E}[\zeta] = 0$ and $\text{Cov}(\zeta) = \Sigma$
|
| 358 |
+
- $\epsilon_i \sim \mathcal{N}(0, \sigma_{\varepsilon}^2)$ is the finite-difference truncation error, independent of $\zeta$ and $z_i$
|
| 359 |
+
### Step 2: Identifying the noise term.
|
| 360 |
+
The observation noise is defined as $\nu_i := y_i - z_i^{\top} g = z_i^{\top} \zeta + \epsilon_i$ .
|
| 361 |
+
Since $\mathbb{E}[\zeta] = 0$ and $\mathbb{E}[\epsilon_i] = 0$ , we have $\mathbb{E}[\nu_i | z_i] = 0$ .
|
| 362 |
+
#### Step 3: Conditional variance (given $z_i$ ).
|
| 363 |
+
Since $\zeta$ and $\epsilon_i$ are independent:
|
| 364 |
+
$$Var(\nu_i|z_i) = Var(z_i^{\top} \zeta|z_i) + Var(\epsilon_i) = z_i^{\top} \Sigma z_i + \sigma_{\varepsilon}^2$$
|
| 365 |
+
(41)
|
| 366 |
+
#### Step 4: Unconditional variance (taking expectation over $z_i$ ).
|
| 367 |
+
Using the trace trick from Lemma B.2(b):
|
| 368 |
+
$$\mathbb{E}_{z_i}[z_i^{\top} \Sigma z_i] = \mathbb{E}[\operatorname{tr}(\Sigma z_i z_i^{\top})] = \operatorname{tr}(\Sigma \cdot \mathbb{E}[z_i z_i^{\top}]) = \operatorname{tr}(\Sigma \cdot I_n) = \operatorname{tr}(\Sigma)$$
|
| 369 |
+
(42)
|
| 370 |
+
Therefore, the effective noise variance is:
|
| 371 |
+
$$\sigma_{\varepsilon}^{2} := \mathbb{E}_{z_{i}}[\operatorname{Var}(\nu_{i}|z_{i})] = \operatorname{tr}(\Sigma) + \sigma_{\varepsilon}^{2} \tag{43}$$
|
| 372 |
+
By Assumption 3.2,
|
| 373 |
+
$$\operatorname{tr}(\Sigma) = \mathbb{E}[\|\zeta\|^2] \leq \sigma_g^2$$
|
| 374 |
+
, so $\sigma_e^2 \leq \sigma_g^2 + \sigma_\varepsilon^2$ .
|
| 375 |
+
#### **B.3. Proof of Main Convergence Theorem**
|
| 376 |
+
**Proof of Theorem 4.2.** Step 1: Single-step descent.
|
| 377 |
+
By Assumption 3.1 (*L*-smoothness):
|
| 378 |
+
$$\mathcal{L}(\theta_{t+1}) \le \mathcal{L}(\theta_t) + \langle g_t, \Delta \theta_t \rangle + \frac{L}{2} ||\Delta \theta_t||^2$$
|
| 379 |
+
(44)
|
| 380 |
+
where $g_t = \nabla \mathcal{L}(\theta_t)$ and $\Delta \theta_t = -\eta B_t \mu_t^{(k)}$ .
|
| 381 |
+
#### **Step 2: Inner product term.**
|
| 382 |
+
By Lemma B.2, the observation model is $y_i = z_i^\top g_t + z_i^\top \zeta + \epsilon_i$ , where $\zeta$ is gradient noise with $\text{Cov}(\zeta) = \Sigma$ , and $\epsilon_i \sim \mathcal{N}(0, \sigma_{\varepsilon}^2)$ is finite-difference error.
|
| 383 |
+
Let $\tilde{g} = g_t + \zeta$ and $\epsilon = [\epsilon_1, \dots, \epsilon_k]^{\top}$ . Then:
|
| 384 |
+
$$Y = B_t^{\top} \tilde{g} + \epsilon, \quad \mu_t^{(k)} = \gamma Y = \gamma (B_t^{\top} \tilde{g} + \epsilon)$$
|
| 385 |
+
(45)
|
| 386 |
+
{13}------------------------------------------------
|
| 387 |
+
The parameter update becomes:
|
| 388 |
+
$$B_t \mu_t^{(k)} = \gamma B_t B_t^{\top} (g_t + \zeta) + \gamma \sum_{i=1}^k z_i \epsilon_i$$
|
| 389 |
+
$$\tag{46}$$
|
| 390 |
+
Computing the expectation of the inner product. Since $\mathbb{E}[\zeta] = 0$ , $\mathbb{E}[\epsilon_i] = 0$ , and $\zeta$ , $\epsilon_i$ are independent of $B_t$ :
|
| 391 |
+
$$\mathbb{E}[\langle g_t, B_t \mu_t^{(k)} \rangle | \theta_t] = \gamma \mathbb{E}[g_t^\top B_t B_t^\top g_t] + \gamma \mathbb{E}[g_t^\top B_t B_t^\top \zeta] + \gamma \sum_{i=1}^k \mathbb{E}[(g_t^\top z_i) \epsilon_i]$$
|
| 392 |
+
(47)
|
| 393 |
+
The second term: $\mathbb{E}[g_t^{\top} B_t B_t^{\top} \zeta] = g_t^{\top} \mathbb{E}[B_t B_t^{\top}] \mathbb{E}[\zeta] = 0.$
|
| 394 |
+
The third term: $\mathbb{E}[(g_t^{\top} z_i) \epsilon_i] = \mathbb{E}[g_t^{\top} z_i] \mathbb{E}[\epsilon_i] = 0$ (independence).
|
| 395 |
+
The first term: $\mathbb{E}[g_t^{\top} B_t B_t^{\top} g_t] = \mathbb{E}[\|B_t^{\top} g_t\|^2] = \sum_{i=1}^k \mathbb{E}[(z_i^{\top} g_t)^2] = k \|g_t\|^2$ .
|
| 396 |
+
Therefore:
|
| 397 |
+
$$\mathbb{E}[\langle g_t, \Delta \theta_t \rangle | \theta_t] = -\eta \gamma k \|g_t\|^2 \tag{48}$$
|
| 398 |
+
#### Step 3: Second moment (detailed computation).
|
| 399 |
+
From Step 2, $B_t \mu_t^{(k)} = \gamma B_t B_t^{\top} \tilde{g} + \gamma \sum_i z_i \epsilon_i$ . Thus:
|
| 400 |
+
$$||B_t \mu_t^{(k)}||^2 = \gamma^2 ||B_t B_t^\top \tilde{g}||^2 + \gamma^2 \left\| \sum_i z_i \epsilon_i \right\|^2 + 2\gamma^2 \left\langle B_t B_t^\top \tilde{g}, \sum_i z_i \epsilon_i \right\rangle$$
|
| 401 |
+
(49)
|
| 402 |
+
Cross term vanishes: Since $\epsilon_i$ is independent of $B_t$ and $\tilde{g}$ , and $\mathbb{E}[\epsilon_i] = 0$ :
|
| 403 |
+
$$\mathbb{E}\left[\left\langle B_t B_t^{\top} \tilde{g}, \sum_i z_i \epsilon_i \right\rangle\right] = \sum_i \mathbb{E}[\epsilon_i] \cdot \mathbb{E}[\left\langle B_t B_t^{\top} \tilde{g}, z_i \right\rangle] = 0 \tag{50}$$
|
| 404 |
+
First term: We compute $\mathbb{E}[\|B_tB_t^{\top}\tilde{g}\|^2]$ by first conditioning on $B_t$ , then taking expectation over $B_t$ .
|
| 405 |
+
(A) Given $B_t$ , taking expectation over $\zeta$ (using $\mathbb{E}[\zeta] = 0$ ):
|
| 406 |
+
$$\mathbb{E}[\|B_t B_t^{\top} \tilde{g}\|^2 | B_t] = \|B_t B_t^{\top} g_t\|^2 + \mathbb{E}[\|B_t B_t^{\top} \zeta\|^2 | B_t]$$
|
| 407 |
+
(51)
|
| 408 |
+
where $\mathbb{E}[\|B_t B_t^\top \zeta\|^2 | B_t] = \operatorname{tr}((B_t B_t^\top)^2 \Sigma).$
|
| 409 |
+
(B) Taking expectation over $B_t$ . For $\mathbb{E}[\|B_t B_t^\top g\|^2]$ :
|
| 410 |
+
$$||B_t B_t^{\top} g||^2 = \sum_{i,j=1}^k (z_i^{\top} g) (z_j^{\top} g) (z_i^{\top} z_j)$$
|
| 411 |
+
(52)
|
| 412 |
+
Diagonal terms (i = j): $\mathbb{E}[(z_i^\top g)^2 ||z_i||^2] = (n+2)||g||^2$ (by Lemma B.4).
|
| 413 |
+
Off-diagonal terms $(i \neq j)$ : By independence of $z_i$ and $z_j$ :
|
| 414 |
+
$$\mathbb{E}[(z_i^\top g)(z_j^\top g)(z_i^\top z_j)] = \sum_{a,b,c} g_a g_b \mathbb{E}[(z_i)_a(z_i)_c] \mathbb{E}[(z_j)_b(z_j)_c]$$
|
| 415 |
+
(53)
|
| 416 |
+
$$= \sum_{a,b,c} g_a g_b \delta_{ac} \delta_{bc} = \sum_c g_c^2 = ||g||^2$$
|
| 417 |
+
(54)
|
| 418 |
+
Thus:
|
| 419 |
+
$$\mathbb{E}[\|B_t B_t^\top q\|^2] = k(n+2)\|q\|^2 + k(k-1)\|q\|^2 = k(n+k+1)\|q\|^2 = k\tilde{n}\|q\|^2$$
|
| 420 |
+
(55)
|
| 421 |
+
{14}------------------------------------------------
|
| 422 |
+
For $\mathbb{E}[\operatorname{tr}((B_t B_t^{\top})^2 \Sigma)]$ , let $P = B_t B_t^{\top}$ :
|
| 423 |
+
$$\operatorname{tr}(P^{2}\Sigma) = \sum_{i,j} (z_{i}^{\top} z_{j})(z_{i}^{\top} \Sigma z_{j}) \tag{56}$$
|
| 424 |
+
Diagonal (i = j): By Lemma B.4, $\mathbb{E}[||z||^2 \cdot z^{\top} \Sigma z] = (n + 2) \operatorname{tr}(\Sigma)$ .
|
| 425 |
+
Off-diagonal $(i \neq j)$ : By independence of $z_i$ and $z_i$ :
|
| 426 |
+
$$\mathbb{E}[(z_i^\top z_j)(z_i^\top \Sigma z_j)] = \sum_{a,b,c} \Sigma_{bc} \mathbb{E}[(z_i)_a(z_i)_b] \mathbb{E}[(z_j)_a(z_j)_c]$$
|
| 427 |
+
(57)
|
| 428 |
+
$$= \sum_{a,b,c} \Sigma_{bc} \delta_{ab} \delta_{ac} = \sum_{a} \Sigma_{aa} = \text{tr}(\Sigma)$$
|
| 429 |
+
(58)
|
| 430 |
+
Thus:
|
| 431 |
+
$$\mathbb{E}[\operatorname{tr}(P^2\Sigma)] = k(n+2)\operatorname{tr}(\Sigma) + k(k-1)\operatorname{tr}(\Sigma) = k\tilde{n} \cdot \operatorname{tr}(\Sigma)$$
|
| 432 |
+
(59)
|
| 433 |
+
Second term: For the finite-difference noise:
|
| 434 |
+
$$\mathbb{E}\left[\left\|\sum_{i=1}^{k} z_{i} \epsilon_{i}\right\|^{2}\right] = \sum_{i,j} \mathbb{E}[\epsilon_{i} \epsilon_{j}] \mathbb{E}[z_{i}^{\top} z_{j}] = \sum_{i} \sigma_{\varepsilon}^{2} \cdot n = kn\sigma_{\varepsilon}^{2}$$
|
| 435 |
+
$$(60)$$
|
| 436 |
+
Total second moment:
|
| 437 |
+
$$\mathbb{E}[\|B_t \mu_t^{(k)}\|^2] = \gamma^2 k \left( \tilde{n}(\|g_t\|^2 + \operatorname{tr}(\Sigma)) + n\sigma_{\varepsilon}^2 \right)$$
|
| 438 |
+
(61)
|
| 439 |
+
#### Step 4: Combining.
|
| 440 |
+
Substituting into the descent inequality:
|
| 441 |
+
$$\mathbb{E}[\|\Delta\theta_t\|^2] = \eta^2 \gamma^2 k\tilde{n} \|g_t\|^2 + \eta^2 \gamma^2 k(\tilde{n} \cdot \text{tr}(\Sigma) + n\sigma_{\varepsilon}^2)$$
|
| 442 |
+
(62)
|
| 443 |
+
Thus:
|
| 444 |
+
$$\mathbb{E}[\mathcal{L}(\theta_{t+1})] \le \mathbb{E}[\mathcal{L}(\theta_t)] - \eta \gamma k \|g_t\|^2 + \frac{L\eta^2 \gamma^2 k}{2} \left[ \tilde{n} \|g_t\|^2 + \tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2 \right]$$
|
| 445 |
+
(63)
|
| 446 |
+
Collecting terms in $||g_t||^2$ :
|
| 447 |
+
$$\mathbb{E}[\mathcal{L}(\theta_{t+1})] \leq \mathbb{E}[\mathcal{L}(\theta_t)] - \eta \gamma k \underbrace{\left(1 - \frac{L\eta \gamma \tilde{n}}{2}\right)}_{:=\beta(\eta)} \|g_t\|^2 + \frac{L\eta^2 \gamma^2 k(\tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2)}{2}$$
|
| 448 |
+
(64)
|
| 449 |
+
### Step 5: Telescoping sum.
|
| 450 |
+
When $\eta < \frac{2}{L\gamma\tilde{n}}$ , we have $\beta(\eta) > 0$ . Rearranging:
|
| 451 |
+
$$\|g_t\|^2 \le \frac{1}{\beta(\eta)\eta\gamma k} \left( \mathcal{L}(\theta_t) - \mathcal{L}(\theta_{t+1}) \right) + \frac{L\eta\gamma(\tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2)}{2\beta(\eta)}$$
|
| 452 |
+
(65)
|
| 453 |
+
Summing over t = 0, ..., T - 1 and dividing by T:
|
| 454 |
+
$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|g_t\|^2] \le \frac{\mathcal{L}(\theta_0) - \mathcal{L}^*}{\beta(\eta)\eta\gamma kT} + \frac{L\eta\gamma(\tilde{n} \cdot \operatorname{tr}(\Sigma) + n\sigma_{\varepsilon}^2)}{2\beta(\eta)}$$
|
| 455 |
+
(66)
|
| 456 |
+
{15}------------------------------------------------
|
| 457 |
+
#### **B.4. Proof of Expected Update Direction**
|
| 458 |
+
Proof of Theorem 3.7. Step 1: Posterior mean unbiasedness.
|
| 459 |
+
By Corollary 3.6, for coordinate-axis sampling $(d^{(i)} = e_i)$ , the posterior mean is:
|
| 460 |
+
$$\mu^{(k)} = \gamma Y = \gamma [y^{(1)}, \dots, y^{(k)}]^{\top}$$
|
| 461 |
+
(67)
|
| 462 |
+
where $\gamma = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_e^2}$ .
|
| 463 |
+
Each observation satisfies $y^{(i)} = e_i^{\top} \tilde{g} + \nu^{(i)} = \tilde{g}_i + \nu^{(i)}$ , where $\tilde{g} = B^{\top} g$ is the true normalized subspace gradient and $\nu^{(i)}$ is zero-mean noise.
|
| 464 |
+
Taking conditional expectation given B and g (so $\tilde{g}^* = B^{\top}g$ is fixed):
|
| 465 |
+
$$\mathbb{E}[y^{(i)}|B,g] = \tilde{g}_i^* + \mathbb{E}[\nu^{(i)}] = \tilde{g}_i^*$$
|
| 466 |
+
(68)
|
| 467 |
+
Thus:
|
| 468 |
+
$$\mathbb{E}[\mu^{(k)}|B,g] = \gamma \mathbb{E}[Y|B,g] = \gamma \tilde{g}^* = \gamma B^\top g \tag{69}$$
|
| 469 |
+
#### **Step 2: Conditional expectation of update.**
|
| 470 |
+
The parameter update is $\Delta\theta = -\eta B\mu^{(k)}$ . Taking conditional expectation:
|
| 471 |
+
$$\mathbb{E}[\Delta \theta | B] = -\eta B \mathbb{E}[\mu^{(k)} | B] = -\eta \gamma B B^{\mathsf{T}} g \tag{70}$$
|
| 472 |
+
#### Step 3: Expectation over subspace basis.
|
| 473 |
+
Taking expectation over $B = [z_1, \dots, z_k]$ where $z_i \stackrel{iid}{\sim} \mathcal{N}(0, I_n)$ :
|
| 474 |
+
$$\mathbb{E}[\Delta \theta] = -\eta \gamma \mathbb{E}[BB^{\top}]g \tag{71}$$
|
| 475 |
+
Computing $\mathbb{E}[BB^{\top}]$ :
|
| 476 |
+
$$BB^{\top} = \sum_{i=1}^{k} z_i z_i^{\top} \tag{72}$$
|
| 477 |
+
$$\mathbb{E}[BB^{\top}] = \sum_{i=1}^{k} \mathbb{E}[z_i z_i^{\top}] = \sum_{i=1}^{k} I_n = kI_n$$
|
| 478 |
+
(73)
|
| 479 |
+
Therefore:
|
| 480 |
+
$$\mathbb{E}[\Delta \theta] = -\eta \gamma k \cdot I_n \cdot q = -\eta \gamma k \cdot \nabla \mathcal{L}(\theta_0) \tag{74}$$
|
| 481 |
+
#### Step 4: Higher-order bias.
|
| 482 |
+
By Lemma B.1, the finite-difference estimator has $O(\varepsilon L)$ bias. After multiplication by $\varepsilon$ in the update, this becomes $O(\varepsilon^2 L)$ . Since $\varepsilon$ is typically small ( $\sim 10^{-3}$ ), we write:
|
| 483 |
+
$$\mathbb{E}[\Delta\theta] = -\eta \gamma k \cdot \nabla \mathcal{L}(\theta_0) + O(\varepsilon^3) \tag{75}$$
|
| 484 |
+
This proves that the expected update direction aligns with the negative gradient, with effective learning rate $\eta_{eff} = \eta \gamma k$ . $\Box$
|
| 485 |
+
#### <span id="page-15-1"></span>**B.5.** Adaptive Sampling Analysis
|
| 486 |
+
<span id="page-15-0"></span>**Theorem B.6** (Conditional Unbiasedness of Posterior Mean under Adaptive Sampling). Let $\mu^{(m)}$ denote the posterior mean after m adaptive sampling steps. Given the subspace basis B and the true gradient g, for **any** adaptive sampling strategy $\pi$ (where $d^{(j)}$ is $\mathcal{D}_{j-1}$ -measurable), we have:
|
| 487 |
+
$$\mathbb{E}[\mu^{(m)}|B,g] = \mathbb{E}\left[\Sigma^{(m)}D_m^{\top}R_m^{-1}D_m \mid B\right]\tilde{g}^* = \mathbb{E}[\Gamma_m|B] \cdot \tilde{g}^*$$
|
| 488 |
+
(76)
|
| 489 |
+
{16}------------------------------------------------
|
| 490 |
+
In particular, if $\Sigma^{(m)}$ is deterministic given B (e.g., coordinate-axis sampling or any strategy that depends only on $\mathcal{D}_{m-1}$ ), then:
|
| 491 |
+
$$\mathbb{E}[\mu^{(m)}|B, g, \mathcal{D}_m] = \Gamma_m \cdot \tilde{g}^* \tag{77}$$
|
| 492 |
+
where $\Gamma_m := I_k - \sigma_n^{-2} \Sigma^{(m)}$ is the shrinkage matrix.
|
| 493 |
+
#### **Proof.** Step 1: Expression for the posterior mean.
|
| 494 |
+
By the standard Bayesian linear regression formula:
|
| 495 |
+
$$\mu^{(m)} = \Sigma^{(m)} D_m^{\top} R_m^{-1} Y_m \tag{78}$$
|
| 496 |
+
where $Y_m = [y^{(1)}, \dots, y^{(m)}]^{\top}$ .
|
| 497 |
+
## Step 2: Computing the conditional expectation.
|
| 498 |
+
Note that $\Sigma^{(m)}$ and $D_m$ are both $\mathcal{D}_m$ -measurable. The key is to compute $\mathbb{E}[Y_m|B,g,\mathcal{D}_m]$ .
|
| 499 |
+
For each $y^{(j)}$ :
|
| 500 |
+
$$\mathbb{E}[y^{(j)}|B, g, \mathcal{D}_m] = \mathbb{E}[y^{(j)}|B, g, d^{(j)}] = d^{(j)\top}\tilde{g}^*$$
|
| 501 |
+
(79)
|
| 502 |
+
The first equality holds because given $d^{(j)}$ , $y^{(j)}$ is conditionally independent of other $d^{(i)}$ ( $i \neq j$ ).
|
| 503 |
+
Therefore:
|
| 504 |
+
$$\mathbb{E}[Y_m|B,g,\mathcal{D}_m] = D_m \tilde{g}^* \tag{80}$$
|
| 505 |
+
#### Step 3: Substituting into the posterior mean.
|
| 506 |
+
$$\mathbb{E}[\mu^{(m)}|B, g, \mathcal{D}_m] = \Sigma^{(m)} D_m^{\top} R_m^{-1} \mathbb{E}[Y_m | B, g, \mathcal{D}_m] = \Sigma^{(m)} D_m^{\top} R_m^{-1} D_m \tilde{g}^*$$
|
| 507 |
+
(81)
|
| 508 |
+
### Step 4: Simplifying the shrinkage matrix.
|
| 509 |
+
By the definition of $\Sigma^{(m)}$ :
|
| 510 |
+
$$(\Sigma^{(m)})^{-1} = \sigma_p^{-2} I_k + D_m^{\top} R_m^{-1} D_m$$
|
| 511 |
+
(82)
|
| 512 |
+
Therefore:
|
| 513 |
+
$$D_m^{\top} R_m^{-1} D_m = (\Sigma^{(m)})^{-1} - \sigma_p^{-2} I_k$$
|
| 514 |
+
(83)
|
| 515 |
+
Substituting:
|
| 516 |
+
$$\mathbb{E}[\mu^{(m)}|B,g,\mathcal{D}_m] = \Sigma^{(m)} \left[ (\Sigma^{(m)})^{-1} - \sigma_p^{-2} I_k \right] \tilde{g}^* = \left( I_k - \sigma_p^{-2} \Sigma^{(m)} \right) \tilde{g}^*$$
|
| 517 |
+
(84)
|
| 518 |
+
Defining the shrinkage matrix $\Gamma_m := I_k - \sigma_p^{-2} \Sigma^{(m)}$ , we obtain:
|
| 519 |
+
$$\mathbb{E}[\mu^{(m)}|B,g,\mathcal{D}_m] = \Gamma_m \tilde{g}^* \tag{85}$$
|
| 520 |
+
#### <span id="page-16-1"></span>**B.6.** Convergence Rate under Adaptive Sampling
|
| 521 |
+
<span id="page-16-0"></span>**Theorem B.7** (Convergence Rate under Adaptive Sampling). Under Assumptions 3.1, 3.2, and isotropic noise, consider the BSZO algorithm with adaptive sampling (m samples, where the first k samples use coordinate-axis sampling). Let $\tilde{n} = n + k + 1$ be the effective dimension. Suppose $\eta < \frac{2}{L\tilde{\gamma}\tilde{n}}$ , and define $\beta(\eta) := 1 - \frac{L\eta\tilde{\gamma}\tilde{n}}{2}$ . Then, after T iterations, the following inequality holds:
|
| 522 |
+
$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{\Delta_0}{\beta(\eta)\eta \bar{\gamma}kT} + \frac{L\eta \bar{\gamma}(\tilde{n}\sigma_g^2 + n\sigma_n^2)}{2\beta(\eta)},\tag{86}$$
|
| 523 |
+
where:
|
| 524 |
+
{17}------------------------------------------------
|
| 525 |
+
- $\bar{\gamma} := \min_t \bar{\gamma}_t \geq \gamma$ is the minimum effective shrinkage factor,
|
| 526 |
+
- $\sigma_g^2$ is the gradient noise variance, $\sigma_n^2$ is the finite-difference approximation noise variance,
|
| 527 |
+
- $\Delta_0 := \mathcal{L}(\theta_0) \mathcal{L}^*$ .
|
| 528 |
+
**Corollary B.8.** Let $\eta = \frac{1}{L\bar{\gamma}\tilde{n}}$ . Then $\beta = 1/2$ , and the convergence bound simplifies to:
|
| 529 |
+
$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{2L\bar{\gamma}\tilde{n}\Delta_0}{kT} + \sigma_g^2 + \frac{n}{\tilde{n}}\sigma_n^2. \tag{87}$$
|
| 530 |
+
Remark B.9. When $n \gg k$ , we have $\tilde{n} \approx n$ , so the noise floor $\sigma_g^2 + \frac{n}{\tilde{n}} \sigma_n^2 \approx \sigma_e^2$ becomes **decoupled** from the dimension n.
|
| 531 |
+
*Proof.* The proof follows the same structure as Theorem 4.2, with the fixed $\gamma$ replaced by the adaptive effective shrinkage factor $\bar{\gamma}_t$ .
|
| 532 |
+
### Step 1: Single-step descent.
|
| 533 |
+
By Assumption 3.1 (*L*-smoothness):
|
| 534 |
+
$$\mathcal{L}(\theta_{t+1}) \le \mathcal{L}(\theta_t) + \langle g_t, \Delta \theta_t \rangle + \frac{L}{2} ||\Delta \theta_t||^2$$
|
| 535 |
+
(88)
|
| 536 |
+
#### Step 2: Inner product term under adaptive sampling.
|
| 537 |
+
By the adaptive sampling theorem (Theorem B.6), the expected update direction satisfies:
|
| 538 |
+
$$\mathbb{E}[\langle g_t, \Delta \theta_t \rangle | \theta_t] = -\eta \mathbb{E}[\operatorname{tr}(\Gamma_m^{(t)})] \|g_t\|^2 = -\eta \bar{\gamma}_t k \|g_t\|^2$$
|
| 539 |
+
(89)
|
| 540 |
+
where $\bar{\gamma}_t = \frac{1}{k} \text{tr}(\Gamma_m^{(t)}) = 1 - \frac{U_m^{(t)}}{k \sigma_n^2}$ is the effective shrinkage factor at iteration t.
|
| 541 |
+
#### Step 3: Second moment (same structure as main theorem).
|
| 542 |
+
Following the same derivation as Theorem 4.2, with $\gamma$ replaced by $\bar{\gamma}_t$ :
|
| 543 |
+
$$\mathbb{E}[\|\Delta\theta_t\|^2 | \theta_t] = \eta^2 \bar{\gamma}_t^2 k \tilde{n} \|g_t\|^2 + \eta^2 \bar{\gamma}_t^2 k (\tilde{n}\sigma_g^2 + n\sigma_n^2)$$
|
| 544 |
+
(90)
|
| 545 |
+
The key observation is that the second moment structure remains unchanged because:
|
| 546 |
+
- The gradient noise $\sigma_g^2$ interacts with $B_t$ to produce the $\tilde{n}$ factor
|
| 547 |
+
- The finite-difference noise $\sigma_n^2$ is independent of $B_t$ , producing only the n factor
|
| 548 |
+
### Step 4: Combining and bounding.
|
| 549 |
+
Substituting into the descent inequality:
|
| 550 |
+
$$\mathbb{E}[\mathcal{L}(\theta_{t+1})] \le \mathbb{E}[\mathcal{L}(\theta_t)] - \eta \bar{\gamma}_t k \left(1 - \frac{L\eta \bar{\gamma}_t \tilde{n}}{2}\right) \mathbb{E}[\|g_t\|^2] + \frac{L\eta^2 \bar{\gamma}_t^2 k (\tilde{n} \sigma_g^2 + n \sigma_n^2)}{2}$$
|
| 551 |
+
(91)
|
| 552 |
+
Since $\bar{\gamma}_t \geq \bar{\gamma} := \min_t \bar{\gamma}_t \geq \gamma$ (by Lemma in Theorem B.6), and assuming $\eta < \frac{2}{L\bar{\gamma}\bar{n}}$ , we define $\beta(\eta) = 1 - \frac{L\eta\bar{\gamma}\bar{n}}{2} > 0$ .
|
| 553 |
+
Rearranging:
|
| 554 |
+
$$\mathbb{E}[\|g_t\|^2] \le \frac{1}{\beta(\eta)\eta\bar{\gamma}k} \left( \mathbb{E}[\mathcal{L}(\theta_t)] - \mathbb{E}[\mathcal{L}(\theta_{t+1})] \right) + \frac{L\eta\bar{\gamma}(\tilde{n}\sigma_g^2 + n\sigma_n^2)}{2\beta(\eta)}$$
|
| 555 |
+
(92)
|
| 556 |
+
#### **Step 5: Telescoping sum.**
|
| 557 |
+
{18}------------------------------------------------
|
| 558 |
+
Summing over t = 0, ..., T - 1 and dividing by T:
|
| 559 |
+
$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{\Delta_0}{\beta(\eta)\eta \bar{\gamma}kT} + \frac{L\eta \bar{\gamma}(\tilde{n}\sigma_g^2 + n\sigma_n^2)}{2\beta(\eta)}$$
|
| 560 |
+
(93)
|
| 561 |
+
For the special learning rate $\eta=\frac{1}{L\bar{\gamma}\tilde{n}}$ , we have $\beta=1/2$ , and the bound simplifies to:
|
| 562 |
+
$$\frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta_t)\|^2] \le \frac{2L\bar{\gamma}\tilde{n}\Delta_0}{kT} + \sigma_g^2 + \frac{n}{\tilde{n}}\sigma_n^2$$
|
| 563 |
+
(94)
|
| 564 |
+
When $n\gg k$ , we have $\tilde{n}\approx n$ , so the noise floor $\sigma_g^2+\frac{n}{\tilde{n}}\sigma_n^2\approx\sigma_e^2$ becomes decoupled from dimension n.
|
| 565 |
+
# <span id="page-18-1"></span><span id="page-18-0"></span>C. Experiment Details
|
| 566 |
+
Table 6. Number of training and validation samples for each dataset.
|
| 567 |
+
| SPLIT | SST-2 | BoolQ | RTE | COPA | WIC | WSC | СВ | TREC |
|
| 568 |
+
|---------------------|-------|-------|------|------|------|-----|-----|------|
|
| 569 |
+
| TRAINING VALIDATION | 1000 | 1000 | 1000 | 300 | 1000 | 450 | 200 | 1000 |
|
| 570 |
+
| | 500 | 500 | 500 | 100 | 500 | 100 | 50 | 500 |
|
| 571 |
+
Table 7. Hyperparameter configurations for fine-tuning RoBERTa-large.
|
| 572 |
+
| Algorithm | Hyperparameter | Values |
|
| 573 |
+
|-------------|-------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
|
| 574 |
+
| MeZO | Batch size Learning rate $\varepsilon$ | |
|
| 575 |
+
| MeZO-Adam | Batch size Learning rate $\varepsilon$ | |
|
| 576 |
+
| HiZOO | Batch size Learning rate $\varepsilon$ | $ \{1 \times 10^{-4}, 1 \times 10^{-5}, 1 \times 10^{-6}, 1 \times 10^{-7}, 1 \times 10^{-8} \} $ |
|
| 577 |
+
| LOZO | Batch size Learning rate ε Rank Interval | |
|
| 578 |
+
| BSZO | Batch size Learning rate $\varepsilon$ $k$ (Subspace dim) $m$ (Samples) | |
|
| 579 |
+
| BSZO-B | Batch size Learning rate $\varepsilon$ $k$ (Subspace dim) $m$ (Samples) | |
|
| 580 |
+
| All Methods | Early stopping patience | 4,000 |
|
| 581 |
+
{19}------------------------------------------------
|
| 582 |
+
Table 8. Hyperparameter configurations for fine-tuning OPT-1.3B.
|
| 583 |
+
| Algorithm | Hyperparameter | Values |
|
| 584 |
+
|-------------|-------------------------------------------------------------------------|--------|
|
| 585 |
+
| MeZO | Batch size Learning rate $\varepsilon$ | |
|
| 586 |
+
| MeZO-Adam | Batch size Learning rate $\varepsilon$ | |
|
| 587 |
+
| HiZOO | Batch size Learning rate $\varepsilon$ | |
|
| 588 |
+
| LOZO | Batch size Learning rate ε Rank Interval | |
|
| 589 |
+
| BSZO | Batch size Learning rate $\varepsilon$ $k$ (Subspace dim) $m$ (Samples) | |
|
| 590 |
+
| BSZO-B | Batch size Learning rate $\varepsilon$ $k$ (Subspace dim) $m$ (Samples) | |
|
| 591 |
+
| All Methods | Early stopping patience | 4,000 |
|
| 592 |
+
{20}------------------------------------------------
|
| 593 |
+
Table 9. Hyperparameter configurations for fine-tuning Mistral-7B.
|
| 594 |
+
| Algorithm | Hyperparameter | Values |
|
| 595 |
+
|-------------|-------------------------------------------------------------------------|--------|
|
| 596 |
+
| MeZO | Batch size Learning rate $\varepsilon$ | |
|
| 597 |
+
| HiZOO | Batch size Learning rate $\varepsilon$ | |
|
| 598 |
+
| LOZO | Batch size Learning rate $\varepsilon$ Rank Interval | |
|
| 599 |
+
| BSZO | Batch size Learning rate $\varepsilon$ $k$ (Subspace dim) $m$ (Samples) | |
|
| 600 |
+
| BSZO-B | Batch size Learning rate $\varepsilon$ $k$ (Subspace dim) $m$ (Samples) | |
|
| 601 |
+
| All Methods | Early stopping patience | 4,000 |
|
| 602 |
+
{21}------------------------------------------------
|
| 603 |
+
Table 10. Hyperparameter configurations for fine-tuning OPT-13B.
|
| 604 |
+
| Algorithm | Hyperparameter | Values |
|
| 605 |
+
|-------------|-------------------------------------------------------------------------|--------|
|
| 606 |
+
| MeZO | Batch size Learning rate $\varepsilon$ | |
|
| 607 |
+
| MeZO-Adam | Batch size Learning rate $\varepsilon$ | |
|
| 608 |
+
| HiZOO | Batch size Learning rate $\varepsilon$ | |
|
| 609 |
+
| LOZO | Batch size Learning rate ε Rank Interval | |
|
| 610 |
+
| BSZO | Batch size Learning rate $\varepsilon$ $k$ (Subspace dim) $m$ (Samples) | |
|
| 611 |
+
| BSZO-B | Batch size Learning rate $\varepsilon$ $k$ (Subspace dim) $m$ (Samples) | |
|
| 612 |
+
| All Methods | Early stopping patience | 4,000 |
|
| 613 |
+
# **D.** Raw Experimental Results
|
| 614 |
+
We provide the complete raw results of 5 independent runs for each method on RoBERTa-large in Table 11. The mean and standard deviation reported in Table 1 are computed from these results.
|
| 615 |
+
{22}------------------------------------------------
|
| 616 |
+
*Table 11.* Raw test accuracy (%) of 5 runs on RoBERTa-large (355M).
|
| 617 |
+
<span id="page-22-0"></span>
|
| 618 |
+
| MEZO<br>92.43<br>92.32<br>91.74<br>92.78<br>91.86<br>MEZO-ADAM<br>92.32<br>92.66<br>91.51<br>92.43<br>92.78<br>HIZOO<br>91.97<br>91.86<br>91.28<br>91.17<br>90.94<br>SST-2<br>LOZO<br>91.63<br>92.09<br>91.74<br>91.51<br>92.20<br>BSZO<br>92.89<br>92.43<br>92.78<br>92.43<br>92.78<br>BSZO-B<br>92.66<br>91.74<br>92.32<br>91.97<br>92.66<br>MEZO<br>69.68<br>68.95<br>65.70<br>65.34<br>62.09<br>MEZO-ADAM<br>62.09<br>64.26<br>64.62<br>64.98<br>62.09<br>HIZOO<br>59.21<br>63.18<br>57.76<br>56.68<br>59.21<br>RTE<br>LOZO<br>59.57<br>63.90<br>61.01<br>65.34<br>63.18<br>BSZO<br>68.59<br>68.23<br>69.68<br>66.07<br>66.43<br>BSZO-B<br>67.87<br>70.04<br>70.76<br>66.43<br>66.79<br>MEZO<br>87.50<br>85.71<br>91.07<br>76.79<br>89.29<br>MEZO-ADAM<br>82.14<br>83.93<br>80.36<br>82.14<br>76.79<br>HIZOO<br>78.57<br>75.00<br>75.00<br>78.57<br>75.00<br>CB<br>LOZO<br>87.50<br>82.14<br>82.14<br>89.29<br>80.36<br>BSZO<br>83.93<br>85.71<br>87.50<br>87.50<br>83.93<br>BSZO-B<br>85.71<br>83.93<br>82.14<br>85.71<br>83.93<br>MEZO<br>49.69<br>58.31<br>52.98<br>57.52<br>57.52<br>MEZO-ADAM<br>54.39<br>51.10<br>54.70<br>46.55<br>57.52<br>HIZOO<br>50.31<br>54.39<br>51.10<br>54.70<br>57.52<br>WIC<br>LOZO<br>54.55<br>54.55<br>51.88<br>55.17<br>54.86<br>BSZO<br>57.68<br>57.52<br>55.64<br>54.70<br>54.70<br>BSZO-B<br>57.99<br>57.99<br>55.80<br>56.58<br>57.68<br>MEZO<br>81.20<br>86.40<br>86.40<br>86.20<br>86.60<br>MEZO-ADAM<br>84.80<br>74.20<br>71.40<br>83.00<br>80.60<br>HIZOO<br>65.20<br>65.20<br>65.20<br>62.20<br>59.40<br>TREC<br>LOZO<br>80.40<br>74.80<br>77.20<br>79.20<br>77.20<br>BSZO<br>83.40<br>84.60<br>84.40<br>83.80<br>84.60 | DATASET | METHOD | RUN 1 | RUN 2 | RUN 3 | RUN 4 | RUN 5 |
|
| 619 |
+
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|--------|-------|-------|-------|-------|-------|
|
| 620 |
+
| | | | | | | | |
|
| 621 |
+
| | | | | | | | |
|
| 622 |
+
| | | | | | | | |
|
| 623 |
+
| | | | | | | | |
|
| 624 |
+
| | | | | | | | |
|
| 625 |
+
| | | | | | | | |
|
| 626 |
+
| | | | | | | | |
|
| 627 |
+
| | | | | | | | |
|
| 628 |
+
| | | | | | | | |
|
| 629 |
+
| | | | | | | | |
|
| 630 |
+
| | | | | | | | |
|
| 631 |
+
| | | | | | | | |
|
| 632 |
+
| | | | | | | | |
|
| 633 |
+
| | | | | | | | |
|
| 634 |
+
| | | | | | | | |
|
| 635 |
+
| | | | | | | | |
|
| 636 |
+
| | | | | | | | |
|
| 637 |
+
| | | | | | | | |
|
| 638 |
+
| | | | | | | | |
|
| 639 |
+
| | | | | | | | |
|
| 640 |
+
| | | | | | | | |
|
| 641 |
+
| | | | | | | | |
|
| 642 |
+
| | | | | | | | |
|
| 643 |
+
| | | | | | | | |
|
| 644 |
+
| | | | | | | | |
|
| 645 |
+
| | | | | | | | |
|
| 646 |
+
| | | | | | | | |
|
| 647 |
+
| | | | | | | | |
|
| 648 |
+
| | | | | | | | |
|
| 649 |
+
| | | | | | | | |
|
| 650 |
+
| | | BSZO-B | 85.80 | 84.60 | 85.20 | 82.20 | 86.20 |
|
| 651 |
+
<span id="page-22-1"></span>*Table 12.* Full ablation studies on OPT-1.3B (fp32). (a) Effect of subspace dimension k with m = k. (b) Effect of observation count m with m = k + 1. (c) Noise-free adaptive sampling. Best per row in bold.
|
| 652 |
+
| | (a) Effect of k | | | (b) Effect of m | | | (c) NF-Adaptive | |
|
| 653 |
+
|---|-----------------|-------|---|-----------------|-------|---|-----------------|-------|
|
| 654 |
+
| k | SST-2 | RTE | k | SST-2 | RTE | k | SST-2 | RTE |
|
| 655 |
+
| 1 | 92.32 | 60.29 | 1 | 91.74 | 61.37 | 1 | 91.28 | 63.58 |
|
| 656 |
+
| 2 | 92.78 | 64.26 | 2 | 92.43 | 66.79 | 2 | 92.43 | 65.34 |
|
| 657 |
+
| 4 | 92.66 | 67.51 | 4 | 93.58 | 66.43 | 4 | 93.12 | 66.07 |
|
| 658 |
+
| 8 | 93.23 | 66.07 | 8 | 93.23 | 68.59 | 8 | 93.35 | 69.31 |
|