Title: ZKLoRA: Efficient Zero-Knowledge Proofs for LoRA Verification

URL Source: https://arxiv.org/html/2501.13965

Published Time: Mon, 27 Jan 2025 01:01:21 GMT

![Bagel logo](https://arxiv.org/html/2501.13965v1/extracted/6142167/bagel-logo-bw.png)

Bidhan Roy, Peter Potash, Marcos Villagra

Bagel Research Team[^1]

(January 21, 2025)

[^1]: Bagel is a research lab using cryptography to make open source AI monetizable.
###### Abstract

Low-Rank Adaptation (LoRA) is a widely adopted method for customizing large-scale language models. In distributed, untrusted training environments, an open source base model user may want to use LoRA weights created by an external contributor, leading to two requirements: (1) the base model user must confirm that the LoRA weights are effective when paired with the intended base model, and (2) the LoRA contributor must keep their proprietary weights private until compensation is assured.

We present ZKLoRA, a zero-knowledge verification protocol that relies on succinct proofs and our novel Multi-Party Inference procedure to verify LoRA–base model compatibility without exposing LoRA weights. ZKLoRA produces _deterministic_ correctness guarantees and validates each LoRA module in only 1–2 seconds on state-of-the-art large language models. This low-latency approach enables nearly real-time verification and promotes secure collaboration among geographically decentralized teams and contract-based training pipelines. The protocol ensures that the delivered LoRA module works as claimed, safeguarding the contributor's intellectual property while providing the base model user with verification of compatibility and lineage.
1 Introduction
--------------

Large Language Models (LLMs) have attained remarkable success [[1](https://arxiv.org/html/2501.13965v1#bib.bib1), [2](https://arxiv.org/html/2501.13965v1#bib.bib2)], but verifying fine-tuned modifications such as LoRA [[4](https://arxiv.org/html/2501.13965v1#bib.bib4)] in an untrusted, distributed training environment can be difficult when the updated weights must remain private. Traditionally, one might re-run an entire forward pass or inspect thousands of parameters to ensure correctness, which is infeasible for massive models. ZKLoRA addresses this by generating a zero-knowledge proof of correctness for each LoRA module, guaranteeing that the private LoRA genuinely fits the base model. Crucially, verifying each LoRA module in ZKLoRA takes only about 1–2 seconds, even for multi-billion-parameter, state-of-the-art base models.
2 Preliminary Results
---------------------

We benchmarked ZKLoRA across several LLMs and smaller models with different numbers of LoRA modules. The input for inference is a batch of size 3 with sequence length 5. Our central question is how verification times, as well as settings and proof generation times, grow with the number of LoRA modules, while also considering each LoRA's average parameter size. Figure [1](https://arxiv.org/html/2501.13965v1#S2.F1) and Table [1](https://arxiv.org/html/2501.13965v1#S2.T1) detail this trade-off.[^2]

[^2]: Note that the number of LoRA modules in a given model is not purely a function of the number of layers – it also depends on which weight matrices within each layer are targeted. For example, targeting just one matrix per layer (e.g., the Query matrix in the attention block) yields one LoRA per layer, whereas targeting the Query, Key, and Value matrices yields three per layer, for a total of $3 \times num\_layers$ LoRA modules.
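The counting rule in the footnote can be sketched in a couple of lines (the module names `q_proj`, `k_proj`, `v_proj` are illustrative, following common transformer naming conventions rather than anything in the paper):

```python
# Counting LoRA modules: one adapter per targeted weight matrix per layer.
def count_lora_modules(num_layers, target_matrices):
    """Number of LoRA adapters when each listed matrix is targeted in every layer."""
    return num_layers * len(target_matrices)

# Targeting only the Query matrix in each of 32 layers:
assert count_lora_modules(32, ["q_proj"]) == 32
# Targeting Query, Key, and Value yields three adapters per layer:
assert count_lora_modules(32, ["q_proj", "k_proj", "v_proj"]) == 96
```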
![Refer to caption](https://arxiv.org/html/2501.13965v1/x1.png)

Figure 1: Total verification time (seconds) vs. number of LoRA modules, with dot size reflecting average LoRA size.

Table 1: Model benchmark results for settings and proof generation time, averaged by number of LoRA modules.

From Figure [1](https://arxiv.org/html/2501.13965v1#S2.F1), we see that models with a higher LoRA count (e.g., 80 modules for a large 70B model) do incur a larger total verification time overall. However, the slope remains modest thanks to ZKLoRA's succinct design: even with each module verified individually at around 1–2 seconds, verifying 80 modules takes only a few minutes, which is still practical for real-world usage.
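The back-of-the-envelope arithmetic behind that claim, using the 1–2 second per-module figure from the benchmarks:

```python
# Rough total verification time for 80 LoRA modules at 1-2 s per module.
n_modules = 80
low_s, high_s = n_modules * 1.0, n_modules * 2.0   # total seconds
assert (low_s, high_s) == (80.0, 160.0)
# i.e., roughly 1.3 to 2.7 minutes end to end
assert round(high_s / 60, 1) == 2.7
```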
Table [1](https://arxiv.org/html/2501.13965v1#S2.T1) similarly shows that average _proof generation_ and _settings time_ scale with the size of a LoRA module, which, combined with the number of modules, gives the total times. These two steps (proof generation on the LoRA contributor's side and cryptographic circuit setup) can become more expensive, yet remain feasible in decentralized settings or paid contract relationships. The _Base Model User_, meanwhile, benefits from the relatively short verification overhead.

Overall, these results confirm that ZKLoRA can handle large-scale deployments of LoRA modules with minimal overhead for verifying correctness, underscoring the viability of repeated or multi-adapter scenarios in large-scale LLM pipelines.
Figure 2: Three-Step ZKLoRA Process (Vertical). (1) The Base Model User and LoRA Contributor exchange "Base Acts" and "LoRA Acts" in a multi-party inference. (2) The LoRA Contributor generates cryptographic proofs of correctness. (3) The Base Model User verifies these proofs, ensuring correct LoRA alignment without revealing private adapter weights.

Figure 3: Flow in a Multi-Party Inference scenario between the local base model and remote LoRA weights. The base model performs a local forward pass, computing $\mathbf{base\_out} = \mathbf{W}\mathbf{x}$. In parallel, the input $\mathbf{x}$ is sent to the remote LoRA module, which returns $\mathbf{\Delta} = \mathbf{B}\mathbf{A}\mathbf{x}$, where $\mathbf{B}, \mathbf{A}$ are the low-rank fine-tuned matrices. The final output is $\mathbf{base\_out} + \mathbf{\Delta}$.
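The split computation in Figure 3 can be sketched in NumPy (illustrative shapes only; in the real protocol the two sides communicate over a network and the contributor never reveals $\mathbf{B}$ or $\mathbf{A}$):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 8, 2                  # layer dims and LoRA rank (illustrative sizes)
W = rng.standard_normal((d, k))    # public base weight, held by the base model user
B = rng.standard_normal((d, r))    # private low-rank factors, held by the contributor
A = rng.standard_normal((r, k))

def base_side(x):
    # Local forward pass: base_out = W x
    return W @ x

def lora_side(x):
    # Remote call: the contributor returns Delta = B A x without exposing B, A
    return B @ (A @ x)

x = rng.standard_normal(k)
combined = base_side(x) + lora_side(x)   # base_out + Delta

# Mathematically equivalent to running the merged weight (W + BA) locally:
assert np.allclose(combined, (W + B @ A) @ x)
```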
3 ZKLoRA
--------

ZKLoRA's design reflects the synergy between LoRA's parameter efficiency and zero-knowledge cryptographic protocols: LoRA significantly shrinks the parameter footprint being proven, while the zero-knowledge aspect maintains the confidentiality of the contributor's proprietary weights. By merging these ideas, ZKLoRA enables trust-driven collaboration across decentralized infrastructures, contract-based training, and other scenarios where proof of correctness is essential but the LoRA weights remain private. Our approach also builds on incremental verification concepts [[7](https://arxiv.org/html/2501.13965v1#bib.bib7)] and advanced proof systems such as Nova [[6](https://arxiv.org/html/2501.13965v1#bib.bib6)] and HyperNova [[5](https://arxiv.org/html/2501.13965v1#bib.bib5)], which allow us to scale proofs to large neural networks. Ultimately, this combination provides a practical pipeline for parameter-efficient fine-tuning while verifying correctness in a succinct and minimally intrusive manner.

We implement a protocol that not only supports multi-party inference with partial activations exchanged between a base model user and a LoRA contributor, but also produces cryptographic proofs that the LoRA transforms are valid. The overall workflow is shown in Figure [2](https://arxiv.org/html/2501.13965v1#S2.F2), while Figure [3](https://arxiv.org/html/2501.13965v1#S2.F3) gives a deeper look at how Multi-Party Inference with LoRAs functions within an individual module. Pseudocode for ZKLoRA is given in Algorithm [1](https://arxiv.org/html/2501.13965v1#alg1). To begin the Multi-Party Inference, the base model user feeds the dataset chosen for inference into the base model's first module. The forward pass continues through the base model until it hits a module that uses remote LoRA weights; when this occurs, the base model user sends partial activations to the LoRA contributor for processing. These exchanged activations, shown conceptually in Figure [2](https://arxiv.org/html/2501.13965v1#S2.F2), correspond to "Base Acts" from the base model user and "LoRA Acts" from the LoRA contributor.
After the multi-party inference finishes, the LoRA contributor shifts to a proof generation phase. At this stage, each LoRA module is compiled into a constraint system describing the LoRA transformations, and a key setup procedure yields the proving key, the verification key, and, if the underlying zero-knowledge scheme requires one, a structured reference string. The contributor then creates a "witness" by running partial activations through these constraints and finally produces the proof files.

Once proof generation is done, the base model user receives each proof and runs a fast verification procedure, typically requiring about 1–2 seconds per module. As Figure [3](https://arxiv.org/html/2501.13965v1#S2.F3) suggests, this does not require the LoRA contributor to reveal the actual low-rank matrices. Instead, the contributor only sends updates and proofs that these updates conform to the declared LoRA transformations. If any single proof fails, the base model user can reject the entire LoRA submission; otherwise, the system is accepted as consistent with the underlying base model.
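The accept/reject logic of the verification step can be sketched as follows. This is a minimal placeholder, not ZKLoRA's actual API: `Proof`, `verify_proof`, and `accept_lora_submission` are hypothetical names, and the `valid` flag stands in for a real succinct-proof check.

```python
from dataclasses import dataclass

@dataclass
class Proof:
    module_name: str
    valid: bool  # stand-in for the real succinct-proof payload and check

def verify_proof(proof: Proof) -> bool:
    # Placeholder for the ~1-2 s cryptographic verification performed per module.
    return proof.valid

def accept_lora_submission(proofs) -> bool:
    """Reject the entire submission if any single module proof fails."""
    return all(verify_proof(p) for p in proofs)

ok = [Proof("layer0.q_proj", True), Proof("layer0.v_proj", True)]
assert accept_lora_submission(ok) is True
assert accept_lora_submission(ok + [Proof("layer1.q_proj", False)]) is False
```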
Algorithm 1: ZKLoRA Pseudocode

```
Input:  BaseModel (public), LoRAModel (private), Data
Output: Verified outputs or Reject

Step 1: Multi-Party Inference
for each submodule s in BaseModel do
    if s contains LoRA layers then
        (a) run multi-party inference with the LoRA Contributor for submodule s
    else
        (b) run local inference on submodule s (no remote calls)
    end if
end for

Step 2: Proof Generation
for each LoRA module m in LoRAModel do
    (1) Circuit Compilation: parse LoRA-augmented layers, produce a cryptographic circuit
    (2) Key Setup: generate settings, proving key, verification key, and SRS if needed
    (3) Witness Creation: run partial activations through the circuit, record wire values
    (4) Proof: construct zero-knowledge proof P_m for module m
end for

Step 3: Verification
for each proof P_m do
    if Verify(P_m) fails then
        return Reject
    end if
end for
return Verified outputs
```
4 Related Work
--------------

### 4.1 Low-Rank Adaptation (LoRA)

Low-Rank Adaptation (LoRA) [[4](https://arxiv.org/html/2501.13965v1#bib.bib4)] is a technique for parameter-efficient fine-tuning of large language models (LLMs) that injects small, low-rank adapter matrices into specific layers of a pre-trained model. By isolating the fine-tuning process to these low-rank components, LoRA drastically reduces memory overhead compared to full-model fine-tuning. This design choice is especially appealing for massive LLMs, where training or even storing all parameters can be prohibitive [[3](https://arxiv.org/html/2501.13965v1#bib.bib3)].
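The memory savings are easy to quantify: a rank-$r$ update to a $d \times k$ weight matrix trains $r(d + k)$ parameters instead of $dk$. A quick illustration (4096×4096 is a typical projection-matrix size in a large LLM, and rank 8 a common LoRA choice; both numbers are illustrative, not from the paper):

```python
# Trainable parameters for a rank-r LoRA update on a d x k weight matrix:
# full fine-tuning trains d*k parameters, LoRA trains r*(d + k).
def lora_params(d, k, r):
    return r * (d + k)

d = k = 4096                    # a typical LLM projection matrix
full = d * k                    # full fine-tuning
adapter = lora_params(d, k, r=8)
assert adapter == 65_536        # 64K trainable parameters...
assert full == 16_777_216       # ...vs ~16.8M for full fine-tuning
assert adapter / full < 0.004   # under 0.4% of the original count
```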
Beyond the clear advantage of reduced storage, LoRA also facilitates swapping multiple domain-specific adapters into a single base model, making it straightforward to maintain separate skill sets without instantiating an entire new copy of the model. These adapters can target specialized tasks (e.g., medical or legal text) with minimal overhead, driving LoRA's widespread adoption. Yet verifying that a proprietary LoRA truly aligns with a base model (without revealing the adapter) remains problematic: precisely the gap ZKLoRA fills.

### 4.2 Incrementally Verifiable Computation

In a decentralized world, trust is a resource that is hard to achieve. In decentralized computation, we need to make sure that computations are both done and done correctly. In a seminal paper, Valiant (2008) [[7](https://arxiv.org/html/2501.13965v1#bib.bib7)] showed that proofs of knowledge can be used to assert the correct execution of general computations. That is, if $M$ is a machine that runs for $t$ steps producing a sequence of configurations $c_0, c_1, \dots, c_t$, then there exists an efficient and effective way to produce a computationally sound proof for the computation $c_0 \xrightarrow{t} c_t$. This idea is referred to as Incrementally Verifiable Computation (IVC).

The main goal of IVC is to produce compact, updatable proofs of correctness for a sequence of computations, so that each new step can be verified on its own while building on the guarantees of previous steps. This significantly reduces the verification overhead for long or evolving computations, which is invaluable in scenarios like decentralized networks, outsourced computation, and any application requiring frequent correctness checks.

Kothapalli et al. (2022) [[6](https://arxiv.org/html/2501.13965v1#bib.bib6)] introduced the proof system Nova and the idea of recursive proofs, which are proofs that can "prove the correctness of other proofs." Recursive proof composition is key to IVC: each proof attests to the correctness of both a step's output and the validity of the previous step's proof.
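The recursive invariant can be written schematically (a sketch of the idea in the notation above, not Valiant's exact formalism): the $i$-th proof certifies one step of $M$ together with the validity of the previous proof, so the final proof alone certifies the whole run.

```latex
% Each proof \pi_i attests to one computation step and to the previous proof:
\pi_i \;\text{certifies}\;
    \bigl( c_{i-1} \xrightarrow{1} c_i \bigr)
    \;\wedge\; \mathrm{Verify}(\pi_{i-1}) = 1,
\qquad \text{so that } \pi_t \text{ alone certifies } c_0 \xrightarrow{t} c_t .
```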
HyperNova [[5](https://arxiv.org/html/2501.13965v1#bib.bib5)] is a novel recursive argument system optimized for customizable constraint systems (CCS) that generalizes and improves upon prior approaches such as Nova. It achieves efficiency through a folding scheme that minimizes the prover's cryptographic costs, and it achieves zero knowledge without relying on zkSNARKs. An IVC system allows the construction of proofs in zero knowledge, where the proofs reveal no information about the underlying computation or its inputs beyond the validity of the claim [[7](https://arxiv.org/html/2501.13965v1#bib.bib7)].

5 Conclusion
------------

ZKLoRA provides a fast, robust mechanism to ensure that private LoRA modules remain effective when combined with a large base model. Our evaluations indicate that each LoRA module's forward computation can be verified in less than 2 seconds, even for multi-billion-parameter LLMs. This efficiency bridges the gap between privacy-preserving LoRA development and practical, real-time validation in large-scale deployments. The most immediate future work is adding polynomial commitments to the base model's activations (those sent as input to the LoRA Contributor), which would take us one step closer to end-to-end verifiability of inference for LoRA-finetuned models. Other avenues include integrating multi-contributor LoRAs, more advanced zero-knowledge proofs for further performance gains, and partial data-privacy frameworks that shield user inputs as well as LoRA parameters.
References
----------

* [1] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners, 2020.
* [2] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2018.
* [3] Ning Ding, Zhuosheng Zheng, Fei Tan, Yuxian Chen, Xipeng Xie, Zhiyang Liu, Xinze Dai, et al. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models, 2022.
* [4] Edward Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models, 2021.
* [5] Abhiram Kothapalli and Srinath Setty. HyperNova: Recursive arguments for customizable constraint systems. In Annual International Cryptology Conference, pages 345–379. Springer, 2024.
* [6] Abhiram Kothapalli, Srinath Setty, and Ioanna Tzialla. Nova: Recursive zero-knowledge arguments from folding schemes. In Annual International Cryptology Conference, pages 359–388. Springer, 2022.
* [7] Leslie G. Valiant. Incrementally verifiable computation or IVC. [https://dash.harvard.edu/handle/1/5026950](https://dash.harvard.edu/handle/1/5026950), 2008. Harvard University, Technical Report.