Title: Differentially Private Attention Computation
URL Source: https://arxiv.org/html/2305.04701
License: CC BY 4.0. arXiv:2305.04701v2 [cs.LG], 14 Oct 2024.

Differentially Private Attention Computation

Yeqi Gao (a916755226@gmail.com), The University of Washington; Zhao Song (magic.linuxkde@gmail.com), The Simons Institute for the Theory of Computing at the University of California, Berkeley; Xin Yang (yangxin199207@gmail.com), The University of Washington; Yufa Zhou (yufazhou@seas.upenn.edu), University of Pennsylvania.
Large language models (LLMs), especially those based on the Transformer architecture, have had a profound impact on various aspects of daily life, such as natural language processing, content generation, research methodologies, and more. Nevertheless, a crucial concern regarding the inference results of large language models is the issue of security and privacy. Given that large language models can generate results that may leak sensitive, confidential, or copyrighted information in many scenarios, it is crucial to compute the attention matrix with provable privacy guarantees, as attention is all you need.
In this work, we propose a novel and efficient algorithm for approximating the attention matrix while providing differential privacy (DP) guarantees. To achieve this, we build on recent advancements in fast attention computation and differentially private matrix publishing.
1 INTRODUCTION
The development of large language models (LLMs) has been rapid and significant in recent years, with numerous breakthroughs and advancements in the field. BERT [16] achieved state-of-the-art performance on a wide range of language tasks by training on a massive amount of text data in 2018. Since then, the GPT (Generative Pre-trained Transformer) family of models has further advanced the field. GPT-2 [66] and GPT-3 [8], with billions of parameters, are able to generate highly coherent and human-like text. Other notable LLMs include XLNet [93], which addresses some of the limitations of BERT [16], and RoBERTa [46], which improves upon BERT [16]’s training methods to achieve better performance. The rapid development of LLMs has been fueled by advancements in hardware, software, and data availability, allowing researchers and companies to train and deploy these models at an unprecedented scale.
As a result of their development, LLMs have found a wide range of applications in various fields. In the field of natural language processing (NLP) [80, 63, 16, 8], LLMs are used for tasks such as language translation [34], sentiment analysis [76], and creative writing [58]. In addition, LLMs are being used to develop chatbots and virtual assistants that can understand and respond to natural language queries [8, 58]. Outside of NLP, LLMs are being used in scientific research to generate new hypotheses and discover novel patterns in large datasets. The applications of LLMs are expanding rapidly, and it is likely that they will play an increasingly important role in many fields, such as computer vision [62], robotics [40], and autonomous vehicles [97, 7].
Despite their many benefits, large language models (LLMs) have the potential to pose several privacy and security risks [67, 79, 38, 23]. One concern is the risk of data breaches, as LLMs require large amounts of data to be trained and the data used for training is often collected from public sources without the explicit consent of the individuals involved. This data could include sensitive personal information, such as medical records, financial data, or personally identifiable information [75, 24]. Furthermore, LLMs can potentially be used to generate convincing fake text [66, 64], which could be used for malicious purposes such as phishing attacks, spreading misinformation, or impersonating individuals online. Additionally, LLMs can be used for so-called "model inversion" attacks [26], where an attacker can extract private information about individuals by querying the model. For example, an attacker could use the model to infer sensitive information, such as an individual's political views or sexual orientation, based on their text input. These privacy and security concerns highlight the need for ethical considerations and responsible use of LLMs, as well as for the development of robust security mechanisms to protect against potential attacks.
Given that attention mechanisms are at the core of models like the Transformer [80], considering privacy in attention computation is crucial. Attention mechanisms process input data that may contain sensitive information, and the computed attention weights could inadvertently reveal this information if exposed. Specifically, if sensitive data are encoded within the attention weights, compromising these weights could lead to the disclosure of personal identifying information or trade secrets.
Recent studies have focused on privacy issues related to Transformers and their attention mechanisms. For example, [79] showed that learned conditional generative models might output samples similar to copyrighted data in their training set, leading to copyright infringement issues. The proposed solution is near access-freeness (NAF), which involves defining generative models that do not access potentially copyrighted data. [79] provide formal definitions of NAF and generative model learning algorithms that produce models with strong bounds on the probability of sampling protected content. While NAF provides formal guarantees against such infringements, it also underscores the need to secure the attention mechanisms within Transformers to prevent privacy breaches related to sensitive information embedded in attention weights.
Moreover, the potential harms of LLMs extend to intellectual property violations and the dissemination of misinformation. To mitigate these issues, [38] developed a watermarking framework for proprietary language models. This framework embeds invisible signals into generated text that can be algorithmically detected, promoting accountability and traceability. Although this approach addresses the outputs of LLMs, it emphasizes the broader necessity of safeguarding the internal computations—like attention mechanisms—to enhance overall security.
Building upon the discussed challenges, our research focuses on addressing privacy and security issues in attention computation. Unlike previous works [95, 3, 11, 56, 19, 28, 21, 30], our work concentrates on the static setting of attention computation. To be specific, static computation is a technique used in implementing attention mechanisms in deep learning models, especially in the field of natural language processing. It involves computing the attention weights between the encoder and decoder only once and reusing them during decoding, rather than dynamically computing the attention weights for each time step during decoding. This approach enhances computational efficiency and reduces overall decoding time, especially for longer sequences, while also strengthening privacy and security in attention-based models.
1.1 Key Definitions
Here, let us recall the formal mathematical definition of attention computation in the static setting.
Definition 1.1 (Attention computation, see [95, 3, 11] as examples).
Given matrices 𝑄 ∈ ℝ^{𝑛×𝑑}, 𝐾 ∈ ℝ^{𝑛×𝑑}, and 𝑉 ∈ ℝ^{𝑛×𝑑}, the goal is to compute

𝖠𝗍𝗍(𝑄, 𝐾, 𝑉) := 𝐷^{−1} 𝐴 𝑉

where 𝐴 := exp(𝑄𝐾^⊤) ∈ ℝ^{𝑛×𝑛} (we apply exp(⋅) to the matrix entry-wise), and 𝐷 := diag(𝐴𝟏_𝑛).
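As a concrete reference point, the computation in Definition 1.1 can be sketched directly in NumPy (a minimal illustration of the formula above, not an algorithm from this paper):

```python
import numpy as np

def attention(Q, K, V):
    """Att(Q, K, V) = D^{-1} A V with A = exp(Q K^T) and D = diag(A 1_n)."""
    A = np.exp(Q @ K.T)            # entry-wise exponential of the logit matrix
    D_inv = 1.0 / A.sum(axis=1)    # D = diag(A 1_n); keep only the diagonal
    return D_inv[:, None] * (A @ V)

# Tiny example: n = 4 rows, d = 3 features
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 3)) for _ in range(3))
out = attention(Q, K, V)
```

Note that multiplying by 𝐷^{−1} is exactly row-normalization of exp(𝑄𝐾^⊤), i.e., a row-wise softmax.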
Following the setting of [21], we consider the symmetric attention approximation problem, where we treat 𝑄 = 𝐾 and ignore the effect of 𝑉. The formal formulation is:
Definition 1.2.
Given 𝑋 ∈ ℝ^{𝑛×𝑑}, the goal is to find some 𝑌 ∈ ℝ^{𝑛×𝑚} such that

‖𝐷(𝑋𝑋^⊤)^{−1} exp(𝑋𝑋^⊤) − 𝐷(𝑌𝑌^⊤)^{−1} exp(𝑌𝑌^⊤)‖ ≤ small

where ‖⋅‖ is some certain norm and 𝐷(𝑋𝑋^⊤) := diag(exp(𝑋𝑋^⊤) ⋅ 𝟏_𝑛).
One recent work [79] chooses the angle of near access-freeness to study the privacy concerns in LLMs. However, in this work, we use the differential privacy concept [22]; the formal definition of differential privacy can be written as follows.
Definition 1.3 (Differential Privacy [20, 18]).
A randomized mechanism ℳ is (𝜖, 𝛿)-differentially private if for any event 𝒪 ⊆ Range(ℳ) and for any pair of neighboring databases 𝑆, 𝑆′ that differ in a single data element, one has
Pr [ ℳ ( 𝑆 ) ∈ 𝒪 ] ≤ exp ( 𝜖 ) ⋅ Pr [ ℳ ( 𝑆 ′ ) ∈ 𝒪 ] + 𝛿 .
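For intuition, the classical Gaussian mechanism gives a minimal instance of Definition 1.3; the query, sensitivity, and noise calibration below are standard textbook choices for illustration, not this paper's mechanism:

```python
import numpy as np

# Illustration of Definition 1.3 via the classical Gaussian mechanism on a
# scalar query (NOT the paper's mechanism): releasing q(S) = sum(S) for
# S in [0, 1]^n has L2-sensitivity 1, and adding N(0, sigma^2) noise with
# sigma = sqrt(2 ln(1.25/delta)) / epsilon makes the release (eps, delta)-DP.
def gaussian_mechanism(data, epsilon, delta, rng):
    sensitivity = 1.0  # neighboring S, S' differ in one element of [0, 1]
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return float(np.sum(data) + rng.normal(0.0, sigma))

rng = np.random.default_rng(1)
S = np.array([0.2, 0.9, 0.5])
release = gaussian_mechanism(S, epsilon=1.0, delta=1e-5, rng=rng)
```

The paper's setting replaces the scalar query by a matrix-valued map ℳ(𝑋) and the additive noise by a Gaussian sampling mechanism (Section 5.2).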
Finally, we are ready to define our differentially private attention computation problem.
Definition 1.4 (General Differentially Private Attention).
Let 𝑓 : ℝ → ℝ denote some fixed function. For a given matrix 𝑋 ∈ ℝ^{𝑛×𝑑} with 𝑑 ≫ 𝑛, let ℳ denote some mapping that maps ℝ^{𝑛×𝑑} to ℝ^{𝑛×𝑛}, and let 𝐴 := ℳ(𝑋). For parameters 𝜖, 𝛿 ∈ (0, 0.1), the goal is to design an (𝜖, 𝛿)-differentially private algorithm that takes 𝑋 ∈ ℝ^{𝑛×𝑑} as input and generates a PSD matrix 𝐵 ∈ ℝ^{𝑛×𝑛} such that

‖𝖣(𝐴)^{−1}𝑓(𝐴) − 𝖣(𝐵)^{−1}𝑓(𝐵)‖ ≤ 𝑔(𝜖, 𝛿)

where 𝑓(𝐴)_{𝑖,𝑗} := 𝑓(𝐴_{𝑖,𝑗}), 𝖣(𝐴) := diag(𝑓(𝐴)𝟏_𝑛), and 𝑔 is some function.
Definition 1.4 is very general and covers the standard self-attention computation. In particular, when ℳ(𝑋) := 𝑋𝑋^⊤ and 𝑓(𝑧) := exp(𝑧), the above definition recovers the standard self-attention in LLMs.
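This recovery claim can be checked numerically: with ℳ(𝑋) = 𝑋𝑋^⊤ and 𝑓 = exp, the quantity 𝖣(𝐴)^{−1}𝑓(𝐴) is exactly the row-wise softmax of the logit matrix 𝑋𝑋^⊤. A small sanity check (illustrative only):

```python
import numpy as np

# With M(X) = X X^T and f(z) = exp(z), the general form D(A)^{-1} f(A) of
# Definition 1.4 equals row-wise softmax of X X^T, i.e., self-attention with Q = K = X.
def general_attention(A, f):
    fA = f(A)                                    # apply f entry-wise
    return fA / fA.sum(axis=1, keepdims=True)    # multiply by D(A)^{-1}

def softmax_rows(M):
    E = np.exp(M - M.max(axis=1, keepdims=True)) # numerically stabilized softmax
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 8))
A = X @ X.T
att = general_attention(A, np.exp)
```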
1.2 Our Result
Our results rely on good properties of the input data, which are defined as follows. They play a crucial role in the analysis of sensitivity with respect to ℳ(𝑋) := 𝑋𝑋^⊤ (see Section 5.3).
Definition 1.5 (Dataset).
Fix 𝜂 > 0, 𝛼 > 0. We say our dataset 𝑋 ∈ ℝ^{𝑛×𝑑} is (𝛼, 𝜂)-good if 𝑋𝑋^⊤ ⪰ 𝜂 ⋅ 𝐼_𝑛 and for all 𝑖 ∈ [𝑑], ‖𝑋_{∗,𝑖}‖_2 ≤ 𝛼.
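Definition 1.5 translates into a direct numerical check; the helper below (`is_good`, our own illustrative name) tests the two conditions via the smallest eigenvalue of 𝑋𝑋^⊤ and the column norms:

```python
import numpy as np

# A direct check of Definition 1.5 (illustrative helper, not from the paper):
# X is (alpha, eta)-good if X X^T >= eta * I in the Loewner order and every
# column X_{*,i} has Euclidean norm at most alpha.
def is_good(X, alpha, eta):
    lambda_min = np.linalg.eigvalsh(X @ X.T).min()  # XX^T >= eta*I  <=>  lambda_min >= eta
    col_norms = np.linalg.norm(X, axis=0)           # ||X_{*,i}||_2 for all i in [d]
    return bool(lambda_min >= eta and np.all(col_norms <= alpha))

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 50))  # n = 4 samples, d = 50 features (d >> n)
```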
In addition, we will introduce our proposed definition of neighboring data as follows.
Definition 1.6 (Neighboring data).
Let 𝑋, 𝑋~ ∈ ℝ^{𝑛×𝑑} denote two datasets from distribution 𝒟. We say that 𝑋 and 𝑋~ are 𝛽-close if there exists exactly one 𝑖 ∈ [𝑑] such that ‖𝑋_{∗,𝑖} − 𝑋~_{∗,𝑖}‖_2 ≤ 𝛽, and for all 𝑗 ∈ [𝑑] ∖ {𝑖}, 𝑋_{∗,𝑗} = 𝑋~_{∗,𝑗}. In this work, we consider two datasets to be neighboring if they are 𝛽-close.
The above definition facilitates a more straightforward analysis of the sensitivity of attention matrix computations. By regulating 𝛽 -closeness, we can establish bounds on how the attention matrix responds to minor variations in input data, which is essential for ensuring differential privacy guarantees. Furthermore, in practical scenarios, assessing dataset similarity based on feature-wise differences rather than individual data points can be more practical and aligns better with real-world considerations.
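A 𝛽-close neighbor in the sense of Definition 1.6 can be constructed by perturbing exactly one column; the sketch below (our illustration, not from the paper) builds one and leaves every other column untouched:

```python
import numpy as np

# Construct a beta-close neighbor per Definition 1.6: perturb exactly one
# column i of X by a vector of norm beta, keeping all other columns identical.
def make_neighbor(X, i, beta, rng):
    X_tilde = X.copy()
    delta = rng.standard_normal(X.shape[0])
    delta *= beta / np.linalg.norm(delta)  # ||X_{*,i} - X~_{*,i}||_2 = beta exactly
    X_tilde[:, i] += delta
    return X_tilde

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 20))
X_tilde = make_neighbor(X, i=7, beta=0.05, rng=rng)
```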
Based on the aforementioned definitions, our work demonstrates the sensitivity property of ℳ(𝑋) := 𝑋𝑋^⊤ (attention matrix computation). Furthermore, we present a novel and efficient algorithm for approximating the attention matrix, which combines error analysis on matrix perturbation with provable privacy guarantees. We state our result as follows:
Theorem 1.7 (Main result, informal of Theorem 4.1).
Let 𝑑 ≥ 𝑛. Let 𝑋 ∈ ℝ^{𝑛×𝑑}. Let 𝑓(𝑧) ∈ {exp(𝑧), cosh(𝑧)}. Let 𝑟, 𝜖, 𝛿 ∈ (0, 0.1). Let Δ := 0.1 ⋅ min{𝜖/√(𝑘 log(1/𝛿)), 𝜖/log(1/𝛿)}. Let 𝐴 := ℳ(𝑋) := 𝑋𝑋^⊤ and ‖𝐴‖_∞ ≤ 𝑟. Let 𝑓(𝐴) and 𝖣(𝐴) be defined as in Definition 1.4. For all 𝑋 sampled from 𝒟, 𝑋 is (𝛼, 𝜂)-good (see Definition 1.5). Let 𝜂 < 𝑟. Let 𝛽 be the parameter for the neighboring dataset. Let 2𝛼𝛽√𝑛/𝜂 < Δ. Suppose ‖ℳ(𝑋)^{1/2} ℳ(𝑋~)^{−1} ℳ(𝑋)^{1/2} − 𝐼‖_F ≤ Δ for all neighboring 𝑋, 𝑋~ ∈ ℝ^{𝑛×𝑑} (see Definition 1.6). Let 𝜌 := √((𝑛^2 + log(1/𝛾))/𝑘) + (𝑛^2 + log(1/𝛾))/𝑘 < 0.1𝜖.

Then, there is an algorithm (Algorithm 1) that takes 𝑋 as input and produces a matrix 𝐵 ∈ ℝ^{𝑛×𝑛} and the general attention 𝖣(𝐵)^{−1}𝑓(𝐵) as output such that

‖𝖣(𝐴)^{−1}𝑓(𝐴) − 𝖣(𝐵)^{−1}𝑓(𝐵)‖_∞ ≤ 4 ⋅ (1 + 𝜖 + 2𝑟) ⋅ 𝑟,

which holds with probability 1 − 𝛾. With respect to 𝑋, the algorithm is (𝜖, 𝛿)-differentially private.
Roadmap.
Our paper is organized as follows. We discuss related work in Section 2. We provide our preliminaries in Section 3. Our main result is presented in Section 4. We provide an overview of our techniques in Section 5. In Section 6, we conclude the paper.
2 RELATED WORK

2.1 Attention Mechanism
Attention mechanisms are foundational in modern neural networks, gaining widespread adoption since their introduction in [80]. They are crucial in decoder-only LLMs [66] and the Vision Transformer (ViT) [15], driving significant progress in language models and computer vision [61, 86, 87, 82, 72, 50]. Additionally, attention mechanisms have been applied to multi-modal models [5, 88, 96, 52, 85], mathematical reasoning [41, 90], diffusion models [59, 53, 35], differential privacy [6, 71, 51, 14], Hopfield models [36, 84, 33, 89, 83, 31, 32], and various other techniques [47, 54, 60, 48, 43, 44, 13, 68, 45, 91].
2.2 Differential Privacy and Deep Learning
Differential privacy (DP) is a rigorous and quantifiable notion of privacy that ensures individual data entries cannot be distinguished within a dataset. It has become the go-to standard for understanding information leakage [22]. This widely recognized framework is increasingly being adopted in industry and has many real-world applications [92, 74, 69, 25, 17, 65]. There has been extensive research on applying differential privacy in deep learning [1, 39, 27, 51, 73, 49, 42]. Recent works [94, 57] have applied DP-SGD [1] to large language models (LLMs) for private fine-tuning. Our research, however, is orthogonal to these works as we focus on attention computation and consider general differential privacy mechanisms, not just DP-SGD.
2.3 Softmax Computation and Regression
With the rapid development of large language models and attention schemes, many works have focused on softmax computation and regression in this field. [3, 4] show that a faster attention algorithm can be designed by leveraging the matrix implicitly. [11] propose a more efficient algorithm for computing dynamic attention by employing the method of lazy update. To solve exp, cosh, and sinh regressions with input sparsity, [56] use an approximate Newton method that operates in near-linear time. In their work on softmax regression, [19] conduct a further analysis of attention schemes based on prior research in regression. In contrast, [28] focus on the convergence analysis of overparameterized two-layer networks with exponential activation functions. To compute the attention matrix more efficiently for large feature dimensions, [21] propose a randomized algorithm.
3 PRELIMINARY

Section 3.1 presents the notations that are used throughout our paper. In Section 3.2, we provide the existing tools that support our proofs.
3.1 Notations
𝔼[𝑋] represents the expected value (or mean) of a random variable 𝑋. We use 𝜒_𝑑^2 to denote a chi-squared random variable with 𝑑 degrees of freedom. If 𝑀 and 𝑁 are symmetric matrices, we define 𝑀 ⪰ 𝑁 to mean that for all vectors 𝑥, the inequality 𝑥^⊤𝑀𝑥 ≥ 𝑥^⊤𝑁𝑥 holds. If 𝑀 is a symmetric matrix of dimension 𝑛 × 𝑛, we define 𝑀 to be positive semidefinite (𝑀 ⪰ 0) if the inequality 𝑥^⊤𝑀𝑥 ≥ 0 holds for all vectors 𝑥 ∈ ℝ^𝑛. We use the notation 𝟎_𝑛 to denote an 𝑛-dimensional vector whose entries are all zero, and 𝟏_𝑛 to denote an 𝑛-dimensional vector whose entries are all one. The symbol 𝐼_𝑛 represents the 𝑛 × 𝑛 identity matrix, which is a square matrix with ones on the main diagonal and zeros elsewhere. Let 𝑥 be an arbitrary vector in ℝ^𝑛. We define exp(𝑥) ∈ ℝ^𝑛 as a vector whose 𝑖-th entry exp(𝑥)_𝑖 is equal to exp(𝑥_𝑖), where exp(⋅) denotes the exponential function. We use ⟨𝑥, 𝑦⟩ to denote ∑_{𝑖=1}^{𝑛} 𝑥_𝑖 𝑦_𝑖. For any matrix 𝐴, we use ‖𝐴‖ to denote the spectral norm of 𝐴, i.e., ‖𝐴‖ := max_{‖𝑥‖_2 = 1} ‖𝐴𝑥‖_2, ‖𝐴‖_F to denote its Frobenius norm, and ‖𝐴‖_∞ to denote the infinity norm. 𝐴_{𝑖,𝑗} represents the element in the 𝑖-th row and 𝑗-th column of matrix 𝐴.
3.2 Previous Tools
This section introduces several differential privacy tools. These tools are essential for demonstrating the differential privacy properties of our algorithm.
Theorem 3.1 (Empirical covariance estimator for Gaussian [78]).
Let Σ ∈ ℝ^{𝑑×𝑑} be PSD, let 𝑋_1, ⋯, 𝑋_𝑛 ∼ 𝒩(0, Σ) be i.i.d., and let Σ~ := (1/𝑛) ∑_{𝑖=1}^{𝑛} 𝑋_𝑖 𝑋_𝑖^⊤. Then with probability 1 − 𝛾, it holds that ‖Σ^{−1/2} Σ~ Σ^{−1/2} − 𝐼‖_F ≤ 𝜌 for some 𝜌 = 𝑂(√((𝑑^2 + log(1/𝛾))/𝑛) + (𝑑^2 + log(1/𝛾))/𝑛).
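The rate in Theorem 3.1 can be observed empirically; the sketch below (with arbitrary illustrative parameters) measures the relative Frobenius error of the empirical covariance at two sample sizes:

```python
import numpy as np

# Empirical illustration of Theorem 3.1 (a sketch with arbitrary parameters):
# the relative error ||Sigma^{-1/2} Sigma~ Sigma^{-1/2} - I||_F of the empirical
# covariance of n i.i.d. N(0, Sigma) samples shrinks as n grows.
def rel_cov_error(Sigma, n, rng):
    d = Sigma.shape[0]
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    Sigma_hat = X.T @ X / n
    w, U = np.linalg.eigh(Sigma)               # Sigma^{-1/2} via eigendecomposition
    root_inv = U @ np.diag(w ** -0.5) @ U.T
    return np.linalg.norm(root_inv @ Sigma_hat @ root_inv - np.eye(d))  # Frobenius

rng = np.random.default_rng(5)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
err_small = rel_cov_error(Sigma, n=50, rng=rng)
err_large = rel_cov_error(Sigma, n=50_000, rng=rng)
```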
Theorem 3.2 (Lemma 1.5 in [77], Section 1.1 of [10]).
For a (randomized) mechanism ℳ and datasets 𝑥, 𝑦, define the function 𝑓_{𝑥𝑦}(𝑧) := log(Pr[ℳ(𝑥) = 𝑧] / Pr[ℳ(𝑦) = 𝑧]). If Pr[𝑓_{𝑥𝑦}(ℳ(𝑥)) ≥ 𝜖] ≤ 𝛿 for all adjacent datasets 𝑥, 𝑦, then ℳ is (𝜖, 𝛿)-DP.
Lemma 3.3 (Sub-exponential tail bound, Proposition 2.9 in [81]).
Suppose that 𝑋 is sub-exponential with parameters (𝜈, 𝛼). Then Pr[𝑋 − 𝜇 ≥ 𝑡] ≤ max{exp(−𝑡^2/(2𝜈^2)), exp(−𝑡/(2𝛼))}.
Lemma 3.4 ( 𝜒 1 2 sub-exponential parameters, Example 2.11 in [81]).
A chi-squared random variable with 1 degree of freedom (𝜒_1^2) is sub-exponential with parameters (𝜈, 𝛼) = (2, 4).
Lemma 3.5 (Sub-exponential parameters of independent sum, Chapter 2 of [81]).
Consider an independent sequence 𝑋_1, ⋯, 𝑋_𝑘 of random variables, such that 𝑋_𝑖 is sub-exponential with parameters (𝜈_𝑖, 𝛼_𝑖). Then the variable ∑_{𝑖=1}^{𝑘} 𝑋_𝑖 is sub-exponential with parameters (𝜈_∗, 𝛼_∗), where 𝛼_∗ := max_{𝑖∈[𝑘]} 𝛼_𝑖 and 𝜈_∗ := (∑_{𝑖=1}^{𝑘} 𝜈_𝑖^2)^{1/2}.
4 MAIN RESULT

Algorithm 1 Differential privacy algorithm
1: procedure DPAttention(𝑋)
2:   𝐴 ← 𝑋𝑋^⊤
3:   𝐵 ← DPCovariance(𝐴, 𝑘) ▷ See Algorithm 2.
4:   Compute 𝑓(𝐵)
5:   Compute 𝖣(𝐵)^{−1}𝑓(𝐵)
6: end procedure
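A minimal end-to-end sketch of Algorithm 1 follows; the calibration of 𝑘 to a target (𝜖, 𝛿), which the paper's analysis supplies, is deliberately omitted here, so this is an illustration of the data flow rather than a ready-to-use private implementation:

```python
import numpy as np

def dp_covariance(Sigma, k, rng):
    """Algorithm 2 sketch: average of k outer products of N(0, Sigma) samples."""
    g = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size=k)
    return g.T @ g / k

def dp_attention(X, k, f=np.exp, rng=None):
    """Algorithm 1 sketch: A <- X X^T, B <- DPCovariance(A, k), output D(B)^{-1} f(B)."""
    if rng is None:
        rng = np.random.default_rng()
    A = X @ X.T
    B = dp_covariance(A, k, rng)                 # private surrogate for A
    fB = f(B)                                    # apply f entry-wise
    return fB / fB.sum(axis=1, keepdims=True)    # D(B)^{-1} f(B)

rng = np.random.default_rng(6)
X = 0.1 * rng.standard_normal((4, 30))           # n = 4, d = 30 (d >> n regime)
att = dp_attention(X, k=10_000, rng=rng)
```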
In this section, we provide a theoretical analysis of Algorithm 1, our primary algorithm for differentially private general attention computation. Our analysis leverages the tools established in Section 5.1, Section 5.2, and Section 5.3. From our previous proofs, it is evident that our algorithm possesses a rigorous differential privacy property, offering new insights into both differential privacy and attention mechanisms.
Theorem 4.1 (Main result).
If all of the following requirements are met:

• Let 𝑑 ≥ 𝑛, 𝑋 ∈ ℝ^{𝑛×𝑑}, and 𝑓(𝑧) ∈ {exp(𝑧), cosh(𝑧)}. We define 𝑟 ∈ (0, 0.1) as the bounded ratio and 𝜖, 𝛿 ∈ (0, 0.1) as the parameters of DP.
• Let Δ := 0.1 ⋅ min{𝜖/√(𝑘 log(1/𝛿)), 𝜖/log(1/𝛿)}.
• Let 𝐴 := ℳ(𝑋) := 𝑋𝑋^⊤ and ‖𝐴‖_∞ ≤ 𝑟.
• For all 𝑋 sampled from 𝒟, 𝑋 is (𝛼, 𝜂)-good (see Definition 1.5). Let 𝜂 < 𝑟. Let 𝛽 be the parameter for the neighboring dataset. Let 2𝛼𝛽√𝑛/𝜂 < Δ.
• Let Δ denote the sensitivity parameter, i.e., ℳ satisfies the sensitivity bound ‖ℳ(𝑋)^{1/2} ℳ(𝑋~)^{−1} ℳ(𝑋)^{1/2} − 𝐼‖_F ≤ Δ for any neighboring datasets 𝑋, 𝑋~ ∈ ℝ^{𝑛×𝑑} (see Definition 1.6).
• Let 𝜌 := √((𝑛^2 + log(1/𝛾))/𝑘) + (𝑛^2 + log(1/𝛾))/𝑘 and 𝜌 < 0.1𝜖.

Then, there is an algorithm (Algorithm 1) that takes 𝑋 as input and produces a matrix 𝐵 ∈ ℝ^{𝑛×𝑛} and the attention 𝖣(𝐵)^{−1}𝑓(𝐵) as output such that

• Part 1. ‖𝖣(𝐴)^{−1}𝑓(𝐴) − 𝖣(𝐵)^{−1}𝑓(𝐵)‖_∞ ≤ 4 ⋅ (1 + 𝜖 + 2𝑟) ⋅ 𝑟.
• Part 2. With respect to 𝑋, the algorithm is (𝜖, 𝛿)-differentially private.
• Part 3. It holds with probability 1 − 𝛾.
Proof of Theorem 4.1.
The proof can be divided into two parts as follows.
Proof of Part 1 and Part 3.
Our proof focuses on the function ℳ(𝑋) := 𝑋𝑋^⊤ first. Let 𝛼 and 𝜂 be as denoted in Definition 1.5 and 𝛽 as denoted in Definition 1.6. Based on the above assumptions on the dataset, 𝑋 is (𝛼, 𝜂)-good (see Definition 1.5) while 𝑋 and 𝑋~ are 𝛽-close (see Definition 1.6).
According to Part 2 of Lemma 5.11, we can conclude the following property of ℳ(𝑋) := 𝑋𝑋^⊤:

‖(𝑋𝑋^⊤)^{−1/2} 𝑋~𝑋~^⊤ (𝑋𝑋^⊤)^{−1/2} − 𝐼‖_F ≤ 2√𝑛 ⋅ 𝛼𝛽/𝜂.
Let ℳ be the function denoted in the theorem statement, and let 𝜌 be defined as follows:

𝜌 := 𝑂(√((𝑛^2 + log(1/𝛾))/𝑘) + (𝑛^2 + log(1/𝛾))/𝑘).
Now, we will apply the conclusion drawn in Section 5.2. In order to satisfy the requirement specified in Requirement 4 of Theorem 5.9, we need ℳ ( 𝑋 ) to meet the following assumption:
‖ℳ(𝑋)^{1/2} ℳ(𝑋~)^{−1} ℳ(𝑋)^{1/2} − 𝐼‖_F ≤ Δ.
Now, if we choose 2𝛼𝛽√𝑛/𝜂 < Δ, our ℳ(𝑋) is guaranteed to satisfy the assumption specified in Requirement 4 of Theorem 5.9. According to Part 3 of Theorem 5.9, there exists an algorithm (Algorithm 2) which produces a matrix 𝐵 ∈ ℝ^{𝑛×𝑛} such that, with probability at least 1 − 𝛾,

(1 − 𝜌)𝐴 ⪯ 𝐵 ⪯ (1 + 𝜌)𝐴.  (1)

By choosing 𝜌 ≤ 0.1𝜖, we have

(1 − 𝜖)𝐵 ⪯ 𝐴 ⪯ (1 + 𝜖)𝐵.  (2)
Now according to Theorem 5.3 and Eq. (2), we have
‖𝖣(𝐴)^{−1}𝑓(𝐴) − 𝖣(𝐵)^{−1}𝑓(𝐵)‖_∞ ≤ 4 ⋅ (1 + 𝜖 + 2𝑟) ⋅ 𝑟.
Now, the proofs of Part 1 and Part 3 are completed.
Proof of Part 2.
It simply follows from Part 1 of Theorem 5.9. ∎
The main result implies that we can design an algorithm that computes a private approximation of the attention mechanism used in neural networks for functions like 𝑓(𝑧) = exp(𝑧) or 𝑓(𝑧) = cosh(𝑧). Specifically, under certain conditions on the input matrix 𝑋 and parameters 𝜖, 𝛿, and with a small bounded ratio 𝑟, the algorithm produces a matrix 𝐵 such that the normalized attention matrices derived from 𝐴 = 𝑋𝑋^⊤ and 𝐵 are close in the infinity norm. This closeness is quantified by a bound proportional to 𝑟, ensuring that the utility of the attention mechanism is preserved. Additionally, the algorithm is (𝜖, 𝛿)-differentially private with respect to 𝑋, meaning it protects individual data entries from being inferred. The privacy and utility guarantees hold with high probability 1 − 𝛾, demonstrating that it is possible to implement attention mechanisms in a way that maintains both model performance and data privacy.
5 TECHNIQUE OVERVIEW

The objective of our research is to develop a differentially private algorithm that addresses the challenges of computing attention on large datasets. Specifically, we focus on scenarios where the size of the data matrix 𝑋 is extremely large, with the number of features 𝑑 significantly exceeding the number of samples 𝑛 (i.e., 𝑑 ≫ 𝑛). In these cases, the attention matrix 𝐴 is obtained as the output of the function ℳ(𝑋) := 𝑋𝑋^⊤, and our goal is to ensure that the computation of 𝐴 is performed in a differentially private [20, 18] manner.
Perturb PSD Matrix.
We define the attention normalization 𝖣(𝐴) as in Definition 5.2. By employing a more general version of the perturbation analysis presented in [21], we select 𝑓 as specified in Definition 5.1. To complete the error analysis of attention computation, we utilize the perturbation analysis of the diagonal normalization matrix and the PSD matrix presented in Appendix B, under the assumption that the relative error between the input matrix ℳ(𝑋) := 𝐴 and the privacy-preserving output matrix 𝐵 is at most 𝜖 ∈ (0, 0.1), i.e., (1 − 𝜖)𝐵 ⪯ 𝐴 ⪯ (1 + 𝜖)𝐵. To establish an upper bound for ‖𝖣(𝐴)^{−1}𝑓(𝐴) − 𝖣(𝐵)^{−1}𝑓(𝐵)‖_∞, we first derive the following bounds:
• Part 1. |𝖣(𝐴)_{𝑖,𝑖} − 𝖣(𝐵)_{𝑖,𝑖}| ≤ 𝑐_1 ⋅ 𝑟 ⋅ min{𝖣(𝐴)_{𝑖,𝑖}, 𝖣(𝐵)_{𝑖,𝑖}}, ∀𝑖 ∈ [𝑛];
• Part 2. |𝑓(𝐴_{𝑖,𝑗}) − 𝑓(𝐵_{𝑖,𝑗})| ≤ 𝑐_2 ⋅ 𝑟 ⋅ min{𝑓(𝐴_{𝑖,𝑗}), 𝑓(𝐵_{𝑖,𝑗})}, ∀(𝑖, 𝑗) ∈ [𝑛] × [𝑛].
With the error of the attention computation under control as above, we obtain:

‖𝖣(𝐴)^{−1}𝑓(𝐴) − 𝖣(𝐵)^{−1}𝑓(𝐵)‖_∞ ≤ 4 ⋅ (1 + 𝜖 + 2𝑟) ⋅ 𝑟.
Sensitivity for PSD Matrix.
Our work relies on the basic assumptions that 𝑋 ∈ ℝ^{𝑛×𝑑} is an (𝛼, 𝜂)-good dataset (see Definition 1.5) and that 𝑋 and 𝑋~ are 𝛽-close to each other (see Definition 1.6). We choose ℳ(𝑋) := 𝑋𝑋^⊤. We now demonstrate the property of our function ℳ(𝑋) := 𝑋𝑋^⊤ based on the given assumptions. Since 𝑋 and 𝑋~ are neighboring datasets, we have the following:

‖ℳ(𝑋)^{1/2} ℳ(𝑋~)^{−1} ℳ(𝑋)^{1/2} − 𝐼‖_F ≤ 2√𝑛 ⋅ 𝛼𝛽/𝜂.
The proof details can be found in Section D. Let Δ be as defined in Definition 5.5. By choosing 2𝛼𝛽√𝑛/𝜂 < Δ, we have

‖(𝑋𝑋^⊤)^{1/2} (𝑋~𝑋~^⊤)^{−1} (𝑋𝑋^⊤)^{1/2} − 𝐼‖_F ≤ Δ,  (3)

where 𝑋𝑋^⊤ := ℳ(𝑋) and 𝑋~𝑋~^⊤ := ℳ(𝑋~). The assumption specified in Requirement 5 of Theorem C.7 is then satisfied. Next, we introduce our main algorithm using Eq. (3).
Differential Privacy Algorithm.
Next, we give the differential privacy algorithm described in Theorem 1.7, and we demonstrate that our algorithm (Algorithm 1) outputs a matrix satisfying Part 1 of our formal main result (see Theorem 4.1).
To begin with, we demonstrate that there exists an algorithm capable of taking input 𝐴 and producing a matrix 𝐵 as output such that the difference between 𝐴 and 𝐵 is small enough; this can be seen as a small error resulting from perturbing 𝐴 by

𝜌 := 𝑂(√((𝑛^2 + log(1/𝛾))/𝑘) + (𝑛^2 + log(1/𝛾))/𝑘).

In other words, we have (1 − 𝜌)𝐴 ⪯ 𝐵 ⪯ (1 + 𝜌)𝐴, which holds with probability 1 − 𝛾. Note that 𝑘 and 𝛾 can be chosen according to our requirements, so we can ensure that a satisfactory 𝜌 is obtained. By choosing a small enough 𝜌 ≤ 0.1𝜖 and using the conclusions on perturbed PSD matrices, the algorithm can certainly output a satisfactory 𝐵, which guarantees that our attention computation is private [20, 18].
5.1 Error Control from Logit Matrix to Attention Matrix

In this section, we analyze the perturbations in the attention computation, which are used to control the error. First, we give the following definitions.
Definition 5.1.
Let 𝑓 ( 𝑧 ) denote one of the following functions exp ( 𝑧 ) and cosh ( 𝑧 ) .
The motivation for considering exp(𝑧) comes from recent LLMs. The motivation for considering cosh(𝑧) comes from recent progress in the potential function design of convex optimization [12, 55, 70, 9, 37, 29, 60].
Definition 5.2.
Given 𝐴 ∈ ℝ^{𝑛×𝑛} and 𝑓 as in Definition 5.1, let us define 𝖣(𝐴) := diag(𝑓(𝐴)𝟏_𝑛), where we apply 𝑓 to the matrix entry-wise.
We state a major tool proved in this paper for controlling the error propagation; it summarizes the effectiveness of our error control mechanisms in achieving differential privacy for the computation of the attention matrix.
Theorem 5.3.
Let 𝜖 ∈ (0, 0.1) and 𝑟 ∈ (0, 0.1). Let ‖𝐴‖_∞ ≤ 𝑟 and (1 − 𝜖)𝐵 ⪯ 𝐴 ⪯ (1 + 𝜖)𝐵. We define 𝖣 as in Definition 5.2 and 𝑓 as in Definition 5.1.

Then, we have

‖𝖣(𝐴)^{−1}𝑓(𝐴) − 𝖣(𝐵)^{−1}𝑓(𝐵)‖_∞ ≤ 4 ⋅ (1 + 𝜖 + 2𝑟) ⋅ 𝑟.
The prior work [21] only works for the exp(⋅) function, and its final guarantee is 𝑂(𝑟). We generalize it to the cosh(⋅) function as well, and our error bound is much tighter. The proof of the theorem above is in Section B.4.
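The bound of Theorem 5.3 can be spot-checked numerically; the sketch below constructs a PSD pair (𝐴, 𝐵) satisfying the hypotheses (with arbitrarily chosen 𝜖, 𝑟, and a scalar perturbation) and compares both sides:

```python
import numpy as np

# Numerical spot-check of Theorem 5.3 (illustrative, not a proof): build a PSD
# matrix B, set A = (1 + delta) B with 0 <= delta <= eps, so that
# (1 - eps) B <= A <= (1 + eps) B in the Loewner order, enforce ||A||_inf = r,
# and compare the normalized-attention difference with 4 (1 + eps + 2r) r.
rng = np.random.default_rng(7)
eps, r, delta = 0.05, 0.05, 0.02          # arbitrary choices with delta <= eps

Y = rng.standard_normal((5, 5))
B = Y @ Y.T                               # a PSD matrix
B *= r / ((1 + delta) * np.abs(B).max())  # scale so that ||A||_inf = r below
A = (1 + delta) * B

def normalized_attention(M, f=np.exp):
    fM = f(M)
    return fM / fM.sum(axis=1, keepdims=True)   # D(M)^{-1} f(M)

lhs = np.abs(normalized_attention(A) - normalized_attention(B)).max()
rhs = 4 * (1 + eps + 2 * r) * r
```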
5.2 Analysis of Gaussian Sampling Mechanism
This section introduces a crucial component of our main differential privacy algorithm (Algorithm 1): the differentially private covariance releasing mechanism, detailed in Algorithm 2. The differential privacy (DP) of Algorithm 2 ensures the DP of the main algorithm (Algorithm 1), so we demonstrate its DP here. Our proof is based on the assumption that the sensitivity is bounded (Requirement 4 in Theorem 5.9); we defer the validation of this assumption to Section 5.3. For clarity, the following proof assumes that 𝑀 ≤ Δ (see Definition 5.4 and Definition 5.5), which will be proven in Section 5.3. Let 𝒴 and 𝒴′ be neighboring datasets, as denoted in Definition 1.6.
To facilitate the explanation in the following proof, we will define 𝑀 to better illustrate the properties of ℳ .
Definition 5.4.
Let ℳ : (ℝ^𝑛)^𝑑 → ℝ^{𝑛×𝑛} be a (randomized) algorithm that, given a dataset of 𝑑 points in ℝ^𝑛, outputs a PSD matrix. Then, we define

𝑀 := sup_{𝒴, 𝒴′} ‖ℳ(𝒴)^{1/2} ℳ(𝒴′)^{−1} ℳ(𝒴)^{1/2} − 𝐼‖_F.
Here sup is over all neighboring datasets 𝒴 and 𝒴 ′ (see Definition 1.6).
We define the upper bound of 𝑀 as Δ as follows.
Definition 5.5.
Let 𝑀 be defined in Definition 5.4. We define Δ := min{𝜖/(8√(𝑘 log(1/𝛿))), 𝜖/(8 log(1/𝛿))} such that 𝑀 ≤ Δ.
We define the notation for the Gaussian distribution below.
Definition 5.6 (Gaussian Distribution).
We denote the 𝒩(0, Σ) density function as follows:

𝑓_Σ(𝑥) := (2𝜋)^{−𝑛/2} det(Σ)^{−1/2} exp(−0.5 ⋅ 𝑥^⊤ Σ^{−1} 𝑥).
We state our differentially private covariance releasing algorithm. Assuming the 𝑔_𝑖 are i.i.d. samples from 𝒩(0, Σ) for 𝑖 ∈ [𝑘], we use 𝑔_1, 𝑔_2, ⋯, 𝑔_𝑘 to compute the covariance estimate Σ̂ in Algorithm 2. We analyze Σ̂ := (1/𝑘) ∑_{𝑖=1}^{𝑘} 𝑔_𝑖 𝑔_𝑖^⊤ using the notation from Appendix C, which leads to the privacy guarantee for Algorithm 2.
Algorithm 2 Differentially private covariance releasing
1: procedure DPCovariance(Σ ∈ ℝ^{𝑛×𝑛}, 𝑘 ∈ ℕ) ▷ PSD matrix Σ and parameter 𝑘
2:   Obtain vectors 𝑔_1, 𝑔_2, ⋯, 𝑔_𝑘 by sampling 𝑔_𝑖 ∼ 𝒩(0, Σ), independently for each 𝑖 ∈ [𝑘]
3:   Compute Σ̂ := (1/𝑘) ∑_{𝑖=1}^{𝑘} 𝑔_𝑖 𝑔_𝑖^⊤ ▷ This is the covariance estimate.
4:   return Σ̂
5: end procedure
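Algorithm 2 is a few lines of NumPy; the sketch below also measures the empirical value of ‖Σ^{−1/2} Σ̂ Σ^{−1/2} − 𝐼‖_F from Part 2 of Theorem 5.9 (𝑘 and the input Σ are arbitrary illustrative choices, not calibrated to any particular (𝜖, 𝛿)):

```python
import numpy as np

# Sketch of Algorithm 2 plus an empirical look at Part 2 of Theorem 5.9.
def dp_covariance(Sigma, k, rng):
    g = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size=k)  # g_i ~ N(0, Sigma)
    return g.T @ g / k                        # Sigma_hat = (1/k) sum_i g_i g_i^T

rng = np.random.default_rng(8)
n, k = 3, 200_000
Y = rng.standard_normal((n, n))
Sigma = Y @ Y.T + np.eye(n)                   # a strictly PD input matrix

Sigma_hat = dp_covariance(Sigma, k, rng)

w, U = np.linalg.eigh(Sigma)
root_inv = U @ np.diag(w ** -0.5) @ U.T       # Sigma^{-1/2}
rho_emp = np.linalg.norm(root_inv @ Sigma_hat @ root_inv - np.eye(n))  # Frobenius
```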
The soundness of Algorithm 2 can be shown using Theorem 5.9. We now give the definitions of Σ_1, Σ_2, ℎ_{𝑖,𝑗}, and 𝑍, which will be used to prove Theorem 5.9.
Definition 5.7.
Let ℳ be denoted in Definition 5.4 and Σ ( 𝒴 ) := ℳ ( 𝒴 ) . We define Σ 1 := Σ ( 𝒴 ) , Σ 2 := Σ ( 𝒴 ′ ) .
Definition 5.8.
Let 𝑔_1, 𝑔_2, ⋯, 𝑔_𝑘 be the i.i.d. samples from 𝒩(0, Σ_1) drawn in Algorithm 2. Then, we define ℎ_{𝑖,𝑗} := ⟨Σ_1^{−1/2} 𝑔_𝑖, 𝑣_𝑗⟩ and 𝑍 := ∑_{𝑖=1}^{𝑘} log(𝑓_{Σ_1}(𝑔_𝑖)/𝑓_{Σ_2}(𝑔_𝑖)), where Σ_1, Σ_2 are defined by Definition 5.7. Note that the random variables ℎ_{𝑖,𝑗} are i.i.d. copies of 𝒩(0, 1).
We now present our theorem for Algorithm 2. The proof is deferred to Section C.6.
Theorem 5.9 (Informal version of Theorem C.7).
If all of the following requirements are met:

• Requirement 1. Let 𝜖 ∈ (0, 1) and 𝛿 ∈ (0, 1).
• Requirement 2. 𝑘 ∈ ℕ.
• Requirement 3. Let Δ be as denoted in Definition 5.5, with Δ < 1.
• Requirement 4. Let 𝑀, ℳ be as denoted in Definition 5.4, with 𝑀 ≤ Δ.
• Requirement 5. An input Σ := ℳ(𝒴).
• Requirement 6. 𝜌 = 𝑂(√((𝑛^2 + log(1/𝛾))/𝑘) + (𝑛^2 + log(1/𝛾))/𝑘).

Then, there is an algorithm (Algorithm 2) such that

• Part 1. Algorithm 2 is (𝜖, 𝛿)-DP (with respect to the original dataset 𝒴).
• Part 2. It outputs Σ̂ ∈ 𝕊_+^𝑛 such that, with probability at least 1 − 𝛾, ‖Σ^{−1/2} Σ̂ Σ^{−1/2} − 𝐼_𝑛‖_F ≤ 𝜌.
• Part 3. (1 − 𝜌)Σ ⪯ Σ̂ ⪯ (1 + 𝜌)Σ.
Using this theorem, we can see our Algorithm 2 is DP, which ensures the DP of Algorithm 1.
5.3 Sensitivity for PSD Matrix

We have demonstrated the existence of a differential privacy algorithm under the assumption on ℳ(𝑋) := 𝑋𝑋^⊤ introduced in Section 5.2. In this section, we show that ℳ(𝑋) := 𝑋𝑋^⊤ satisfies the assumption specified in Requirement 4 of Theorem 5.9. The following lemma is based on the assumptions on the datasets 𝑋, 𝑋~ (see Definition 1.5 and Definition 1.6).
Lemma 5.10 (Informal version of Lemma D.1).
If 𝑋 ∈ ℝ^{𝑛×𝑑} and 𝑋~ ∈ ℝ^{𝑛×𝑑} are neighboring datasets (see Definition 1.5 and Definition 1.6), then (1 − 2𝛼𝛽/𝜂)𝑋𝑋^⊤ ⪯ 𝑋~𝑋~^⊤ ⪯ (1 + 2𝛼𝛽/𝜂)𝑋𝑋^⊤.
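Lemma 5.10 can be spot-checked numerically: the Loewner sandwich is equivalent to the eigenvalues of (𝑋𝑋^⊤)^{−1/2} 𝑋~𝑋~^⊤ (𝑋𝑋^⊤)^{−1/2} − 𝐼 lying in [−2𝛼𝛽/𝜂, 2𝛼𝛽/𝜂]. In the sketch below, 𝛼 and 𝜂 are measured from the data rather than assumed:

```python
import numpy as np

# Numerical spot-check of Lemma 5.10 (illustrative): build a beta-close
# neighbor X~, measure alpha (max column norm over both) and eta (smallest
# eigenvalue of X X^T), and verify the eigenvalue form of the sandwich.
rng = np.random.default_rng(9)
n, d, beta, i = 4, 60, 0.01, 11

X = rng.standard_normal((n, d))
X_tilde = X.copy()
delta = rng.standard_normal(n)
X_tilde[:, i] += beta * delta / np.linalg.norm(delta)  # beta-close neighbor

alpha = max(np.linalg.norm(X, axis=0).max(), np.linalg.norm(X_tilde, axis=0).max())
eta = np.linalg.eigvalsh(X @ X.T).min()
c = 2 * alpha * beta / eta                             # the Lemma 5.10 radius

w, U = np.linalg.eigh(X @ X.T)
root_inv = U @ np.diag(w ** -0.5) @ U.T                # (X X^T)^{-1/2}
M = root_inv @ (X_tilde @ X_tilde.T) @ root_inv - np.eye(n)
spread = np.abs(np.linalg.eigvalsh(M)).max()           # should be <= c
```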
The following lemma can be viewed as a variation of Lemma 5.10, yet it presents a more explicit result that can be directly employed in subsequent analyses.
Lemma 5.11.
Let 𝛼 and 𝜂 be defined in Definition 1.5 and 𝛽 in Definition 1.6. Let 𝑋 and 𝑋~ be neighboring datasets such that (1 − 2𝛼𝛽/𝜂)𝑋𝑋^⊤ ⪯ 𝑋~𝑋~^⊤ ⪯ (1 + 2𝛼𝛽/𝜂)𝑋𝑋^⊤.

Then, we have

• Part 1. ‖(𝑋𝑋^⊤)^{−1/2} 𝑋~𝑋~^⊤ (𝑋𝑋^⊤)^{−1/2} − 𝐼‖ ≤ 2𝛼𝛽/𝜂.
• Part 2. ‖(𝑋𝑋^⊤)^{−1/2} 𝑋~𝑋~^⊤ (𝑋𝑋^⊤)^{−1/2} − 𝐼‖_F ≤ 2√𝑛 ⋅ 𝛼𝛽/𝜂.
The proof of Lemma 5.11 follows directly from Lemma 5.10.
We have now established the sensitivity property of attention computation: the computation of the attention matrix satisfies the assumptions introduced in Section 5.2. Building on this foundation, we can prove our main result in Section 4.
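The spectral-to-Frobenius relationship behind Lemma 5.11 can be checked numerically. The sketch below uses a hypothetical pair X, X̃ (a small random perturbation, not datasets certified to satisfy Definitions 1.5 and 1.6) and confirms that a Part 2-style Frobenius bound follows from a Part 1-style spectral bound with an extra √n factor:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 8  # n <= d so that X X^T is invertible

# Hypothetical neighboring-style datasets: X and a small perturbation of it.
X = rng.standard_normal((n, d))
X_tld = X + 1e-3 * rng.standard_normal((n, d))

XXt = X @ X.T
w, V = np.linalg.eigh(XXt)
XXt_inv_half = V @ np.diag(w ** -0.5) @ V.T

M = XXt_inv_half @ (X_tld @ X_tld.T) @ XXt_inv_half - np.eye(n)
spec = np.linalg.norm(M, 2)    # Part 1-style spectral quantity
fro = np.linalg.norm(M, "fro")  # Part 2-style Frobenius quantity

# The Frobenius bound costs only an extra sqrt(n) over the spectral bound.
print(spec, fro, fro <= np.sqrt(n) * spec + 1e-12)
```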
6 CONCLUSION
In this work, we propose a differentially private algorithm for approximating the attention matrix. Our algorithm builds upon recent advances in fast attention computation and private matrix releasing. To the best of our knowledge, this is the first work to accelerate attention computation in the DP setting. Given the dominant presence of Transformer-based language models, we hope our work can serve as a starting point for fully DP training and inference algorithms for large language models. It also remains an interesting open problem to approximate asymmetric attention computation with differential privacy.
References ACG+ [16] ↑ Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang.Deep learning with differential privacy.In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pages 308–318, 2016. AKT+ [22] ↑ Daniel Alabi, Pravesh K Kothari, Pranay Tankala, Prayaag Venkat, and Fred Zhang.Privately estimating a gaussian: Efficient, robust and optimal.arXiv preprint arXiv:2212.08018, 2022. AS [23] ↑ Josh Alman and Zhao Song.Fast attention requires bounded entries.arXiv preprint arXiv:2302.13214, 2023. [4] ↑ Josh Alman and Zhao Song.The fine-grained complexity of gradient computation for training large language models.arXiv preprint arXiv:2402.04497, 2024. [5] ↑ Josh Alman and Zhao Song.How to capture higher-order correlations? generalizing matrix softmax attention to kronecker computation.In The Twelfth International Conference on Learning Representations, 2024. BEPP [22] ↑ Rouzbeh Behnia, Mohammadreza Reza Ebrahimi, Jason Pacheco, and Balaji Padmanabhan.Ew-tune: A framework for privately fine-tuning large language models with differential privacy.In 2022 IEEE International Conference on Data Mining Workshops (ICDMW), pages 560–566. IEEE, 2022. BKO [18] ↑ Mayank Bansal, Alex Krizhevsky, and Abhijit Ogale.Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst.arXiv preprint arXiv:1812.03079, 2018. BMR+ [20] ↑ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.Language models are few-shot learners.Advances in neural information processing systems, 33:1877–1901, 2020. Bra [20] ↑ Jan van den Brand.A deterministic linear program solver in current matrix multiplication time.In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 259–278. SIAM, 2020. 
BS [16] ↑ Mark Bun and Thomas Steinke.Concentrated differential privacy: Simplifications, extensions, and lower bounds.In Theory of Cryptography: 14th International Conference, TCC 2016-B, Beijing, China, October 31-November 3, 2016, Proceedings, Part I, pages 635–658. Springer, 2016. BSZ [23] ↑ Jan van den Brand, Zhao Song, and Tianyi Zhou.Algorithm and hardness for dynamic attention maintenance in large language models.arXiv preprint arXiv:2304.02207, 2023. CLS [19] ↑ Michael B Cohen, Yin Tat Lee, and Zhao Song.Solving linear programs in the current matrix multiplication time.In STOC, 2019. CLS+ [24] ↑ Bo Chen, Yingyu Liang, Zhizhou Sha, Zhenmei Shi, and Zhao Song.Hsr-enhanced sparse attention acceleration, 2024. CSY [23] ↑ Timothy Chu, Zhao Song, and Chiwun Yang.How to protect copyright data in optimization of large language models?arXiv preprint arXiv:2308.12247, 2023. DBK+ [20] ↑ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.An image is worth 16x16 words: Transformers for image recognition at scale.arXiv preprint arXiv:2010.11929, 2020. DCLT [18] ↑ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.Bert: Pre-training of deep bidirectional transformers for language understanding.arXiv preprint arXiv:1810.04805, 2018. Dif [22] ↑ Differential Privacy Team, Apple.Learning with privacy at scale.https://docs-assets.developer.apple.com/ml-research/papers/learning-with-privacy-at-scale.pdf, 2022.Online; accessed 30 November 2022. DKM+ [06] ↑ Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor.Our data, ourselves: Privacy via distributed noise generation.In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 486–503. Springer, 2006. 
DLS [23] ↑ Yichuan Deng, Zhihang Li, and Zhao Song.Attention scheme inspired softmax regression.arXiv preprint arXiv:2304.10411, 2023. DMNS [06] ↑ Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith.Calibrating noise to sensitivity in private data analysis.In Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings 3, pages 265–284. Springer, 2006. DMS [23] ↑ Yichuan Deng, Sridhar Mahadevan, and Zhao Song.Randomized and deterministic attention sparsification algorithms for over-parameterized feature dimension.arxiv preprint: arxiv 2304.03426, 2023. DR+ [14] ↑ Cynthia Dwork, Aaron Roth, et al.The algorithmic foundations of differential privacy.Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014. EMM+ [23] ↑ Alessandro Epasto, Jieming Mao, Andres Munoz Medina, Vahab Mirrokni, Sergei Vassilvitskii, and Peilin Zhong.Differentially private continual releases of streaming frequency moment estimations.arXiv preprint arXiv:2301.05605, 2023. ERLD [17] ↑ Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou.Hotflip: White-box adversarial examples for text classification.arXiv preprint arXiv:1712.06751, 2017. Fac [22] ↑ Facebook.Protecting privacy in facebook mobility data during the covid-19 response, 2022. FJR [15] ↑ Matt Fredrikson, Somesh Jha, and Thomas Ristenpart.Model inversion attacks that exploit confidence information and basic countermeasures.In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pages 1322–1333, 2015. GGK+ [21] ↑ Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, and Chiyuan Zhang.Deep learning with label differential privacy.Advances in neural information processing systems, 34:27131–27145, 2021. GMS [23] ↑ Yeqi Gao, Sridhar Mahadevan, and Zhao Song.An over-parameterized exponential regression.arXiv preprint arXiv:2303.16504, 2023. 
GS [22] ↑ Yuzhou Gu and Zhao Song.A faster small treewidth sdp solver.arXiv preprint arXiv:2211.06033, 2022. GSY [23] ↑ Yeqi Gao, Zhao Song, and Junze Yin.An iterative algorithm for rescaled hyperbolic functions regression.arXiv preprint arXiv:2305.00660, 2023. HCL+ [24] ↑ Jerry Yao-Chieh Hu, Pei-Hsuan Chang, Haozheng Luo, Hong-Yu Chen, Weijian Li, Wei-Po Wang, and Han Liu.Outlier-efficient hopfield layers for large transformer-based models.In Forty-first International Conference on Machine Learning (ICML), 2024. HCW+ [24] ↑ Jerry Yao-Chieh Hu, Bo-Yu Chen, Dennis Wu, Feng Ruan, and Han Liu.Nonparametric modern hopfield models.arXiv preprint arXiv:2404.03900, 2024. HLSL [24] ↑ Jerry Yao-Chieh Hu, Thomas Lin, Zhao Song, and Han Liu.On computational limits of modern hopfield models: A fine-grained complexity analysis.In Forty-first International Conference on Machine Learning (ICML), 2024. HWL [21] ↑ Weihua He, Yongyun Wu, and Xiaohua Li.Attention mechanism for neural machine translation: A survey.In 2021 IEEE 5th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), volume 5, pages 1485–1489. IEEE, 2021. HWSL [24] ↑ Jerry Yao-Chieh Hu, Weimin Wu, Zhao Song, and Han Liu.On statistical rates and provably efficient criteria of latent diffusion transformers (dits).arXiv preprint arXiv:2407.01079, 2024. HYW+ [23] ↑ Jerry Yao-Chieh Hu, Donglin Yang, Dennis Wu, Chenwei Xu, Bo-Yu Chen, and Han Liu.On sparse modern hopfield model.In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023. JSWZ [21] ↑ Shunhua Jiang, Zhao Song, Omri Weinstein, and Hengjie Zhang.Faster dynamic matrix inverse for faster lps.In STOC, 2021. KGW+ [23] ↑ John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein.A watermark for large language models.arXiv preprint arXiv:2301.10226, 2023. 
KKM+ [20] ↑ Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh.Scaffold: Stochastic controlled averaging for federated learning.In International Conference on Machine Learning, pages 5132–5143. PMLR, 2020. KNK [21] ↑ Oliver Kroemer, Scott Niekum, and George Konidaris.A review of robot learning for manipulation: Challenges, representations, and algorithms.The Journal of Machine Learning Research, 22(1):1395–1476, 2021. [41] ↑ Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song, and Tianyi Zhou.Fourier circuits in neural networks: Unlocking the potential of large language models in mathematical reasoning and modular arithmetic.arXiv preprint arXiv:2402.09469, 2024. [42] ↑ Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song, and Junwei Yu.Fast john ellipsoid computation with differential privacy optimization.arXiv preprint arXiv:2408.06395, 2024. [43] ↑ Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou.Fine-grained attention i/o complexity: Comprehensive analysis for backward passes, 2024. [44] ↑ Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song, and Yufa Zhou.Beyond linear approximations: A novel pruning approach for attention matrix, 2024. LLSS [24] ↑ Xiaoyu Li, Yingyu Liang, Zhenmei Shi, and Zhao Song.A tighter complexity analysis of sparsegpt.arXiv preprint arXiv:2408.12151, 2024. LOG+ [19] ↑ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov.Roberta: A robustly optimized bert pretraining approach.arXiv preprint arXiv:1907.11692, 2019. [47] ↑ Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song, and Yufa Zhou.Looped relu mlps may be all you need as practical programmable computers, 2024. [48] ↑ Yingyu Liang, Zhizhou Sha, Zhenmei Shi, Zhao Song, and Yufa Zhou.Multi-layer transformers gradient can be approximated in almost linear time.arXiv preprint arXiv:2408.13233, 2024. 
LSSS [24] ↑ Yingyu Liang, Zhizhou Sha, Zhenmei Shi, and Zhao Song.Differential privacy mechanisms in neural tangent kernel regression.arXiv preprint arXiv:2407.13621, 2024. LSSY [24] ↑ Yingyu Liang, Zhenmei Shi, Zhao Song, and Chiwun Yang.Toward infinite-long prefix in transformer.arXiv preprint arXiv:2406.14036, 2024. [51] ↑ Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou.Differential privacy of cross-attention with provable guarantee.arXiv preprint arXiv:2407.14717, 2024. [52] ↑ Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou.Tensor attention training: Provably efficient learning of higher-order transformers.arXiv preprint arXiv:2405.16411, 2024. [53] ↑ Yingyu Liang, Zhenmei Shi, Zhao Song, and Yufa Zhou.Unraveling the smoothness properties of diffusion models: A gaussian mixture perspective.arXiv preprint arXiv:2405.16418, 2024. LSY [24] ↑ Xiaoyu Li, Zhao Song, and Junwei Yu.Quantum speedups for approximating the john ellipsoid.arXiv preprint arXiv:2408.14018, 2024. LSZ [19] ↑ Yin Tat Lee, Zhao Song, and Qiuyi Zhang.Solving empirical risk minimization in the current matrix multiplication time.In Conference on Learning Theory (COLT), pages 2140–2157. PMLR, 2019. LSZ [23] ↑ Zhihang Li, Zhao Song, and Tianyi Zhou.Solving regularized exp, cosh and sinh regression problems.arXiv preprint, 2303.15725, 2023. LTLH [21] ↑ Xuechen Li, Florian Tramer, Percy Liang, and Tatsunori Hashimoto.Large language models can be strong differentially private learners.arXiv preprint arXiv:2110.05679, 2021. Ope [23] ↑ OpenAI.Gpt-4 technical report.arXiv preprint arXiv:2303.08774, 2023. PX [23] ↑ William Peebles and Saining Xie.Scalable diffusion models with transformers.In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195–4205, 2023. QSZZ [23] ↑ Lianke Qin, Zhao Song, Lichen Zhang, and Danyang Zhuo.An online and unified algorithm for projection matrix vector multiplication with application to empirical risk minimization.In AISTATS, 2023. 
RBL+ [22] ↑ Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.High-resolution image synthesis with latent diffusion models.In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022. RF [18] ↑ Joseph Redmon and Ali Farhadi.Yolov3: An incremental improvement.arXiv preprint arXiv:1804.02767, 2018. RNS+ [18] ↑ Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al.Improving language understanding by generative pre-training.2018. RSR+ [20] ↑ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu.Exploring the limits of transfer learning with a unified text-to-text transformer.The Journal of Machine Learning Research, 21(1):5485–5551, 2020. RSY+ [21] ↑ V. Ruehle, R. Sim, Sergey Yekhanin, D. Jones, K. Laine, B. Köpf, J. Teevan, J. Kleewein, and S. Rajmohan.Privacy preserving machine learning: Maintaining confidentiality and preserving trust.2021. RWC+ [19] ↑ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.Language models are unsupervised multitask learners.OpenAI blog, 1(8):9, 2019. Sag [18] ↑ Matthew Sag.The new legal landscape for text mining and machine learning.J. Copyright Soc’y USA, 66:291, 2018. SMN+ [24] ↑ Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, and Shafiq Joty.Discovering the gems in early layers: Accelerating long-context llms with 1000x input token reduction.arXiv preprint arXiv:2409.17422, 2024. Sna [22] ↑ Snapchat.Differential privacy at snapchat, 2022. Son [19] ↑ Zhao Song.Matrix theory: optimization, concentration, and algorithms.The University of Texas at Austin, 2019. SSC+ [22] ↑ Weiyan Shi, Ryan Shea, Si Chen, Chiyuan Zhang, Ruoxi Jia, and Zhou Yu.Just fine-tune twice: Selective differential privacy for large language models.In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6327–6340, 2022. 
SWXL [24] ↑ Zhenmei Shi, Junyi Wei, Zhuoyan Xu, and Yingyu Liang.Why larger language models do in-context learning differently?arXiv preprint arXiv:2405.19592, 2024. SYYZ [23] ↑ Zhao Song, Xin Yang, Yuanyuan Yang, and Lichen Zhang.Sketching meets differential privacy: Fast algorithm for dynamic kronecker projection maintenance.In ICML, 2023. TM [22] ↑ Abhradeep Thakurta and Brendan McMahan.Federated learning with formal differential privacy guarantees.2022. TPG+ [17] ↑ Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel.The space of transferable adversarial examples.arXiv preprint arXiv:1704.03453, 2017. UAS+ [20] ↑ Mohd Usama, Belal Ahmad, Enmin Song, M Shamim Hossain, Mubarak Alrashoud, and Ghulam Muhammad.Attention-based sentiment analysis using convolutional and recurrent neural network.Future Generation Computer Systems, 113:571–578, 2020. Vad [17] ↑ Salil Vadhan.The complexity of differential privacy.Tutorials on the Foundations of Cryptography: Dedicated to Oded Goldreich, pages 347–450, 2017. Ver [18] ↑ Roman Vershynin.High-dimensional probability: An introduction with applications in data science, volume 47.Cambridge university press, 2018. VKB [23] ↑ Nikhil Vyas, Sham Kakade, and Boaz Barak.Provable copyright protection for generative models.arXiv preprint arXiv:2302.10870, 2023. VSP+ [17] ↑ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin.Attention is all you need.Advances in neural information processing systems, 30, 2017. Wai [19] ↑ Martin J Wainwright.High-dimensional statistics: A non-asymptotic viewpoint, volume 48.Cambridge university press, 2019. WCZ+ [23] ↑ Yilin Wang, Zeyuan Chen, Liangjun Zhong, Zheng Ding, Zhizhou Sha, and Zhuowen Tu.Dolfin: Diffusion layout transformers without autoencoder.arXiv preprint arXiv:2310.16305, 2023. 
WHHL [24] ↑ Dennis Wu, Jerry Yao-Chieh Hu, Teng-Yun Hsiao, and Han Liu.Uniform memory retrieval with larger capacity for modern hopfield models.In Forty-first International Conference on Machine Learning (ICML), 2024. WHL+ [24] ↑ Dennis Wu, Jerry Yao-Chieh Hu, Weijian Li, Bo-Yu Chen, and Han Liu.STanhop: Sparse tandem hopfield model for memory-enhanced time series prediction.In The Twelfth International Conference on Learning Representations (ICLR), 2024. WMS+ [24] ↑ Jiayu Wang, Yifei Ming, Zhenmei Shi, Vibhav Vineet, Xin Wang, and Neel Joshi.Is a picture worth a thousand words? delving into spatial reasoning for vision language models.arXiv preprint arXiv:2406.14852, 2024. WSD+ [23] ↑ Zirui Wang, Zhizhou Sha, Zheng Ding, Yilin Wang, and Zhuowen Tu.Tokencompose: Grounding diffusion with token-level supervision.arXiv preprint arXiv:2312.03626, 2023. WXZ+ [24] ↑ Yilin Wang, Haiyang Xu, Xiang Zhang, Zeyuan Chen, Zhizhou Sha, Zirui Wang, and Zhuowen Tu.Omnicontrolnet: Dual-stage integration for conditional image generation.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7436–7448, 2024. XGH+ [21] ↑ Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, and Luke Zettlemoyer.Vlm: Task-agnostic video-language model pre-training for video understanding.arXiv preprint arXiv:2105.09996, 2021. XHH+ [24] ↑ Chenwei Xu, Yu-Chao Huang, Jerry Yao-Chieh Hu, Weijian Li, Ammar Gilani, Hsi-Sheng Goan, and Han Liu.Bishop: Bi-directional cellular learning for tabular data with generalized sparse modern hopfield model.In Forty-first International Conference on Machine Learning (ICML), 2024. XSL [24] ↑ Zhuoyan Xu, Zhenmei Shi, and Yingyu Liang.Do large language models have compositional ability? an investigation into limitations and scalability.In ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2024. 
XSW+ [23] ↑ Zhuoyan Xu, Zhenmei Shi, Junyi Wei, Fangzhou Mu, Yin Li, and Yingyu Liang.Towards few-shot adaptation of foundation models via multitask finetuning.In The Twelfth International Conference on Learning Representations, 2023. XZA+ [23] ↑ Zheng Xu, Yanxiang Zhang, Galen Andrew, Christopher A Choquette-Choo, Peter Kairouz, H Brendan McMahan, Jesse Rosenstock, and Yuanbo Zhang.Federated learning of gboard language models with differential privacy.arXiv preprint arXiv:2305.18465, 2023. YDY+ [19] ↑ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le.Xlnet: Generalized autoregressive pretraining for language understanding.Advances in neural information processing systems, 32, 2019. YNB+ [21] ↑ Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, et al.Differentially private fine-tuning of language models.arXiv preprint arXiv:2110.06500, 2021. ZHDK [23] ↑ Amir Zandieh, Insu Han, Majid Daliri, and Amin Karbasi.Kdeformer: Accelerating transformers via kernel density estimation.arXiv preprint arXiv:2302.02451, 2023. ZHJL [24] ↑ Jingyi Zhang, Jiaxing Huang, Sheng Jin, and Shijian Lu.Vision-language models for vision tasks: A survey.IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. ZTL+ [17] ↑ Jingwei Zhang, Lei Tai, Ming Liu, Joschka Boedecker, and Wolfram Burgard.Neural slam: Learning to explore with external memory.arXiv preprint arXiv:1706.09520, 2017.
Appendix
Roadmap.
In Section A, we present preliminaries for the paper. In Section B, we analyze the perturbations in attention computation. Section C presents the proof that our Gaussian sampling mechanism is differentially private. In Section D, we provide additional lemmas about sensitivity.
Appendix A PRELIMINARY
Section A.1 presents the notations that are used throughout our paper. These notations are essential for a clear and concise presentation of our work. In Section A.2, we provide an introduction to some basic algebraic concepts that are relevant to our research. This includes fundamental mathematical operations and properties that are used in the analysis and development of our differential privacy algorithm.
A.1 Notations
For an event C, Pr[C] represents the probability of event C occurring. 𝔼[X] represents the expected value (or mean) of a random variable X.
We use 𝜒 𝑑 2 to denote a Chi-squared random variable with 𝑑 degrees of freedom. ℕ represents the set of natural numbers, which consists of all positive integers including 1, 2, 3, and so on.
If 𝑀 and 𝑁 are symmetric matrices, we define 𝑀 ⪰ 𝑁 to mean that for all vectors 𝑥 , the inequality 𝑥 ⊤ 𝑀 𝑥 ≥ 𝑥 ⊤ 𝑁 𝑥 holds.
If 𝑀 is a symmetric matrix of dimension 𝑛 × 𝑛 , we define 𝑀 to be positive semidefinite ( 𝑀 ⪰ 0 ) if the inequality 𝑥 ⊤ 𝑀 𝑥 ≥ 0 holds for all vectors 𝑥 ∈ ℝ 𝑛 .
We use the notation 𝟎 𝑛 to denote an 𝑛 -dimensional vector whose entries are all zero, and 𝟏 𝑛 to denote an 𝑛 -dimensional vector whose entries are all one. The symbol 𝐼 𝑛 represents the 𝑛 × 𝑛 identity matrix, which is a square matrix with ones on the main diagonal and zeros elsewhere.
Let x be an arbitrary vector in ℝⁿ. We define exp(x) ∈ ℝⁿ as the vector whose i-th entry exp(x)_i is equal to exp(x_i), where exp(·) denotes the exponential function. We use ⟨x, y⟩ to denote ∑_{i=1}^n x_i y_i.
For any matrix A, we use ‖A‖ to denote the spectral norm of A, i.e., ‖A‖ = max_{‖x‖₂ = 1} ‖Ax‖₂; ‖A‖_F to denote its Frobenius norm; and ‖A‖_∞ to denote the entrywise infinity norm max_{(i,j)} |A_{i,j}|. A_{i,j} represents the element in the i-th row and j-th column of matrix A. det(A) represents the determinant of matrix A.
We denote the inverse of a matrix M as M^{−1} and its transpose as M^⊤. For a matrix N, we refer to λ_i as its i-th eigenvalue.
𝕊 + 𝑛 denotes the set of 𝑛 × 𝑛 positive semidefinite (PSD) matrices.
A.2 Basic Algebra
In this section, we offer an introduction to fundamental algebraic concepts.
Fact A.1.
We have
•
Part 1. Let A ∈ ℝ^{n×n}, then we have ‖A‖_F ≤ √n · ‖A‖.
•
Part 2. Let A ∈ ℝ^{n×n}, then we have ‖A‖ ≤ ‖A‖_F.
•
Part 3. For two vectors a, b ∈ ℝⁿ, we have ‖ab^⊤‖ ≤ ‖a‖₂ · ‖b‖₂.
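A quick numerical check of Fact A.1 on random matrices and vectors (the small tolerances are only floating-point slack):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 7
A = rng.standard_normal((n, n))
a = rng.standard_normal(n)
b = rng.standard_normal(n)

spec = np.linalg.norm(A, 2)    # spectral norm
fro = np.linalg.norm(A, "fro")  # Frobenius norm

part1 = fro <= np.sqrt(n) * spec + 1e-12  # ||A||_F <= sqrt(n) ||A||
part2 = spec <= fro + 1e-12               # ||A|| <= ||A||_F
part3 = np.linalg.norm(np.outer(a, b), 2) <= np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
print(part1, part2, part3)
```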
Fact A.2.
We have
•
Part 1. cosh(x) = ∑_{i=0}^∞ (1/(2i)!) · x^{2i}.
•
Part 2. exp(x) = ∑_{i=0}^∞ (1/i!) · x^{i}.
•
Part 3. We have | exp ( 𝑥 ) − 1 | ≤ | 𝑥 | + 𝑥 2 , ∀ 𝑥 ∈ ( − 0.1 , 0.1 ) .
•
Part 4. | exp ( 𝑥 ) − exp ( 𝑦 ) | ≤ exp ( 𝑥 ) ⋅ ( | 𝑥 − 𝑦 | + | 𝑥 − 𝑦 | 2 ) for | 𝑥 − 𝑦 | ≤ 0.1 .
•
Part 5. We have | cosh ( 𝑥 ) − 1 | ≤ 𝑥 2 , ∀ 𝑥 ∈ ( − 0.1 , 0.1 ) .
•
Part 6. | cosh ( 𝑥 ) − cosh ( 𝑦 ) | ≤ cosh ( 𝑥 ) ⋅ | 𝑥 − 𝑦 | 2 for | 𝑥 − 𝑦 | ≤ 0.1 .
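Parts 3-5 can be checked numerically on a grid (Parts 1 and 2 are the standard Taylor series; the grid below is an illustration over the stated ranges, not a proof):

```python
import math

# Grid of points in (-0.1, 0.1).
xs = [i / 1000 for i in range(-99, 100)]

# Part 3: |exp(x) - 1| <= |x| + x^2 on (-0.1, 0.1).
part3 = all(abs(math.exp(x) - 1) <= abs(x) + x * x for x in xs)

# Part 5: |cosh(x) - 1| <= x^2 on (-0.1, 0.1).
part5 = all(abs(math.cosh(x) - 1) <= x * x for x in xs)

# Part 4: |exp(x) - exp(y)| <= exp(x) (|x-y| + |x-y|^2) for |x-y| <= 0.1.
part4 = all(
    abs(math.exp(x) - math.exp(y)) <= math.exp(x) * (abs(x - y) + (x - y) ** 2)
    for x in xs for y in (x - 0.05, x + 0.05)
)
print(part3, part4, part5)
```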
Appendix B ERROR CONTROL FROM LOGIT MATRIX TO ATTENTION MATRIX
In Section B.1, we discuss the perturbation of positive semi-definite (psd) matrices, which is a crucial step in ensuring the differential privacy of our algorithm. Section B.2 focuses on the perturbation of diagonal normalization matrices, which is another important aspect of our error control approach. In Section B.3, we analyze the error in the attention matrix computation that arises from these perturbations. Finally, in Section B.4, we present the main result of Section B, which summarizes the effectiveness of our error control mechanisms in achieving differential privacy for the computation of the attention matrix.
B.1 Perturb PSD Matrix
In Section B.1, we discuss the perturbation of positive semi-definite (psd) matrices. This is a crucial step in ensuring the differential privacy of our algorithm.
Lemma B.1 (Lemma 3.1 in [21]).
Let A ∈ ℝ^{n×n} and B ∈ ℝ^{n×n} be PSD matrices.
If all of the following requirements are met
•
Requirement 1. We have − 𝑟 ≤ 𝐴 𝑖 , 𝑗 ≤ 𝑟 , ∀ ( 𝑖 , 𝑗 ) ∈ [ 𝑛 ] × [ 𝑛 ] .
•
Requirement 2. ( 1 − 𝜖 ) 𝐵 ⪯ 𝐴 ⪯ ( 1 + 𝜖 ) 𝐵 ;
Then, it follows that
𝐵 𝑖 , 𝑗 ∈ [ − ( 1 + 𝜖 ) 𝑟 , ( 1 + 𝜖 ) 𝑟 ] .
Lemma B.2 (A general version of Lemma 3.2 in [21]).
If all of the following requirements are met
•
Requirement 1. 𝐴 𝑖 , 𝑗 ∈ [ − 𝑟 , 𝑟 ] .
•
Requirement 2. 𝐵 𝑖 , 𝑗 ∈ [ − ( 1 + 𝜖 ) 𝑟 , ( 1 + 𝜖 ) 𝑟 ] .
•
Requirement 3. 𝑟 ∈ ( 0 , 0.1 ) , 𝜖 ∈ ( 0 , 0.1 ) .
•
Requirement 4. Let 𝑓 ( 𝑧 ) ∈ { exp ( 𝑧 ) , cosh ( 𝑧 ) } .
It follows that
•
Part 1.
| f(A_{i,j}) − f(B_{i,j}) | ≤ f(A_{i,j}) · (2 + 2ε + 4r) · r, ∀ (i, j) ∈ [n] × [n].
•
Part 2.
| f(A_{i,j}) − f(B_{i,j}) | ≤ f(B_{i,j}) · (2 + 2ε + 4r) · r, ∀ (i, j) ∈ [n] × [n].
Proof.
According to Requirements 1, 2 and 3, we have
| A_{i,j} − B_{i,j} | ≤ | A_{i,j} | + | B_{i,j} | ≤ (2 + ε) r.   (4)
Proof of Part 1.
It follows that
| f(A_{i,j}) − f(B_{i,j}) | ≤ f(A_{i,j}) · ( |A_{i,j} − B_{i,j}| + |A_{i,j} − B_{i,j}|² )
= f(A_{i,j}) · |A_{i,j} − B_{i,j}| · ( 1 + |A_{i,j} − B_{i,j}| )
≤ f(A_{i,j}) · |A_{i,j} − B_{i,j}| · ( 1 + (2 + ε) r )
≤ f(A_{i,j}) · (2 + ε) r · ( 1 + (2 + ε) r )
= f(A_{i,j}) · ( 2 + ε + (2 + ε)² r ) · r
≤ f(A_{i,j}) · ( 2 + 2ε + 4r ) · r,
where the 1st step is the result of Fact A.2 (Part 4), the 2nd step follows from straightforward algebraic manipulations, the 3rd and 4th steps are consequences of Eq. (4), the 5th step follows from algebraic manipulations, and the 6th step uses Requirement 3 in the lemma statement.
Proof of Part 2.
The proof is analogous, applying Fact A.2 (Part 4) with the roles of A_{i,j} and B_{i,j} swapped.
∎
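A numerical check of Lemma B.2's conclusion. The entries a, b below are sampled from the ranges given by Requirements 1 and 2; the parameters r and ε are arbitrary values in (0, 0.1):

```python
import math
import random

random.seed(3)
r, eps = 0.05, 0.05
bound = (2 + 2 * eps + 4 * r) * r

checks = []
for _ in range(1000):
    a = random.uniform(-r, r)                          # A_{i,j} in [-r, r]
    b = random.uniform(-(1 + eps) * r, (1 + eps) * r)  # B_{i,j} per Requirement 2
    for f in (math.exp, math.cosh):
        checks.append(abs(f(a) - f(b)) <= f(a) * bound)  # Part 1
        checks.append(abs(f(a) - f(b)) <= f(b) * bound)  # Part 2

ok = all(checks)
print(ok)
```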
B.2 Error Control for Normalization
This section focuses on the perturbation of diagonal normalization matrices, which is another important aspect of our error control approach.
Lemma B.3 (Error Control for Normalization, a general version of Lemma 3.3 in [21]).
If the following condition holds
•
Requirement 1. We define 𝑓 as Definition 5.1.
•
Requirement 2. We define 𝖣 as Definition 5.2.
•
Requirement 3. ∀ ( 𝑖 , 𝑗 ) ∈ [ 𝑛 ] × [ 𝑛 ] , we have | 𝑓 ( 𝐴 𝑖 , 𝑗 ) − 𝑓 ( 𝐵 𝑖 , 𝑗 ) | ≤ 𝑓 ( 𝐴 𝑖 , 𝑗 ) ⋅ 𝑐 0 𝑟 .
•
Requirement 4. ∀ (i, j) ∈ [n] × [n], we have | f(A_{i,j}) − f(B_{i,j}) | ≤ f(B_{i,j}) · c_0 r.
Then, it follows that,
•
Part 1.
| 𝖣 ( 𝐴 ) 𝑖 , 𝑖 − 𝖣 ( 𝐵 ) 𝑖 , 𝑖 | ≤ 𝖣 ( 𝐴 ) 𝑖 , 𝑖 ⋅ 𝑐 0 𝑟 ∀ 𝑖 ∈ [ 𝑛 ]
•
Part 2.
| 𝖣 ( 𝐴 ) 𝑖 , 𝑖 − 𝖣 ( 𝐵 ) 𝑖 , 𝑖 | ≤ 𝖣 ( 𝐵 ) 𝑖 , 𝑖 ⋅ 𝑐 0 𝑟 ∀ 𝑖 ∈ [ 𝑛 ]
Proof.
Proof of Part 1. From the above conditions in the lemma statement, it follows that
| D(A)_{i,i} − D(B)_{i,i} | = | ( f(A_{i,∗}) − f(B_{i,∗}) ) · 1_n |
= | ∑_{j=1}^n ( f(A_{i,j}) − f(B_{i,j}) ) |
≤ ∑_{j=1}^n | f(A_{i,j}) − f(B_{i,j}) |
≤ ∑_{j=1}^n f(A_{i,j}) · c_0 r
= f(A_{i,∗}) 1_n · c_0 r
= D(A)_{i,i} · c_0 r,
where the 1st step follows from the definition of D, the 2nd step expands the inner product, the 3rd step is the triangle inequality, the 4th step uses Requirement 3 in the lemma statement, the 5th step collects the sum back into f(A_{i,∗}) 1_n, and the last step follows from the definition of D.
Proof of Part 2.
The proof is similar to Part 1. So we omit the details here.
∎
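A numerical check of Lemma B.3: the sketch below builds hypothetical positive matrices f(A), f(B) whose entries satisfy Requirements 3 and 4 by construction (each entry of f(B) is a relative perturbation of the matching entry of f(A)), and confirms that the row sums, i.e., the diagonal of D, inherit the same relative error:

```python
import numpy as np

rng = np.random.default_rng(4)
n, c0, r = 6, 2.2, 0.05

# Hypothetical positive matrices: each fB entry is within a factor of fA's entry
# chosen so that both |fA - fB| <= c0*r*fA and |fA - fB| <= c0*r*fB hold.
fA = np.exp(rng.uniform(-0.1, 0.1, (n, n)))
fB = fA * rng.uniform(1 / (1 + c0 * r), 1 + c0 * r, (n, n))

DA = fA.sum(axis=1)  # diagonal of D(A)
DB = fB.sum(axis=1)  # diagonal of D(B)

part1 = np.all(np.abs(DA - DB) <= DA * c0 * r + 1e-12)
part2 = np.all(np.abs(DA - DB) <= DB * c0 * r + 1e-12)
print(part1, part2)
```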
B.3 Error of Attention Matrix
In this section, we analyze the error in the attention matrix computation that arises from the perturbations of psd and diagonal normalization matrices.
Lemma B.4 (A general version of Lemma 3.4 in [21]).
Let c_1 > 0 and c_2 > 0. If all of the following requirements are met
•
Requirement 1. We define 𝑓 as Definition 5.1.
•
Requirement 2. We define 𝖣 as Definition 5.2.
•
Requirement 3.
| 𝖣 ( 𝐴 ) 𝑖 , 𝑖 − 𝖣 ( 𝐵 ) 𝑖 , 𝑖 | ≤ 𝑐 1 ⋅ 𝑟 ⋅ min { 𝖣 ( 𝐴 ) 𝑖 , 𝑖 , 𝖣 ( 𝐵 ) 𝑖 , 𝑖 } ∀ 𝑖 ∈ [ 𝑛 ] ,
•
Requirement 4.
| 𝑓 ( 𝐴 𝑖 , 𝑗 ) − 𝑓 ( 𝐵 𝑖 , 𝑗 ) | ≤ 𝑐 2 ⋅ 𝑟 ⋅ min { 𝑓 ( 𝐴 𝑖 , 𝑗 ) , 𝑓 ( 𝐵 𝑖 , 𝑗 ) } ∀ 𝑖 , 𝑗 ∈ [ 𝑛 ] × [ 𝑛 ]
It follows that
∥ 𝖣 ( 𝐴 ) − 1 𝑓 ( 𝐴 ) − 𝖣 ( 𝐵 ) − 1 𝑓 ( 𝐵 ) ∥ ∞ ≤ ( 𝑐 1 + 𝑐 2 ) ⋅ 𝑟 .
Proof.
We first decompose the difference via the triangle inequality:
‖ D(A)^{−1} f(A) − D(B)^{−1} f(B) ‖_∞ ≤ ‖ D(A)^{−1} f(A) − D(A)^{−1} f(B) ‖_∞ + ‖ D(A)^{−1} f(B) − D(B)^{−1} f(B) ‖_∞
= Z_1 + Z_2,
where the last step is obtained by defining
Z_1 := ‖ D(A)^{−1} f(A) − D(A)^{−1} f(B) ‖_∞
and
Z_2 := ‖ D(A)^{−1} f(B) − D(B)^{−1} f(B) ‖_∞.
We will present the proof in two parts.
The first term.
∀ (i, j) ∈ [n] × [n], it follows that
| ( D(A)^{−1} f(A) − D(A)^{−1} f(B) )_{i,j} | = | D(A)_{i,i}^{−1} · ( f(A)_{i,j} − f(B)_{i,j} ) |
= D(A)_{i,i}^{−1} · | f(A)_{i,j} − f(B)_{i,j} |
≤ D(A)_{i,i}^{−1} · c_2 · r · f(A)_{i,j}
= c_2 r · ( D(A)^{−1} f(A) )_{i,j}
≤ c_2 r,
where the 1st step follows from the definition of D, the 2nd step pulls the positive factor D(A)_{i,i}^{−1} out of the absolute value, the 3rd step uses Requirement 4 in the lemma statement, the 4th step is an algebraic manipulation, and the last step holds because every entry of D(A)^{−1} f(A) lies in [0, 1] by the definition of D. Taking the maximum over (i, j) gives Z_1 ≤ c_2 r.
The second term.
∀ (i, j) ∈ [n] × [n], it follows that
| ( D(A)^{−1} f(B) − D(B)^{−1} f(B) )_{i,j} | = | ( D(A)_{i,i}^{−1} − D(B)_{i,i}^{−1} ) · f(B)_{i,j} |
= | ( D(B)_{i,i} − D(A)_{i,i} ) / ( D(A)_{i,i} D(B)_{i,i} ) | · | f(B)_{i,j} |
≤ ( c_1 r D(A)_{i,i} / ( D(A)_{i,i} D(B)_{i,i} ) ) · | f(B)_{i,j} |
= c_1 r · | D(B)_{i,i}^{−1} | · | f(B)_{i,j} |,
where the 1st step follows from the definition, the 2nd step rewrites the difference of inverses over a common denominator, the 3rd step uses Requirement 3 in the lemma statement together with min{ D(A)_{i,i}, D(B)_{i,i} } ≤ D(A)_{i,i}, and the last step is an algebraic manipulation.
Then, since every entry of D(B)^{−1} f(B) is positive and lies in [0, 1], we have
c_1 r · | D(B)_{i,i}^{−1} | · | f(B)_{i,j} | = c_1 r · ( D(B)^{−1} f(B) )_{i,j} ≤ c_1 r,
and taking the maximum over (i, j) gives Z_2 ≤ c_1 r.
Based on the above deduction, it follows that
‖ D(A)^{−1} f(A) − D(B)^{−1} f(B) ‖_∞ ≤ Z_1 + Z_2 ≤ ( c_1 + c_2 ) r.
Thus we complete the proof. ∎
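A numerical check of Lemma B.4 with a hypothetical pair f(A), f(B) constructed so that Requirements 3 and 4 hold (the constants c₁, c₂, r here are arbitrary choices, not derived from attention inputs):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 6, 0.05
c1 = c2 = 2.2

fA = np.exp(rng.uniform(-r, r, (n, n)))
# Each fB entry is a relative perturbation of fA's entry, chosen so that
# |fA - fB| <= c2*r*min(fA, fB) entrywise (Requirement 4).
fB = fA * rng.uniform(1 / (1 + c2 * r), 1 + c2 * r, (n, n))

DA = fA.sum(axis=1, keepdims=True)
DB = fB.sum(axis=1, keepdims=True)

# Requirements 3 and 4 hold by construction (entrywise, hence for row sums).
req3 = np.all(np.abs(DA - DB) <= c1 * r * np.minimum(DA, DB))
req4 = np.all(np.abs(fA - fB) <= c2 * r * np.minimum(fA, fB))

diff = np.abs(fA / DA - fB / DB).max()  # || D(A)^{-1} f(A) - D(B)^{-1} f(B) ||_inf
ok = diff <= (c1 + c2) * r + 1e-12
print(req3, req4, ok)
```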
B.4 Error Control
The main result of Section B is presented in this section.
Theorem B.5 (Formal version of Theorem 5.3).
If all of the following requirements are met
•
Let 𝜖 ∈ ( 0 , 0.1 )
•
Let 𝑟 ∈ ( 0 , 0.1 )
•
‖ 𝐴 ‖ ∞ ≤ 𝑟
•
( 1 − 𝜖 ) 𝐵 ⪯ 𝐴 ⪯ ( 1 + 𝜖 ) 𝐵
•
We define 𝖣 as Definition 5.2.
•
We define 𝑓 as Definition 5.1.
It follows that
∥ 𝖣 ( 𝐴 ) − 1 𝑓 ( 𝐴 ) − 𝖣 ( 𝐵 ) − 1 𝑓 ( 𝐵 ) ∥ ∞ ≤ 4 ⋅ ( 1 + 𝜖 + 2 𝑟 ) ⋅ 𝑟
Proof of Theorem 5.3.
By Lemma B.1 and ( 1 − 𝜖 ) 𝐵 ⪯ 𝐴 ⪯ ( 1 + 𝜖 ) 𝐵 , we have
B_{i,j} ∈ [ −(1 + ε) r, (1 + ε) r ].   (5)
By Lemma B.2 and Eq. (5), it follows that
•
Part 1.
| 𝑓 ( 𝐴 𝑖 , 𝑗 ) − 𝑓 ( 𝐵 𝑖 , 𝑗 ) | ≤ 𝑓 ( 𝐴 𝑖 , 𝑗 ) ⋅ ( 2 + 2 𝜖 + 4 𝑟 ) ⋅ 𝑟 ∀ ( 𝑖 , 𝑗 ) ∈ [ 𝑛 ] × [ 𝑛 ] .
•
Part 2.
| 𝑓 ( 𝐴 𝑖 , 𝑗 ) − 𝑓 ( 𝐵 𝑖 , 𝑗 ) | ≤ 𝑓 ( 𝐵 𝑖 , 𝑗 ) ⋅ ( 2 + 2 𝜖 + 4 𝑟 ) ⋅ 𝑟 ∀ ( 𝑖 , 𝑗 ) ∈ [ 𝑛 ] × [ 𝑛 ] .
According to the discussion above and using Lemma B.3 with c_0 = 2 + 2ε + 4r, we have
•
Part 1.
| 𝖣 ( 𝐴 ) 𝑖 , 𝑖 − 𝖣 ( 𝐵 ) 𝑖 , 𝑖 | ≤ 𝖣 ( 𝐴 ) 𝑖 , 𝑖 ⋅ 𝑐 0 𝑟 ∀ 𝑖 ∈ [ 𝑛 ]
•
Part 2.
| 𝖣 ( 𝐴 ) 𝑖 , 𝑖 − 𝖣 ( 𝐵 ) 𝑖 , 𝑖 | ≤ 𝖣 ( 𝐵 ) 𝑖 , 𝑖 ⋅ 𝑐 0 𝑟 ∀ 𝑖 ∈ [ 𝑛 ]
Then, by using Lemma B.4 with c_1 = 2 + 2ε + 4r and c_2 = 2 + 2ε + 4r, we have
∥ 𝖣 ( 𝐴 ) − 1 𝑓 ( 𝐴 ) − 𝖣 ( 𝐵 ) − 1 𝑓 ( 𝐵 ) ∥ ∞ ≤ 4 ⋅ ( 1 + 𝜖 + 2 𝑟 ) ⋅ 𝑟
∎
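Theorem B.5 can be checked end to end. The sketch below constructs a PSD pair with (1 − ε)B ⪯ A ⪯ (1 + ε)B by design, takes f to be the entrywise exponential and D the row-sum normalization (as in Definitions 5.1 and 5.2), and compares the normalized matrices against the stated bound:

```python
import numpy as np

rng = np.random.default_rng(6)
n, eps = 5, 0.05

# PSD B with small entries, then A = B^{1/2} S B^{1/2} with ||S - I|| = eps,
# which gives (1 - eps) B <= A <= (1 + eps) B exactly.
G = rng.standard_normal((n, n)) * 0.05
B = G @ G.T / n
w, V = np.linalg.eigh(B)
B_half = V @ np.diag(np.sqrt(np.maximum(w, 0))) @ V.T

E = rng.standard_normal((n, n))
E = (E + E.T) / 2
S = np.eye(n) + eps * E / np.linalg.norm(E, 2)
A = B_half @ S @ B_half

r = np.abs(A).max()  # ||A||_inf <= r, tiny by construction
fA, fB = np.exp(A), np.exp(B)
attA = fA / fA.sum(axis=1, keepdims=True)
attB = fB / fB.sum(axis=1, keepdims=True)

diff = np.abs(attA - attB).max()
bound = 4 * (1 + eps + 2 * r) * r
print(diff, bound, diff <= bound)
```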
Appendix C ANALYSIS OF GAUSSIAN SAMPLING MECHANISM
We denote the output of our privacy algorithm as Z. In Section C.1, we present the computational tools that we use to implement our approach. In Section C.2, we perform a spectral decomposition of A := ℳ(𝒴)^{1/2} ℳ(𝒴′)^{−1} ℳ(𝒴)^{1/2} and derive some important conclusions from it. Then, in Section C.3, we transform Z into a form based on the spectral decomposition of A. In Section C.4, we present an upper bound on 𝔼[Z], which is useful in the following section. In Section C.5, we demonstrate that Z is sub-exponential, which allows us to control the upper bound of Pr[Z ≥ ε] for ε ∈ (0, 1). Finally, we present our main result in Section C.6: our Algorithm 2 is differentially private.
C.1 Computation Tools
This section is dedicated to presenting the computational tools that we use to implement our approach.
Definition C.1.
We define Σ_1, Σ_2 as in Definition 5.7. Let us define
•
A := Σ_1^{1/2} Σ_2^{-1} Σ_1^{1/2},
•
B := Σ_2^{1/2} Σ_1^{-1} Σ_2^{1/2},
•
C := Σ_1^{-1/2} Σ_2^{1/2}.
Lemma C.2.
Let A, B and C be defined as in Definition C.1. Then we have
•
Part 1. A^{-1} = C C^⊤.
•
Part 2. B = C^⊤ C.
•
Part 3. A^{-1} and B have the same eigenvalues.
Proof.
Since Σ_1 and Σ_2 are symmetric, the proof follows directly.
Proof of Part 1.
A^{-1} = (Σ_1^{1/2} Σ_2^{-1} Σ_1^{1/2})^{-1}
= (Σ_1^{1/2} Σ_2^{-1/2} Σ_2^{-1/2} Σ_1^{1/2})^{-1}
= (Σ_2^{-1/2} Σ_1^{1/2})^{-1} (Σ_1^{1/2} Σ_2^{-1/2})^{-1}
= (Σ_1^{-1/2} Σ_2^{1/2}) (Σ_2^{1/2} Σ_1^{-1/2})
= C C^⊤. (6)
Proof of Part 2.
B = Σ_2^{1/2} Σ_1^{-1} Σ_2^{1/2}
= (Σ_2^{1/2} Σ_1^{-1/2}) (Σ_1^{-1/2} Σ_2^{1/2})
= C^⊤ C. (7)
Proof of Part 3.
Since C is invertible, C C^⊤ and C^⊤ C have the same eigenvalues, so the claim follows from Eq. (6) and Eq. (7). ∎
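As a sanity check, the identities of Lemma C.2 can be verified numerically. The sketch below is an illustration under stated assumptions, not part of the paper's algorithm: it assumes numpy, uses random well-conditioned positive definite matrices as stand-ins for Σ_1, Σ_2, and writes B in its symmetric form B = Σ_2^{1/2} Σ_1^{-1} Σ_2^{1/2}.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_spd(n):
    # Random symmetric positive definite matrix (well conditioned).
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def mat_power(S, p):
    # p-th power of a symmetric positive definite matrix via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** p) @ V.T

Sigma1, Sigma2 = random_spd(n), random_spd(n)
A = mat_power(Sigma1, 0.5) @ np.linalg.inv(Sigma2) @ mat_power(Sigma1, 0.5)
B = mat_power(Sigma2, 0.5) @ np.linalg.inv(Sigma1) @ mat_power(Sigma2, 0.5)
C = mat_power(Sigma1, -0.5) @ mat_power(Sigma2, 0.5)

assert np.allclose(np.linalg.inv(A), C @ C.T)        # Part 1
assert np.allclose(B, C.T @ C)                       # Part 2
assert np.allclose(np.sort(np.linalg.eigvalsh(np.linalg.inv(A))),
                   np.sort(np.linalg.eigvalsh(B)))   # Part 3
```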
C.2 Spectral Decomposition
This section is focused on the spectral decomposition of 𝐴 , which we perform to gain insights into its properties. By analyzing the spectral decomposition, we are able to draw important conclusions about 𝐴 that are relevant to our approach.
Lemma C.3.
If all of the following requirements are met
•
Requirement 1. We define A as in Definition C.1.
•
Requirement 2. Let λ_1, ⋯, λ_n be the eigenvalues of A.
•
Requirement 3. Let A = ∑_{j=1}^n λ_j v_j v_j^⊤ be the spectral decomposition of A.
•
Requirement 4. Let Δ be defined as in Definition 5.5.
•
Requirement 5. Let M, ℳ be defined as in Definition 5.4 and M ≤ Δ.
We have
•
∑_{j=1}^n (λ_j − 1)^2 ≤ Δ^2.
•
∑_{j=1}^n (1 − 1/λ_j)^2 ≤ Δ^2.
Proof.
We have
∑_{j=1}^n (λ_j − 1)^2 = ‖A − I‖_F^2 ≤ Δ^2,
where the first step is based on Requirement 3 in the lemma statement and the last step is due to Requirement 5.
Similarly, we have
∑_{j=1}^n (1 − 1/λ_j)^2 = ‖I − A^{-1}‖_F^2 = ‖I − B‖_F^2 ≤ Δ^2,
where the first step is due to Requirement 3, the second step follows from Lemma C.2 (A^{-1} and B have the same eigenvalues), and the last step follows from Requirement 5 with the roles of 𝒴 and 𝒴′ swapped. ∎
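The key step above, that the eigenvalue sums equal squared Frobenius norms, can be checked numerically. A minimal sketch assuming numpy, with a random symmetric positive definite matrix standing in for A:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)   # symmetric positive definite stand-in for A
lam = np.linalg.eigvalsh(A)

# First claim's key step: sum_j (lambda_j - 1)^2 = ||A - I||_F^2.
assert np.isclose(np.sum((lam - 1.0) ** 2),
                  np.linalg.norm(A - np.eye(n), "fro") ** 2)
# Second claim's key step: sum_j (1 - 1/lambda_j)^2 = ||I - A^{-1}||_F^2.
assert np.isclose(np.sum((1.0 - 1.0 / lam) ** 2),
                  np.linalg.norm(np.eye(n) - np.linalg.inv(A), "fro") ** 2)
```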
C.3 The Transformation for Output
In Section C.3, we describe the process of transforming the output 𝑍 of our privacy algorithm into a format that is based on the spectral decomposition of 𝐴 .
Lemma C.4.
If all of the following requirements are met
•
Requirement 1. We define Z and h_{i,j} as in Definition 5.8.
•
Requirement 2. Let A be defined as in Definition C.1.
•
Requirement 3. Let λ_1, ⋯, λ_n denote the eigenvalues of A.
Then we have
Z = (1/2) ∑_{i=1}^k ∑_{j=1}^n ((λ_j − 1) h_{i,j}^2 − log(λ_j)).
Proof.
The privacy loss random variable Z can be expressed as follows:
Z = ∑_{i=1}^k log( (det(Σ_1)^{-1/2} exp(−(1/2) g_i^⊤ Σ_1^{-1} g_i)) / (det(Σ_2)^{-1/2} exp(−(1/2) g_i^⊤ Σ_2^{-1} g_i)) )
= ∑_{i=1}^k ( (1/2) g_i^⊤ (Σ_2^{-1} − Σ_1^{-1}) g_i − (1/2) log(det(Σ_1)/det(Σ_2)) )
= (1/2) ∑_{i=1}^k ( (Σ_1^{-1/2} g_i)^⊤ (A − I) (Σ_1^{-1/2} g_i) − log det(A) )
= (1/2) ∑_{i=1}^k ∑_{j=1}^n ((λ_j − 1) h_{i,j}^2 − log(λ_j)),
where the first step is based on Requirement 1 in the lemma statement, the second step follows from rearranging the terms, the third step is based on Requirement 2 in the lemma statement, and the last step follows by taking the spectral decomposition of A. ∎
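The chain of equalities above can be confirmed numerically: the direct log-density-ratio form of Z and the eigenvalue form agree for any choice of the vectors g_i. The sketch below assumes numpy and uses random positive definite stand-ins for Σ_1, Σ_2 and random vectors in place of the Gaussian samples g_i (the identity is algebraic, so the distribution of g_i does not matter here).

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 3, 5

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def mat_power(S, p):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** p) @ V.T

Sigma1, Sigma2 = random_spd(n), random_spd(n)
A = mat_power(Sigma1, 0.5) @ np.linalg.inv(Sigma2) @ mat_power(Sigma1, 0.5)
lam, V = np.linalg.eigh(A)
g = rng.standard_normal((k, n))  # stand-ins for the samples g_i

# Direct form: sum of log-density ratios of N(0, Sigma1) vs N(0, Sigma2).
_, logdet1 = np.linalg.slogdet(Sigma1)
_, logdet2 = np.linalg.slogdet(Sigma2)
inv1, inv2 = np.linalg.inv(Sigma1), np.linalg.inv(Sigma2)
Z_direct = sum(0.5 * gi @ (inv2 - inv1) @ gi - 0.5 * (logdet1 - logdet2)
               for gi in g)

# Eigenvalue form: h_{i,j} = v_j^T Sigma1^{-1/2} g_i.
h = (mat_power(Sigma1, -0.5) @ g.T).T @ V   # shape (k, n)
Z_spec = 0.5 * np.sum((lam - 1.0) * h**2 - np.log(lam))

assert np.isclose(Z_direct, Z_spec)
```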
C.4 The Upper Bound for Expectation
In Section C.4, we provide an upper bound on the expected value of 𝑍 , which is a useful result for the subsequent section.
Lemma C.5.
If all of the following requirements are met
•
Requirement 1. We define Z as in Definition 5.8.
•
Requirement 2. Let ε ∈ (0, 1) and k ∈ ℕ.
•
Requirement 3. Let A be defined as in Definition C.1.
•
Requirement 4. Let λ_1, ⋯, λ_n denote the eigenvalues of A.
•
Requirement 5. Let Δ be defined as in Definition 5.5.
•
Requirement 6. Let M, ℳ be defined as in Definition 5.4 and M ≤ Δ.
We have
𝔼[Z] ≤ ε/2.
Proof.
𝔼[Z] = (k/2) ∑_{j=1}^n (λ_j − 1 − log(λ_j))
≤ (k/2) ∑_{j=1}^n (λ_j − 2 + 1/λ_j)
= (k/2) ∑_{j=1}^n (λ_j − 1)(1 − 1/λ_j)
≤ (k/2) ‖A − I‖_F · ‖I − A^{-1}‖_F
≤ (k/2) Δ^2
≤ ε/2,
where the first step follows from linearity of expectation, 𝔼[h_{i,j}^2] = 1, and Lemma C.4; the second step is the result of λ_j > 0 and log(x) ≥ 1 − 1/x for x > 0; the third step follows from simple factorization; the fourth step follows from Cauchy–Schwarz; the fifth step follows from Lemma C.3 and Requirement 6 in the lemma statement; and the last step follows from Δ < ε/√k and ε < 1. ∎
C.5 Sub-Exponential
In Section C.5, we show that Z is sub-exponential. This is significant because it allows us to bound the probability of the event Z ≥ ε, which is crucial for ensuring differential privacy.
Lemma C.6.
If all of the following requirements are met
•
Requirement 1. We define Z as in Definition 5.8.
•
Requirement 2. Let ε ∈ (0, 1) and δ ∈ (0, 1).
•
Requirement 3. Let Δ be defined as in Definition 5.5 and Δ < 1.
•
Requirement 4. Let M, ℳ be defined as in Definition 5.4 and M ≤ Δ.
•
Requirement 5. k ∈ ℕ.
We have
Pr[Z ≥ ε] ≤ δ.
Proof.
First, we prove that Z is sub-exponential.
Proof of Sub-Exponentiality.
Let A be defined as in Definition C.1 and h_{i,j} be defined as in Definition 5.8.
Since h_{i,j}^2 ∼ χ_1^2, by Lemma 3.5 and Lemma 3.4 we can say Z is sub-exponential with
•
ν = √k ‖I − A‖_F,
•
α = 2 ‖I − A‖_F.
By Lemma C.3, we have
•
ν = √k ‖A − I‖_F ≤ √k Δ,
•
α = 2 ‖A − I‖_F ≤ 2Δ.
Proof of Upper Bound for 𝔼[Z].
Under Requirement 3 and Requirement 4, by using Lemma C.5, we have
𝔼[Z] ≤ ε/2. (8)
Proof of Upper Bound.
By using Lemma 3.3 (sub-exponential tail bound), we have
Pr[Z ≥ ε] ≤ Pr[Z − 𝔼[Z] ≥ ε/2]
≤ max{ exp(−(ε/2)^2 / (2ν^2)), exp(−(ε/2) / (2α)) }
≤ δ,
where the first step is the result of Eq. (8), the second step is the result of Lemma 3.3, and the last step follows from Requirement 3 in the lemma statement. ∎
C.6 Analysis of Gaussian Sampling
This section presents the main result of Section C. The following theorem statement can be viewed as a variation of Theorem 5.1 in [2].
Theorem C.7 (Formal version of Theorem 5.9, Analysis of the Gaussian Sampling Mechanism).
If all of the following requirements are met
•
Requirement 1. Let ε ∈ (0, 1) and δ ∈ (0, 1).
•
Requirement 2. k ∈ ℕ.
•
Requirement 3. Let Δ be defined as in Definition 5.5 and Δ < 1.
•
Requirement 4. Let M, ℳ be defined as in Definition 5.4 and M ≤ Δ.
•
Requirement 5. An input Σ = ℳ(𝒴).
•
Requirement 6. ρ = O( √((n^2 + log(1/γ))/k) + (n^2 + log(1/γ))/k ).
Then, Algorithm 2 satisfies the following:
•
Part 1. Algorithm 2 is (ε, δ)-DP (with respect to the original dataset 𝒴).
•
Part 2. Algorithm 2 outputs Σ̂ ∈ 𝕊_+^n such that, with probability at least 1 − γ,
‖Σ^{-1/2} Σ̂ Σ^{-1/2} − I_n‖_F ≤ ρ.
•
Part 3. (1 − ρ) Σ ⪯ Σ̂ ⪯ (1 + ρ) Σ.
Proof.
We denote 𝑍 as Definition 5.8 which is as the output of algorithm 2. The utility guarantee is immediately implied by Theorem 3.1. We then focus on the proof of privacy. By Lemma C.6, we have
Pr [ 𝑍
𝜖 ] ≤ 𝛿
(9)
And then by Theorem 3.2 and Eq. (9), Algorithm 2 is proved as ( 𝜖 , 𝛿 ) -differential private.
Proof of Part 3.
‖ Σ − 1 / 2 Σ ^ Σ − 1 / 2 − 𝐼 𝑛 ‖ ≤
‖ Σ − 1 / 2 Σ ^ Σ − 1 / 2 − 𝐼 𝑛 ‖ 𝐹
≤
𝜌
Thus,
( 1 − 𝜌 ) 𝐼 𝑛 ⪯ Σ − 1 / 2 Σ ^ Σ − 1 / 2 ⪯ ( 1 + 𝜌 ) 𝐼 𝑛
which is equivalent to
( 1 − 𝜌 ) Σ ⪯ Σ ^ ⪯ ( 1 + 𝜌 ) Σ
∎
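The utility side of the theorem can be illustrated with a small simulation. The sketch below is a simplified stand-in for Algorithm 2, assuming numpy and a random positive definite Σ: it draws k Gaussian samples with covariance Σ, averages their outer products, and checks that the relative Frobenius error ρ is small and that the Loewner sandwich of Part 3 follows from it.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 50_000

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def mat_power(S, p):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** p) @ V.T

Sigma = random_spd(n)
# Draw k samples g_i ~ N(0, Sigma) and average the outer products g_i g_i^T.
G = rng.multivariate_normal(np.zeros(n), Sigma, size=k)
Sigma_hat = G.T @ G / k

# Utility: rho = ||Sigma^{-1/2} Sigma_hat Sigma^{-1/2} - I_n||_F shrinks at
# roughly the sqrt((n^2 + log(1/gamma))/k) rate of Requirement 6.
S = mat_power(Sigma, -0.5)
rho = np.linalg.norm(S @ Sigma_hat @ S - np.eye(n), "fro")
assert rho < 0.1

# Part 3 follows: all eigenvalues of the normalized estimate lie in
# [1 - rho, 1 + rho], i.e. (1 - rho) Sigma <= Sigma_hat <= (1 + rho) Sigma.
w = np.linalg.eigvalsh(S @ Sigma_hat @ S)
assert np.all(w >= 1 - rho) and np.all(w <= 1 + rho)
```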
Appendix D MORE SENSITIVITY LEMMAS
In this section, we provide more lemmas related to sensitivity.
Lemma D.1 (Formal version of Lemma 5.10).
If X ∈ ℝ^{n×d} and X̃ ∈ ℝ^{n×d} are neighboring datasets (see Definition 1.5 and Definition 1.6), then (1 − 2αβ/η) X X^⊤ ⪯ X̃ X̃^⊤ ⪯ (1 + 2αβ/η) X X^⊤.
Proof.
Let i ∈ [d] be the index at which X_{∗,i} and X̃_{∗,i} differ (see Definition 1.6).
We have
X̃ X̃^⊤ = ∑_{j=1}^d X̃_{∗,j} X̃_{∗,j}^⊤
= ( ∑_{j ∈ [d] ∖ {i}} X̃_{∗,j} X̃_{∗,j}^⊤ ) + X̃_{∗,i} X̃_{∗,i}^⊤
= ( ∑_{j ∈ [d] ∖ {i}} X_{∗,j} X_{∗,j}^⊤ ) + X̃_{∗,i} X̃_{∗,i}^⊤
= X X^⊤ − X_{∗,i} X_{∗,i}^⊤ + X̃_{∗,i} X̃_{∗,i}^⊤,
where the first step is the result of matrix multiplication, the second step is from simple algebra, the third step follows from Definition 1.6, and the last step comes from simple algebra.
We know that
‖X_{∗,i} X_{∗,i}^⊤ − X̃_{∗,i} X̃_{∗,i}^⊤‖
= ‖X_{∗,i} X_{∗,i}^⊤ − X_{∗,i} X̃_{∗,i}^⊤ + X_{∗,i} X̃_{∗,i}^⊤ − X̃_{∗,i} X̃_{∗,i}^⊤‖
≤ ‖X_{∗,i} X_{∗,i}^⊤ − X_{∗,i} X̃_{∗,i}^⊤‖ + ‖X_{∗,i} X̃_{∗,i}^⊤ − X̃_{∗,i} X̃_{∗,i}^⊤‖
≤ ‖X_{∗,i}‖_2 · ‖X_{∗,i} − X̃_{∗,i}‖_2 + ‖X_{∗,i} − X̃_{∗,i}‖_2 · ‖X̃_{∗,i}‖_2
≤ 2αβ, (10)
where the first step follows from adding and subtracting the term X_{∗,i} X̃_{∗,i}^⊤, the second step follows from the triangle inequality, the third step follows from Fact A.1, and the last step is due to Definition 1.5 and Definition 1.6.
Thus, combining with Eq. (10), we have X̃ X̃^⊤ ⪰ X X^⊤ − 2αβ · I_n ⪰ (1 − 2αβ/η) X X^⊤, where the last step uses X X^⊤ ⪰ η · I_n. Similarly, we have X̃ X̃^⊤ ⪯ X X^⊤ + 2αβ · I_n ⪯ (1 + 2αβ/η) X X^⊤. ∎
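The two key steps of the proof of Lemma D.1, the rank-one update identity and the spectral-norm bound of Eq. (10), can be verified numerically. The sketch below assumes numpy and random matrices; the column index i and the perturbation size are arbitrary illustrative choices, and the 2αβ bound is checked in the scale-free form (‖X_{∗,i}‖ + ‖X̃_{∗,i}‖) · ‖X_{∗,i} − X̃_{∗,i}‖, which specializes to 2αβ under Definitions 1.5 and 1.6.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, i = 4, 6, 2

# Neighboring datasets: X and X~ differ only in column i (Definition 1.6).
X = rng.standard_normal((n, d))
X_t = X.copy()
X_t[:, i] += 0.05 * rng.standard_normal(n)

# Rank-one update identity from the proof:
# X~ X~^T = X X^T - X_{*,i} X_{*,i}^T + X~_{*,i} X~_{*,i}^T.
lhs = X_t @ X_t.T
rhs = X @ X.T - np.outer(X[:, i], X[:, i]) + np.outer(X_t[:, i], X_t[:, i])
assert np.allclose(lhs, rhs)

# Eq. (10) in scale-free form:
# ||X_i X_i^T - X~_i X~_i^T|| <= (||X_i|| + ||X~_i||) * ||X_i - X~_i||.
diff = np.outer(X[:, i], X[:, i]) - np.outer(X_t[:, i], X_t[:, i])
bound = ((np.linalg.norm(X[:, i]) + np.linalg.norm(X_t[:, i]))
         * np.linalg.norm(X[:, i] - X_t[:, i]))
assert np.linalg.norm(diff, 2) <= bound + 1e-9
```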
We now present an additional sensitivity lemma, which further extends the conclusion of Lemma 5.10 in Section 5.3. This lemma is used in the proof of our main result, Theorem 4.1, presented in Section 4.
Lemma D.2.
If the following conditions hold
•
Let α and η be defined as in Definition 1.5.
•
Let β be defined as in Definition 1.6.
•
X and X̃ are neighboring datasets such that
(1 − 2αβ/η) X X^⊤ ⪯ X̃ X̃^⊤ ⪯ (1 + 2αβ/η) X X^⊤.
Then, we have
•
Part 1. ‖(X X^⊤)^{-1/2} X̃ X̃^⊤ (X X^⊤)^{-1/2} − I‖ ≤ 2αβ/η.
•
Part 2. ‖(X X^⊤)^{-1/2} X̃ X̃^⊤ (X X^⊤)^{-1/2} − I‖_F ≤ 2√n · αβ/η.
Proof.
The proof is straightforward: Part 1 follows directly from the assumed spectral bounds, and Part 2 follows from ‖M‖_F ≤ √n · ‖M‖ for M ∈ ℝ^{n×n}; we omit the details here. ∎