Title: Revealing the True Cost of Locally Differentially Private Protocols: An Auditing Perspective
URL Source: https://arxiv.org/html/2309.01597
Markdown Content:
arXiv:2309.01597v3 [cs.CR] 12 Jul 2024. License: CC BY 4.0
Revealing the True Cost of Locally Differentially Private Protocols: An Auditing Perspective
Héber H. Arcolezi (0000-0001-8059-7094), Inria Centre at the University Grenoble Alpes, France. heber.hwang-arcolezi@inria.fr
Sébastien Gambs (0000-0002-7326-7377), Université du Québec à Montréal (UQAM), Canada. gambs.sebastien@uqam.ca
Abstract.
While the existing literature on Differential Privacy (DP) auditing predominantly focuses on the centralized model (e.g., in auditing the DP-SGD algorithm), we advocate for extending this approach to audit Local DP (LDP). To achieve this, we introduce the LDP-Auditor framework for empirically estimating the privacy loss of locally differentially private mechanisms. This approach leverages recent advances in designing privacy attacks against LDP frequency estimation protocols. More precisely, through the analysis of numerous state-of-the-art LDP protocols, we extensively explore the factors influencing the privacy audit, such as the impact of different encoding and perturbation functions. Additionally, we investigate the influence of the domain size and the theoretical privacy loss parameters 𝜖 and 𝛿 on local privacy estimation. In-depth case studies are also conducted to explore specific aspects of LDP auditing, including distinguishability attacks on LDP protocols for longitudinal studies and multidimensional data. Finally, we present a notable achievement of our LDP-Auditor framework, which is the discovery of a bug in a state-of-the-art LDP Python package. Overall, our LDP-Auditor framework as well as our study offer valuable insights into the sources of randomness and information loss in LDP protocols. These contributions collectively provide a realistic understanding of the local privacy loss, which can help practitioners in selecting the LDP mechanism and privacy parameters that best align with their specific requirements. We open-sourced LDP-Auditor in (Arcolezi, 2024).
Keywords: Local differential privacy, Privacy auditing, Privacy attacks.
Journal year: 2024; volume: 2024; number: 4; DOI: 10.56553/popets-2024-0110
1.Introduction
Differential Privacy (DP) (Dwork et al., 2006) is now widely recognized as the gold standard for providing formal guarantees on the privacy level achieved by an algorithm. One of its extensions, known as Local DP (LDP) (Kasiviswanathan et al., 2011; Duchi et al., 2013), aims at tackling the trust challenges associated with relying on a centralized server, such as those highlighted by various data breaches (McCandless et al., 2021) and instances of data misuse (Wong, 2019). In LDP, each user perturbs their own data locally before sharing it with a data aggregator or a central server. The fundamental idea behind LDP is to introduce carefully calibrated noise to the data to ensure individual privacy guarantees while allowing meaningful statistical analysis to be performed on the aggregated noisy data.
Formally, a randomized algorithm ℳ satisfies ( 𝜖 , 𝛿 )-local differential privacy (( 𝜖 , 𝛿 )-LDP), for 𝜖 ≥ 0 and 0 ≤ 𝛿 ≤ 1 , if for any pair of input values 𝑣 1 , 𝑣 2 ∈ Domain ( ℳ ) and all possible sets of outputs 𝑂 ⊆ Range ( ℳ ) , the following inequality holds:
(1) Pr[ℳ(𝑣1) ∈ 𝑂] ≤ 𝑒^𝜖 ⋅ Pr[ℳ(𝑣2) ∈ 𝑂] + 𝛿.
In particular, ( 𝜖 , 𝛿 )-LDP is also called approximate LDP, with the special case of 𝛿 =
0 being called pure 𝜖 -LDP. On the one hand, an (L)DP mechanism is accompanied by the mathematical proof in Equation (1) that establishes a theoretical upper bound for the privacy loss, represented by the privacy parameters 𝜖 and 𝛿 . In particular, lower values of 𝜖 indicate stronger privacy guarantees. On the other hand, the recent and emerging field of DP auditing (e.g., see (Jagielski et al., 2020; Nasr et al., 2021; Lu et al., 2022b; Tramer et al., 2022; Maddock et al., 2022; Steinke et al., 2023; Andrew et al., 2023; Pillutla et al., 2023; Nasr et al., 2023; Cebere et al., 2024; Kazmi et al., 2024)) aims at estimating an empirical lower bound for the privacy loss, denoted as 𝜖 𝑒 𝑚 𝑝 .
The role of DP auditing is crucial because it bridges the gap between theoretical guarantees and practical implementations, especially when theoretical bounds on privacy loss might be overly pessimistic or not sufficiently tight (e.g., as in Differentially Private Stochastic Gradient Descent – DP-SGD (Abadi et al., 2016)). In other words, DP auditing helps in understanding how well privacy-preserving mechanisms perform under different conditions and attack scenarios (Nasr et al., 2021). Furthermore, auditing can uncover potential vulnerabilities or flaws in the implementation that might not be apparent through theoretical analysis alone (Turati et al., 2023; Tramer et al., 2022; Ding et al., 2018). From a practical standpoint, the empirical estimation of the privacy loss through realistic attackers can also help practitioners make informed decisions and understand the implications of specific privacy parameter choices. These instances underscore the significance of empirically estimating and verifying the claimed privacy levels of (L)DP mechanisms.
1.1.Our Contributions
With these motivations in mind, in this paper, we introduce the LDP-Auditor framework, which is designed to audit LDP frequency estimation protocols and estimate their empirical privacy loss. Frequency (or histogram) estimation is a primary objective of LDP as it is a building block for more complex tasks. This means our audit results are applicable and relevant to numerous tasks under LDP guarantees, such as heavy hitter estimation (Bassily and Smith, 2015; Wang et al., 2021a), joint distribution estimation (Kikuchi, 2022; Costa Filho and Machado, 2023; Ren et al., 2018; Zhang et al., 2018), frequent item-set mining (Wang et al., 2018; Wu et al., 2023), machine learning (Mahawaga Arachchige et al., 2020; Yilmaz et al., 2020), frequency estimation of multidimensional data (Wang et al., 2019; Nguyên et al., 2016; Arcolezi et al., 2021a) and frequency monitoring (Arcolezi et al., 2023b, 2022a; Erlingsson et al., 2014; Ding et al., 2017; Vidal et al., 2020).
More precisely, LDP-Auditor relies on Monte Carlo methods to estimate the probabilities 𝑝̂0 = Pr[ℳ(𝑣1) ∈ 𝑂] and 𝑝̂1 = Pr[ℳ(𝑣2) ∈ 𝑂] from Equation (1) through attacks. From this, an empirical privacy loss is computed as 𝜖𝑒𝑚𝑝 = ln((𝑝̂0 − 𝛿)/𝑝̂1), thus providing an estimate of the algorithm's privacy leakage. A comprehensive discussion on the LDP-Auditor framework, including its detailed methodology and applications, is deferred to Section 4.
Unlike traditional DP-SGD auditing, in which the focus is on distinguishing neighboring datasets, LDP-Auditor assesses the distinguishability of inputs directly. To achieve this, we instantiate LDP-Auditor with distinguishability attacks based on recent adversarial analysis of LDP frequency estimation protocols (Emre Gursoy et al., 2022; Arcolezi et al., 2023a). These attacks allow an adversary to predict the user's input value based on the obfuscated output, enabling LDP-Auditor to directly evaluate the privacy guarantees offered by LDP mechanisms, making it well-suited for privacy auditing. In this context, expanding beyond (Emre Gursoy et al., 2022; Arcolezi et al., 2023a), we also introduce novel distinguishability attacks tailored to four additional LDP frequency estimation protocols based on histogram encoding (Wang et al., 2017), as well as general distinguishability attacks on LDP protocols for longitudinal studies (see Algorithm 2) and on LDP protocols for multidimensional data (see Algorithm 3).
As an example, Figure 1 illustrates an instance of our auditing results for a theoretical upper bound of 𝜖 =
2 (indicated by the dashed red line) across eight 𝜖 -LDP frequency estimation protocols: Generalized Randomized Response (GRR) (Kairouz et al., 2016), Subset Selection (SS) (Wang et al., 2016; Ye and Barg, 2018), Symmetric Unary Encoding (SUE) (Erlingsson et al., 2014), Optimal Unary Encoding (OUE) (Wang et al., 2017), Thresholding with Histogram Encoding (THE) (Wang et al., 2017), Summation with Histogram Encoding (SHE) (Dwork et al., 2006), Binary Local Hashing (BLH) (Bassily and Smith, 2015) and Optimal Local Hashing (OLH) (Wang et al., 2017). Among all these protocols, GRR demonstrated a tight empirical privacy loss estimation for 𝜖 𝑒 𝑚 𝑝 as it does not require a specific encoding. On the other hand, other LDP protocols presented 𝜖 𝑒 𝑚 𝑝 within ≤ 2 x of the theoretical 𝜖 (such as SUE, THE and SHE), and even within ≤ 4 x of the theoretical 𝜖 (like BLH). These results indicate that either the state-of-the-art attacks are still not representative of the worst-case scenario or that the upper bound analyses of these LDP protocols are not tight. The latter assumption might occur for LDP protocols that incorporate sources of randomness (e.g., due to hashing (Acharya et al., 2019; Bassily and Smith, 2015; Wang et al., 2017; Apple Differential Privacy Team, 2017)) not captured in the worst-case definition of LDP in Equation (1).
More specifically, we have investigated several factors influencing the audit, including the effect of the theoretical privacy loss parameters (𝜖 and 𝛿) in low, mid and high privacy regimes, as well as the impact of the domain size 𝑘 on local privacy estimation. Our investigation included detailed case studies to further explore specific facets of LDP auditing. Notably, our analysis assessed how variations in 𝛿 affect the empirical privacy loss 𝜖𝑒𝑚𝑝 for approximate LDP variants (Wang et al., 2021b) of the GRR, SUE, BLH and OLH protocols, alongside the Gaussian Mechanism (GM) (Dwork et al., 2014) and the Analytic GM (AGM) (Balle and Wang, 2018). Moreover, given that BLH exhibited the least tight empirical privacy loss estimation 𝜖𝑒𝑚𝑝, we investigated the privacy loss of local hashing without LDP obfuscation. In addition, we examined the degradation of the empirical local privacy loss in repeated data collections compared to the theoretical upper bound imposed by the (L)DP sequential composition (Dwork et al., 2014). In this context, within a generic framework, we proposed distinguishability attacks on LDP protocols in longitudinal studies (cf. Algorithm 2). Furthermore, we addressed the case of multidimensional data, proposing distinguishability attacks for LDP protocols following the RS+FD (Arcolezi et al., 2021a) solution (cf. Algorithm 3). We also show how LDP-Auditor successfully identified a bug in a state-of-the-art LDP Python package, in which the empirical privacy loss 𝜖𝑒𝑚𝑝 contradicts the theoretical upper bound 𝜖 (see Figure 8).
Figure 1. Comparison of estimated privacy loss 𝜖𝑒𝑚𝑝 with theoretical upper bound 𝜖 = 2 for eight pure LDP frequency estimation protocols. The dashed red line corresponds to the certifiable upper bound. While GRR closely aligns with the theoretical bound, others exhibit empirical 𝜖𝑒𝑚𝑝 within ≤ 2x (e.g., SUE) or even ≤ 4x (i.e., BLH) of the theoretical 𝜖 value.
Taking all these aspects into account, the coverage of our analysis is broadened, allowing for a more comprehensive assessment of the robustness of various LDP protocols in realistic data collection scenarios. More specifically, our main contributions in this paper can be summarized as follows:
•
We introduce the LDP-Auditor framework, which aims to estimate the empirical privacy loss of LDP frequency estimation protocols. This framework provides a realistic assessment of privacy guarantees, which is essential for making informed decisions about LDP parameter selection and for stimulating research on new privacy attacks.
•
We introduce novel distinguishability attacks specifically tailored to LDP protocols for longitudinal studies and multidimensional data. These new attacks enrich the privacy analysis techniques available for examining the robustness of LDP mechanisms in practical settings.
•
We conduct an extensive audit of various LDP protocols, analyzing the impact of factors such as privacy regimes, domain size and multiple data collections. This comprehensive analysis provides valuable insights into the resilience and effectiveness of nine state-of-the-art LDP mechanisms, fundamental building blocks for applications such as frequency monitoring (Arcolezi et al., 2023b; Erlingsson et al., 2014; Ding et al., 2017; Vidal et al., 2020), heavy hitter estimation (Bassily and Smith, 2015; Wang et al., 2021a) and machine learning (Mahawaga Arachchige et al., 2020; Yilmaz et al., 2020).
•
We demonstrate the bug detection capabilities of LDP-Auditor by identifying an issue in a state-of-the-art LDP Python package. This highlights the practical significance of our framework in validating LDP implementations.
2.Related Work
Differential privacy auditing, as introduced by Jagielski et al. (2020), involves employing various techniques to empirically assess the extent of privacy leakage in machine learning algorithms through estimating the 𝜖𝑒𝑚𝑝 privacy loss. These techniques are particularly valuable when known analytical bounds on the DP loss lack precision, allowing for empirical measurements of privacy in such cases. For instance, DP auditing has been extensively investigated in evaluating the mathematical analysis of the well-known DP-SGD algorithm proposed by Abadi et al. (2016). The research literature on DP-SGD auditing covers both centralized (Jagielski et al., 2020; Lu et al., 2022b; Steinke et al., 2023; Pillutla et al., 2023; Nasr et al., 2023, 2021; Cebere et al., 2024; Tramer et al., 2022) and federated (Maddock et al., 2022; Andrew et al., 2023) learning settings. Beyond privacy-preserving machine learning, privacy auditing has also been studied for standard DP algorithms (Ding et al., 2018; Bichsel et al., 2021; Gorla et al., 2022; Askin et al., 2022; Lu et al., 2022a). For instance, some of these works consider a fully black-box scenario (i.e., unknown DP mechanism) with the goal of estimating the 𝜖-(L)DP guarantee provided (Gorla et al., 2022; Askin et al., 2022; Lu et al., 2022a). Another line of research (Ding et al., 2018; Bichsel et al., 2021) has been tailored to identify errors in algorithm analysis or code implementations, especially when derived lower bounds contradict theoretical upper bounds. While the works in (Ding et al., 2018; Bichsel et al., 2021) could also be used to certify the 𝜖-LDP guarantee through Monte Carlo estimations, our work considers realistic privacy attacks on LDP mechanisms to empirically estimate the privacy loss 𝜖𝑒𝑚𝑝.
In other words, they would be able to answer “is the claimed 𝜖 -LDP correct in this code implementation?”, whereas we alternatively answer “is the claimed 𝜖 -LDP worst-case guarantee tight under state-of-the-art attacks?”.
This distinction highlights our emphasis on assessing the tightness of privacy guarantees under stringent adversarial conditions. Consequently, we envision our auditing analysis as a stimulus for advancing the current state-of-the-art in privacy attacks on LDP protocols and for achieving tight empirical estimates of 𝜖𝑒𝑚𝑝. In this context, the existing literature on privacy attacks on LDP comprises several categories: (1) Distinguishability attacks (Emre Gursoy et al., 2022; Arcolezi et al., 2023a; Chatzikokolakis et al., 2023) (adopted in this work), which enable adversaries to predict the users' input based on the obfuscated outputs; (2) Pool inference attacks (Gadotti et al., 2022), allowing adversaries to deduce a user's preferences or attributes from the aggregated data, such as inferring a user's preferred skin tone used in emojis; (3) Re-identification attacks (Murakami and Takahashi, 2021; Arcolezi et al., 2023a), aiming to uniquely identify a specific user within a larger population; and (4) Attacks on iterative data collections (Arcolezi et al., 2023b; Gürsoy, 2024), which allow adversaries to detect a pattern change in longitudinal studies, such as when someone starts a diet by monitoring calorie consumption.
3.LDP Frequency Estimation Protocols
In this section, we review the necessary notation (cf. Table 1 in Appendix A) and background information on the LDP frequency estimation protocols. Throughout the paper, let [𝑛] = {1, 2, …, 𝑛} denote a set of integers and 𝑉 = {𝑣1, …, 𝑣𝑘} represent a sensitive attribute with a discrete domain of size 𝑘 = |𝑉|. We consider a distributed setting with 𝑛 users and one untrusted server collecting the data reported by these users. The fundamental premise of (𝜖,𝛿)-LDP, as stated in Equation (1), is that the input to ℳ cannot be confidently determined from its output, with the level of confidence determined by 𝑒^𝜖 and 𝛿. Therefore, the user's privacy is considered compromised if the adversary can correctly predict the user's value.
In recent works (Emre Gursoy et al., 2022; Arcolezi et al., 2023a), the authors introduced distinguishability attacks 𝒜 against state-of-the-art LDP frequency estimation protocols. These attacks enable an adversary to predict the user's value 𝑣̂ = 𝒜(𝑦), in which 𝑦 = ℳ(𝑣) represents the reported value obtained through the 𝜖-LDP protocol. In essence, although each LDP protocol employs different encoding and perturbation functions, the adversary's objective remains the same, namely to predict the user's true value by identifying the most likely value that would have resulted in the reported value 𝑦. The notion of distinguishability attacks provides a unified approach to evaluate the privacy guarantees offered by different LDP protocols.
We now provide a brief overview of state-of-the-art pure and approximate LDP frequency estimation protocols ℳ , along with their respective distinguishability attacks denoted as 𝒜 ℳ . The attack 𝒜 ℳ generally relies on a “support set” (Wang et al., 2017), denoted as 𝟙 ℳ , which is built upon the reported value 𝑦 . The combination of these protocols and attack strategies will enable us to comprehensively audit the empirical privacy level provided by various LDP mechanisms.
3.1.Pure 𝜖 -LDP Protocols
Generalized Randomized Response (GRR). The GRR (Kairouz et al., 2016) mechanism generalizes the randomized response surveying technique proposed by Warner (1965) for 𝑘 ≥ 2 while satisfying 𝜖 -LDP. Given a value 𝑣 ∈ 𝑉 , GRR ( 𝑣 ) outputs the true value 𝑣 with probability 𝑝 , and any other value 𝑣 ′ ∈ 𝑉 ∖ { 𝑣 } , otherwise. More formally:
(2) Pr[GRR(𝑣) = 𝑦] = 𝑝 = 𝑒^𝜖 / (𝑒^𝜖 + 𝑘 − 1) if 𝑦 = 𝑣, and 𝑞 = 1 / (𝑒^𝜖 + 𝑘 − 1) if 𝑦 ≠ 𝑣,
in which 𝑦 ∈ 𝑉 is the perturbed value sent to the server. The support set for GRR is simply 𝟙GRR = {𝑦}. From Equation (2), Pr[𝑦 = 𝑣] > Pr[𝑦 = 𝑣′] for all 𝑣′ ∈ 𝑉 ∖ {𝑣}. Therefore, the attack strategy 𝒜GRR is to predict 𝑣̂ = 𝑦 (Emre Gursoy et al., 2022; Arcolezi et al., 2023a).
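To make the mechanics concrete, GRR and its attack 𝒜GRR can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; representing the domain as {0, …, 𝑘−1} is an assumption of the sketch:

```python
import math
import random

def grr_perturb(v, k, eps, rng=random):
    """GRR: keep the true value v with probability p = e^eps / (e^eps + k - 1);
    otherwise report a uniformly random other value from the domain {0, ..., k-1}."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return v
    return rng.choice([u for u in range(k) if u != v])

def attack_grr(y):
    """A_GRR: since Pr[y = v] > Pr[y = v'] for any v' != v, predict the reported value itself."""
    return y
```

Over many reports, the attack's accuracy concentrates around 𝑝, which is exactly the true positive rate that LDP-Auditor later measures.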
Subset Selection (SS). The SS (Wang et al., 2016; Ye and Barg, 2018) mechanism was proposed for the case in which the obfuscation output is a subset of values Ω of the original domain 𝑉. The optimal subset size that minimizes the variance is 𝜔 = |Ω| = max(1, ⌊𝑘/(𝑒^𝜖 + 1)⌉). Given an empty subset Ω, the true value 𝑣 is added to Ω with probability 𝑝 = 𝜔𝑒^𝜖 / (𝜔𝑒^𝜖 + 𝑘 − 𝜔). Finally, values are added to Ω as follows:
•
If 𝑣 ∈ Ω , then 𝜔 − 1 values are sampled from 𝑉 ∖ { 𝑣 } uniformly at random (without replacement) and are added to Ω ;
•
If 𝑣 ∉ Ω , then 𝜔 values are sampled from 𝑉 ∖ { 𝑣 } uniformly at random (without replacement) and are added to Ω .
Afterward, the user sends the subset Ω to the server. The support set for SS is the subset of all values in Ω, i.e., 𝟙SS = {𝑣 | 𝑣 ∈ Ω}. Therefore, the attack strategy 𝒜SS is to predict 𝑣̂ = Uniform(𝟙SS) (Emre Gursoy et al., 2022; Arcolezi et al., 2023a).
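The SS pipeline can be sketched as follows (a minimal illustration; the rounding ⌊·⌉ is approximated with Python's round, and the domain {0, …, 𝑘−1} is an assumption of the sketch):

```python
import math
import random

def ss_perturb(v, k, eps, rng=random):
    """Subset Selection: report a random subset of size omega = max(1, round(k/(e^eps+1)))
    that contains the true value v with probability p = omega*e^eps / (omega*e^eps + k - omega)."""
    omega = max(1, round(k / (math.exp(eps) + 1)))
    p = omega * math.exp(eps) / (omega * math.exp(eps) + k - omega)
    others = [u for u in range(k) if u != v]
    if rng.random() < p:
        # v is in the report: fill the remaining omega - 1 slots from the other values
        return {v} | set(rng.sample(others, omega - 1))
    # v is excluded: the whole subset is drawn from the other values
    return set(rng.sample(others, omega))

def attack_ss(subset, rng=random):
    """A_SS: the support set is the reported subset itself; guess uniformly within it."""
    return rng.choice(sorted(subset))
```

The attack succeeds roughly with probability 𝑝/𝜔: the true value must be in the reported subset, and then the uniform guess must land on it.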
Unary Encoding (UE). UE protocols (Erlingsson et al., 2014; Wang et al., 2017) encode the user's input data 𝑣 ∈ 𝑉 as a one-hot 𝑘-dimensional vector before obfuscating each bit independently. More precisely, let v = [0, …, 0, 1, 0, …, 0] be a binary vector with only the bit at position 𝑣 set to 1 while the other bits are set to 0. The obfuscation function of UE mechanisms randomizes the bits from v independently to generate y as follows:
(3) ∀𝑖 ∈ [𝑘]: Pr[y𝑖 = 1] = 𝑝 if v𝑖 = 1, and 𝑞 if v𝑖 = 0,
in which y is sent to the server. There are two variations of UE mechanisms: (i) Symmetric UE (SUE) (Erlingsson et al., 2014), which selects 𝑝 = 𝑒^(𝜖/2)/(𝑒^(𝜖/2) + 1) and 𝑞 = 1/(𝑒^(𝜖/2) + 1) in Equation (3), such that 𝑝 + 𝑞 = 1; and (ii) Optimal UE (OUE) (Wang et al., 2017), which selects 𝑝 = 1/2 and 𝑞 = 1/(𝑒^𝜖 + 1) in Equation (3). With y, the adversary can construct the subset of all values 𝑣 ∈ 𝑉 that are set to 1, i.e., 𝟙UE = {𝑣 | y𝑣 = 1}. There are two possible attack strategies 𝒜UE (Emre Gursoy et al., 2022; Arcolezi et al., 2023a):
•
𝒜UE0 is a random choice 𝑣̂ = Uniform([𝑘]), if 𝟙UE = ∅;
•
𝒜UE1 is a random choice 𝑣̂ = Uniform(𝟙UE), otherwise.
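The UE perturbation and its support-set attack can be sketched as follows (a minimal illustration assuming domain indices 0 … 𝑘−1; `optimal=True` selects OUE parameters, `False` selects SUE):

```python
import math
import random

def ue_perturb(v, k, eps, optimal=True, rng=random):
    """Unary Encoding: one-hot encode v, then flip each bit independently.
    optimal=True gives OUE (p = 1/2, q = 1/(e^eps + 1)); False gives SUE (p + q = 1)."""
    if optimal:
        p, q = 0.5, 1.0 / (math.exp(eps) + 1)
    else:
        p = math.exp(eps / 2) / (math.exp(eps / 2) + 1)
        q = 1.0 - p
    return [1 if rng.random() < (p if i == v else q) else 0 for i in range(k)]

def attack_ue(y, rng=random):
    """A_UE: guess uniformly among the bits set to 1 (the support set);
    fall back to a uniform guess over [k] when the support set is empty."""
    support = [i for i, bit in enumerate(y) if bit == 1]
    return rng.choice(support) if support else rng.randrange(len(y))
```

Because several bits are typically set to 1, the guess is diluted across the support set, which is why UE audits less tightly than GRR.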
Local Hashing (LH). LH protocols (Wang et al., 2017; Bassily and Smith, 2015) use hash functions to map the input data 𝑣 ∈ 𝑉 to a new domain of size 𝑔 ≥ 2, and then apply GRR to the hashed value. Let ℋ be a universal hash function family such that each hash function H ∈ ℋ hashes a value 𝑣 ∈ 𝑉 into [𝑔] (i.e., H : 𝑉 → [𝑔]). There are two variations of LH mechanisms: (i) Binary LH (BLH) (Bassily and Smith, 2015), which simply sets 𝑔 = 2, and (ii) Optimal LH (OLH) (Wang et al., 2017), which selects 𝑔 = ⌊𝑒^𝜖 + 1⌉. Each user first selects a hash function H ∈ ℋ at random and obfuscates the hash value ℎ = H(𝑣) with GRR. In particular, the LH reporting mechanism is LH(𝑣) ≔ ⟨H, GRR(ℎ)⟩, in which GRR(ℎ) is given in Equation (2) while operating on the new domain [𝑔]. Each user reports the hash function and obfuscated value ⟨H, 𝑦⟩ to the server. With these elements, the adversary can construct the subset of all values 𝑣 ∈ 𝑉 that hash to 𝑦, i.e., 𝟙LH = {𝑣 | H(𝑣) = 𝑦}. There are two possible attack strategies 𝒜LH (Emre Gursoy et al., 2022; Arcolezi et al., 2023a):
•
𝒜LH0 is a random choice 𝑣̂ = Uniform([𝑘]), if 𝟙LH = ∅;
•
𝒜LH1 is a random choice 𝑣̂ = Uniform(𝟙LH), otherwise.
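The LH pipeline can be sketched as follows. As a stand-in for the universal family ℋ (an assumption of this sketch), we use the classic affine construction H(𝑣) = ((𝑎𝑣 + 𝑏) mod P) mod 𝑔 over a large prime P, with (𝑎, 𝑏) drawn per report:

```python
import math
import random

PRIME = 2_147_483_647  # prime far larger than the domain; ((a*v + b) % PRIME) % g is a universal family

def lh_perturb(v, k, eps, binary=True, rng=random):
    """Local Hashing: draw a hash function H: [k] -> [g], then apply GRR to h = H(v).
    binary=True gives BLH (g = 2); False gives OLH (g = round(e^eps + 1))."""
    g = 2 if binary else max(2, round(math.exp(eps) + 1))
    a, b = rng.randrange(1, PRIME), rng.randrange(PRIME)
    h = ((a * v + b) % PRIME) % g
    p = math.exp(eps) / (math.exp(eps) + g - 1)
    y = h if rng.random() < p else rng.choice([u for u in range(g) if u != h])
    return (a, b, g), y

def attack_lh(hash_params, y, k, rng=random):
    """A_LH: the support set contains every v in [k] hashing to the reported y;
    guess uniformly within it, or over [k] if it is empty."""
    a, b, g = hash_params
    support = [u for u in range(k) if ((a * u + b) % PRIME) % g == y]
    return rng.choice(support) if support else rng.randrange(k)
```

For BLH, roughly half the domain hashes to the reported bucket, so the guess is heavily diluted: this extra hashing randomness is one plausible reason the BLH audit in Figure 1 is far from tight.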
Histogram Encoding (HE). HE protocols (Wang et al., 2017) encode the user value as a one-hot 𝑘-dimensional histogram, v = [0.0, 0.0, …, 1.0, 0.0, …, 0.0], in which only the 𝑣-th component is 1.0. To satisfy 𝜖-LDP, HE(v) perturbs each bit of v independently using the Laplace mechanism (Dwork et al., 2006). Two different input values 𝑣1, 𝑣2 ∈ 𝑉 will result in two vectors with an L1 distance of Δ1 = 2. Thus, HE will output y such that y𝑖 = v𝑖 + Lap(Δ1/𝜖). In this paper, we propose distinguishability attacks on two pure 𝜖-LDP HE protocols:
•
Summation with HE (SHE) (Dwork et al., 2006). With SHE, there is no post-processing of y. Instead of constructing a support set, we describe our attack strategy against SHE as follows. Let 𝑃𝑉(𝑣) be the prior probability of input value 𝑣, and let 𝑃𝑌(y|𝑣) be the likelihood of observing y given the true input value 𝑣. By Bayes' theorem, the posterior probability of input value 𝑣 given the observed y is:
(4) 𝑃𝑉(𝑣|y) = 𝑃𝑌(y|𝑣) 𝑃𝑉(𝑣) / ∑_{𝑖=1}^{𝑘} 𝑃𝑌(y|𝑖) 𝑃𝑉(𝑖).
We can compute the likelihood 𝑃𝑌(y|𝑣) as follows. For a given 𝑣, the corresponding one-hot encoded histogram is v. The reported value y is the sum of v and noise from a Laplace distribution with scale 𝑏 = 2/𝜖. Therefore, the likelihood of observing y given v is:
(5) 𝑃𝑌(y|v) = (1/(2𝑏)^𝑘) exp(−|y − v|₁ / 𝑏),
in which |y − v|₁ is the L1 distance between y and v. To perform the attack, we compute the posterior probability 𝑃𝑉(𝑣|y) for each possible input value 𝑣 ∈ 𝑉 and output the most probable input value. In other words, given the reported y, our Bayes optimal attack 𝒜SHE outputs:
(6) 𝑣̂ = arg max_{𝑣 ∈ 𝑉} 𝑃𝑉(𝑣|y).
Note that this attack requires knowledge of the prior probability distribution 𝑃 𝑉 ( 𝑣 ) . If the prior is unknown (assumed in this paper), one can use a uniform prior.
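Under a uniform prior, maximizing the posterior in Equation (4) with the Laplace likelihood of Equation (5) reduces to minimizing the L1 distance |y − e_𝑣|₁. A minimal sketch (Laplace noise is sampled as a signed exponential to stay dependency-free; the domain {0, …, 𝑘−1} is an assumption):

```python
import math
import random

def laplace(b, rng=random):
    """Sample Laplace(0, b) as a random sign times an Exponential with mean b."""
    mag = rng.expovariate(1.0 / b)
    return mag if rng.random() < 0.5 else -mag

def she_perturb(v, k, eps, rng=random):
    """SHE: one-hot encode v and add Laplace(2/eps) noise to every component."""
    b = 2.0 / eps
    return [(1.0 if i == v else 0.0) + laplace(b, rng) for i in range(k)]

def attack_she(y):
    """Bayes-optimal attack under a uniform prior: the posterior in Equation (4) is
    maximized by the candidate v minimizing the L1 distance |y - e_v|_1 of Equation (5)."""
    k = len(y)
    def neg_l1(v):
        return -sum(abs(y[i] - (1.0 if i == v else 0.0)) for i in range(k))
    return max(range(k), key=neg_l1)
```

Note that with a uniform prior the noise scale 𝑏 cancels out of the argmax, so the attack only needs the reported vector.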
•
Thresholding with HE (THE) (Wang et al., 2017). With THE, the server (or the user) can construct the support set as 𝟙THE = {𝑣 | y𝑣 > 𝜃}, i.e., the set of values whose noisy count exceeds 𝜃. The optimal threshold value for 𝜃 that minimizes the protocol's variance lies within (0.5, 1). With 𝟙THE = {𝑣 | y𝑣 > 𝜃}, we propose an adversary 𝒜THE with two attack strategies:
–
𝒜THE0 is a random choice 𝑣̂ = Uniform([𝑘]), if 𝟙THE = ∅;
–
𝒜THE1 is a random choice 𝑣̂ = Uniform(𝟙THE), otherwise.
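THE uses the same Laplace obfuscation as SHE but attacks via thresholding. A minimal sketch; 𝜃 = 0.67 below is an illustrative value inside (0.5, 1), not the variance-optimal threshold, which depends on 𝜖:

```python
import random

def the_perturb(v, k, eps, rng=random):
    """THE uses the same obfuscation as SHE: one-hot encoding plus Laplace(2/eps) noise,
    sampled here as a signed exponential with mean 2/eps."""
    b = 2.0 / eps
    def lap():
        mag = rng.expovariate(1.0 / b)
        return mag if rng.random() < 0.5 else -mag
    return [(1.0 if i == v else 0.0) + lap() for i in range(k)]

def attack_the(y, theta=0.67, rng=random):
    """A_THE: the support set keeps components whose noisy count exceeds theta;
    guess uniformly within it, or over the whole domain if it is empty."""
    support = [i for i, yi in enumerate(y) if yi > theta]
    return rng.choice(support) if support else rng.randrange(len(y))
```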
3.2.Approximate ( 𝜖 , 𝛿 )-LDP Protocols
In this section, we describe two ( 𝜖 , 𝛿 )-LDP protocols, which are based on the Gaussian mechanism (Dwork et al., 2014; Balle and Wang, 2018). We defer the descriptions of approximate ( 𝜖 , 𝛿 )-LDP variants (Wang et al., 2021b) of GRR, SUE and LH protocols – namely, Approximate GRR (AGRR), Approximate SUE (ASUE), Approximate LH (ALH) – to Appendix B.
HE with Gaussian Mechanism (HE-GM) (Dwork et al., 2014; Balle and Wang, 2018). Similar to the HE protocols of Section 3.1, HE-GM protocols encode the user value as a one-hot 𝑘-dimensional histogram. Then, HE-GM(v) perturbs each bit of v independently using a Gaussian mechanism (GM) (Dwork et al., 2014; Balle and Wang, 2018). Two different input values 𝑣1, 𝑣2 ∈ 𝑉 will result in two vectors with an L2 distance of Δ2 = √2. Thus, HE-GM will output y such that y𝑖 = v𝑖 + 𝒩(0, 𝜎²), in which 𝜎 is determined by 𝜖, 𝛿 and Δ2. When using the well-established GM for 𝜖, 𝛿 ∈ (0, 1), 𝜎 = (Δ2/𝜖)√(2 ln(1.25/𝛿)) (Dwork et al., 2014). In this paper, we also consider the Analytic GM (AGM) (Balle and Wang, 2018), which is an improved version of the GM (Dwork et al., 2014) and can be applied for any 𝜖 > 0. The main difference between GM and AGM is the method used to parameterize 𝜎. With AGM, 𝜎 is calculated analytically as demonstrated in (Balle and Wang, 2018, Algorithm 1) and its implementation (Balle, 2018). Hereafter, we will specifically denote “AGM” and “GM” when referring to HE-GM instantiated with AGM and GM, respectively.
Building upon our description of the distinguishability attack against the SHE protocol with Laplace noise, we extend the attack analysis to HE-GM protocols. The overall strategy, including the use of Bayes' theorem to compute posterior probabilities, remains consistent with our prior description in Section 3.1 (cf. Equation (4)). However, the key difference lies in the noise distribution used for ensuring LDP.
While the Laplace mechanism involves adding noise drawn from a Laplace distribution with scale 𝑏 = 2/𝜖, the Gaussian mechanism adds noise following the normal distribution 𝒩(0, 𝜎²). This requires a different computation for the likelihood 𝑃𝑌(y|𝑣) in Equation (5), reflecting the properties of Gaussian noise. Accordingly, the likelihood of observing y given v under Gaussian noise is:
(7) 𝑃𝑌(y|v) = (1/(2𝜋𝜎²)^(𝑘/2)) exp(−|y − v|₂² / (2𝜎²)),
in which |y − v|₂² denotes the squared L2 distance between y and v.
Then, our Bayes optimal attack for HE-GM protocols 𝒜 HE-GM predicts the most probable input value, 𝑣 ^ , given the reported 𝐲 , by following Equation (6). Remark that Equation (7) is valid for both GM and AGM as a function of their respective noise scale 𝜎 . Similar to the 𝒜 𝑆 𝐻 𝐸 attack, if the prior probability distribution 𝑃 𝑉 ( 𝑣 ) is unknown, a uniform prior may be assumed for the analysis.
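For one-hot inputs, minimizing the squared L2 distance |y − e_𝑣|₂² = |y|₂² − 2y𝑣 + 1 just means picking the largest component of y. A minimal sketch of the classic GM instantiation (assuming Δ2 = √2 and 𝜖, 𝛿 ∈ (0, 1); AGM would only change how 𝜎 is computed):

```python
import math
import random

def he_gm_perturb(v, k, eps, delta, rng=random):
    """HE-GM with the classic Gaussian mechanism (valid for eps, delta in (0, 1)):
    sigma = (Delta_2/eps) * sqrt(2*ln(1.25/delta)), with Delta_2 = sqrt(2) for one-hot inputs."""
    sigma = (math.sqrt(2.0) / eps) * math.sqrt(2.0 * math.log(1.25 / delta))
    return [(1.0 if i == v else 0.0) + rng.gauss(0.0, sigma) for i in range(k)], sigma

def attack_he_gm(y):
    """Bayes-optimal attack under a uniform prior: maximizing the Gaussian likelihood (7)
    means minimizing |y - e_v|_2^2, i.e., picking the index of the largest component."""
    return max(range(len(y)), key=lambda v: y[v])
```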
4.LDP Auditing
In this section, we introduce our LDP-Auditor framework (Section 4.1) and our distinguishability attacks considering multiple data collections (Section 4.2 and Section 4.3).
4.1.LDP-Auditor
Our LDP-Auditor framework builds upon previous work on central DP auditing (Jagielski et al., 2020) with slight modifications tailored for LDP auditing. This adaptation is necessary due to the intrinsic differences between the central DP and LDP models, primarily regarding the granularity of privacy and the nature of the data being protected. Figure 9 in Appendix C compares the adversarial privacy game between central and local DP. Unlike central DP, in which the adversary's objective is to distinguish between two “neighboring datasets”, the LDP model shifts the focus towards distinguishing between individual “inputs”. We instantiate LDP-Auditor with distinguishability attacks to construct a robust test statistic for auditing LDP mechanisms. Specifically, we can formulate a distinguishability attack as a binary hypothesis testing problem: ℋ: “𝑦 comes from 𝑣1”. The attacker receives an output drawn from one of the two distributions ℳ(𝑣1) or ℳ(𝑣2) and has to infer whether the input was 𝑣1 or not. If the algorithm ℳ is (𝜖,𝛿)-LDP, then no distinguishability attack can be too accurate (Kairouz et al., 2015). Specifically, for any distinguishability attack 𝒜, we can statistically measure LDP by rewriting Equation (1) as:
(8) Pr[𝒜(ℳ(𝑣1)) = 𝑣1] (True Positive Rate, TPR) ≤ 𝑒^𝜖 ⋅ Pr[𝒜(ℳ(𝑣2)) = 𝑣1] (False Positive Rate, FPR) + 𝛿.
If ℳ satisfies (𝜖,𝛿)-LDP, then 𝜖 ≥ ln((TPR − 𝛿)/FPR). In this formulation, the TPR is the probability that the attack correctly identifies 𝑦 as coming from 𝑣1, and the FPR is the likelihood that 𝑦 is incorrectly attributed to 𝑣1 when it comes from 𝑣2. Note that the 𝛿 term in Equation (8) reflects the privacy budget from ℳ and is not an independent probability or error rate introduced by the distinguishability attack 𝒜. However, a single run of a distinguishability attack is typically not sufficient to draw meaningful conclusions due to the inherent variability in the mechanism's outputs. Thus, to ensure the robustness of our empirical privacy loss estimation and account for statistical uncertainty, LDP-Auditor runs for multiple trials 𝑇 in order to compute the TPR and FPR from Equation (8). Then, to affirm that our empirical privacy loss estimation is valid with a probability greater than 1 − 𝛼, we use Clopper-Pearson confidence intervals (Clopper and Pearson, 1934) to establish a lower bound 𝑝̂0 for the TPR and an upper bound 𝑝̂1 for the FPR, each with a confidence of 1 − 𝛼/2. As a consequence, we can be confident that our empirical privacy loss estimation 𝜖𝑒𝑚𝑝 = ln((𝑝̂0 − 𝛿)/𝑝̂1) holds with probability 1 − 𝛼. This procedure is outlined in Algorithm 1, and we prove its correctness in Theorem 1. The proof of Theorem 1 is deferred to Appendix E.
Algorithm 1 LDP-Auditor.
1: Input: Theoretical 𝜖 and 𝛿, LDP protocol ℳ, distinguishability attack 𝒜, values 𝑣1, 𝑣2 ∈ 𝑉, trial count 𝑇, confidence level 𝛼.
2: Output: Estimated privacy loss 𝜖𝑒𝑚𝑝.
3: TP = 0, FP = 0 ▷ True Positive (TP) and False Positive (FP)
4: for 𝑖 ∈ [𝑇] do
5:   if 𝒜(ℳ(𝑣1)) = 𝑣1 then TP = TP + 1
6:   if 𝒜(ℳ(𝑣2)) = 𝑣1 then FP = FP + 1
7: end for
8: 𝑝̂0 = ClopperPearsonLower(TP, 𝑇, 𝛼/2)
9: 𝑝̂1 = ClopperPearsonUpper(FP, 𝑇, 𝛼/2)
10: return 𝜖𝑒𝑚𝑝 = ln((𝑝̂0 − 𝛿)/𝑝̂1)
Theorem 1 (Correctness of LDP-Auditor).
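The whole auditing loop can be sketched end to end in Python. The paper uses statsmodels' proportion_confint for the Clopper-Pearson bounds; the stdlib-only stand-in below computes them by bisection on the exact binomial tail, and the audited mechanism/attack pair is GRR with an identity attack (all names and parameter choices here are illustrative):

```python
import math
import random

def binom_tail_ge(x, n, p):
    """Pr[Bin(n, p) >= x], summed stably in log space."""
    if x <= 0:
        return 1.0
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    log_c = math.lgamma(n + 1)
    total = 0.0
    for j in range(x, n + 1):
        log_term = (log_c - math.lgamma(j + 1) - math.lgamma(n - j + 1)
                    + j * math.log(p) + (n - j) * math.log(1.0 - p))
        total += math.exp(log_term)
    return min(total, 1.0)

def clopper_pearson(x, n, alpha2, lower=True):
    """Exact one-sided Clopper-Pearson bound for a binomial proportion, via bisection."""
    if lower and x == 0:
        return 0.0
    if not lower and x == n:
        return 1.0
    def excess(p):  # increasing in p for the lower bound, decreasing for the upper bound
        return (binom_tail_ge(x, n, p) if lower
                else 1.0 - binom_tail_ge(x + 1, n, p)) - alpha2
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2.0
        if (excess(mid) < 0.0) == lower:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def ldp_auditor(mechanism, attack, v1, v2, T, alpha, delta=0.0):
    """Algorithm 1: Monte Carlo lower bound eps_emp on the privacy loss of `mechanism`."""
    tp = sum(attack(mechanism(v1)) == v1 for _ in range(T))
    fp = sum(attack(mechanism(v2)) == v1 for _ in range(T))
    p0 = clopper_pearson(tp, T, alpha / 2.0, lower=True)   # lower-bound the TPR
    p1 = clopper_pearson(fp, T, alpha / 2.0, lower=False)  # upper-bound the FPR
    return math.log((p0 - delta) / p1)

# Example audit target: GRR with domain size K = 4 and a theoretical eps = 1.0.
K, EPS = 4, 1.0
def grr(v, rng=random):
    p = math.exp(EPS) / (math.exp(EPS) + K - 1)
    return v if rng.random() < p else rng.choice([u for u in range(K) if u != v])
```

For GRR the returned 𝜖𝑒𝑚𝑝 typically lands just below the theoretical 𝜖, illustrating the tight audit reported in Figure 1 (up to the Monte Carlo slack discussed next).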
Given black-box access to an LDP mechanism ℳ , and a distinguishability attack 𝒜 , for any two distinct values 𝑣 1 , 𝑣 2 , a number of trials 𝑇 , and a statistical confidence 𝛼 , if LDP-Auditor in Algorithm 1 returns 𝜖 𝑒 𝑚 𝑝 , then, with probability 1 − 𝛼 , ℳ does not satisfy ( 𝜖 ′ , 𝛿 ) -LDP for any 𝜖 ′ < 𝜖 𝑒 𝑚 𝑝 .
Accounting for statistical uncertainty. We highlight that when we refer to 𝜖 𝑒 𝑚 𝑝 as an empirical lower bound with probability 1 − 𝛼 , this naming is solely due to the inherent randomness of the Monte Carlo sampling process, without the need for any specific modeling or assumptions. By increasing the number of trials 𝑇 , we can progressively enhance our confidence level towards 1. Furthermore, the decision to employ the Clopper-Pearson method stems from its relevance when an exact confidence interval is desired, in contrast to approximate methods (e.g., heuristic approaches). This approach enables a more reliable safeguard against underestimating privacy risks, and has been widely used in central DP audit research (Jagielski et al., 2020; Nasr et al., 2021; Tramer et al., 2022; Bichsel et al., 2021). In this work, we utilize the Clopper-Pearson implementation provided by the proportion_confint method in the Python package statsmodels (https://pypi.org/project/statsmodels/).
Choice of parameters. Given that LDP frequency estimation protocols usually distribute noise uniformly at random, the estimation of the empirical privacy loss 𝜖_emp is not contingent upon selecting values 𝑣_1 and 𝑣_2 to represent a “worst-case scenario”, unlike in central DP audit. Considering 𝑉 = {1, 2, …, 𝑘}, in this work, we set 𝑣_1 = 1 and 𝑣_2 = 2. The performed tests revealed no statistical difference when experiments were conducted with 𝑣_1 = 1 and a dynamic 𝑣_2 ∼ Uniform(2, 𝑘). Considering the experimental setup parameters, the number of trials 𝑇 and the confidence level 1 − 𝛼 should be chosen to balance computational efficiency with the robustness of the empirical privacy loss estimation. Typically, a larger 𝑇 enhances the reliability of 𝜖_emp estimates, while a smaller 𝛼 increases the confidence in these estimates. In this work, we recommend selecting 𝑇 to be sufficiently large to ensure stable estimates across multiple experiments (e.g., we set 𝑇 = 10⁶) and setting 𝛼 to reflect a high confidence level, such as 0.05 or 0.01, to underpin the statistical significance of the empirical findings.
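To make the procedure concrete, the sketch below instantiates Algorithm 1 for GRR, whose distinguishability attack is simply 𝒜(𝑦) = 𝑦 (the support set of a GRR report is the report itself). This is a minimal, self-contained sketch with illustrative parameters (𝜖 = 1, 𝑘 = 25, 𝑇 = 10⁴, 𝛼 = 0.01); the Clopper-Pearson bounds are computed here by exact bisection on the binomial tails, equivalent to statsmodels' proportion_confint with method="beta".

```python
import math
import random

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p); each term computed in log-space."""
    if p <= 0.0:
        return 1.0
    if p >= 1.0:
        return 1.0 if x >= n else 0.0
    lg = math.lgamma
    return sum(
        math.exp(lg(n + 1) - lg(i + 1) - lg(n - i + 1)
                 + i * math.log(p) + (n - i) * math.log1p(-p))
        for i in range(x + 1)
    )

def clopper_pearson(x, n, alpha):
    """Exact two-sided (1 - alpha) Clopper-Pearson interval via bisection."""
    def bisect(below_root):
        lo, hi = 0.0, 1.0
        for _ in range(50):
            mid = (lo + hi) / 2.0
            if below_root(mid):
                lo = mid
            else:
                hi = mid
        return lo
    # Lower limit solves P(X >= x | p) = alpha/2 (increasing in p).
    lower = 0.0 if x == 0 else bisect(lambda p: 1.0 - binom_cdf(x - 1, n, p) < alpha / 2)
    # Upper limit solves P(X <= x | p) = alpha/2 (decreasing in p).
    upper = 1.0 if x == n else bisect(lambda p: binom_cdf(x, n, p) > alpha / 2)
    return lower, upper

def grr(v, eps, k, rng):
    """Generalized Randomized Response over V = {0, ..., k-1}."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return v                      # report the true value ("honest")
    u = rng.randrange(k - 1)          # otherwise uniform over the other k-1 values
    return u if u < v else u + 1

def ldp_auditor_grr(eps, k, T, alpha, seed=42):
    """Algorithm 1 for GRR; the attack predicts the reported value itself."""
    rng = random.Random(seed)
    v1, v2 = 0, 1
    tp = sum(grr(v1, eps, k, rng) == v1 for _ in range(T))
    fp = sum(grr(v2, eps, k, rng) == v1 for _ in range(T))
    p0_low, _ = clopper_pearson(tp, T, alpha / 2)
    _, p1_up = clopper_pearson(fp, T, alpha / 2)
    return math.log(p0_low / p1_up)   # delta = 0 for pure LDP

eps_emp = ldp_auditor_grr(eps=1.0, k=25, T=10_000, alpha=0.01)
# eps_emp is a 99%-confidence empirical lower bound, hence below the theoretical eps = 1
```

The returned estimate stays below the theoretical 𝜖 = 1, with the gap shrinking as 𝑇 grows; for encoding-based protocols, the same loop (with their support-set attacks) yields the much looser estimates discussed in Section 5.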
Limits on the empirical privacy loss estimation. The 𝜖 𝑒 𝑚 𝑝 reported by Algorithm 1 is upper bounded by the theoretical 𝜖 but also by an upper bound imposed by Monte Carlo estimation, which will be denoted by 𝜖 𝑂 𝑃 𝑇 and depends on 𝛼 and 𝑇 . For instance, let 𝛼
0.01 to get a 99 % -confidence bound and 𝑇
10 4 trials. Even if we get perfect inference accuracy with TP
𝑇 and FP
0 , the Clopper-Pearson confidence interval would produce 𝑝 ^ 0
0.9994 and 𝑝 ^ 1
0.0006 , which implies an empirical privacy loss of 𝜖 𝑒 𝑚 𝑝
7.42 . This means, with 99% probability, the true 𝜖 is at least 7.42 , and 𝜖 𝑂 𝑃 𝑇 ( 𝛼 , 𝑇 )
7.42 .
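At these two edge cases (TP = 𝑇 and FP = 0), the Clopper-Pearson bounds admit closed forms, so 𝜖_OPT(𝛼, 𝑇) can be computed directly. A small sketch (assuming, as in statsmodels' proportion_confint, that the 𝛼/2 passed to each bound is itself split evenly across the two tails, which reproduces the numbers in the text):

```python
import math

def eps_opt(alpha, T, delta=0.0):
    """Monte Carlo ceiling on eps_emp when the attack is perfect (TP = T, FP = 0).

    ClopperPearsonLower(T, T, alpha/2) has the closed form (alpha/4)**(1/T)
    (a two-sided interval at level alpha/2 puts alpha/4 in each tail), and
    ClopperPearsonUpper(0, T, alpha/2) equals 1 - (alpha/4)**(1/T).
    """
    p0 = (alpha / 4) ** (1.0 / T)
    p1 = 1.0 - p0
    return math.log((p0 - delta) / p1)

print(round(eps_opt(0.01, 10**4), 2))   # ~7.42, as in the example above
print(round(eps_opt(0.01, 10**6), 3))   # ~12.025, the bound used in Section 5.1
```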
4.2.LDP-Auditor for Longitudinal Studies
In practice, the server often needs to collect users’ data periodically over multiple data collections (i.e., longitudinal studies). Nevertheless, a well-known worst-case result in (L)DP is that repeated data collections incur a privacy loss that grows linearly in the number of collections, due to sequential composition (Dwork et al., 2014). This occurs because attackers can exploit “averaging attacks” to distinguish the user’s actual value from the added noise. For this reason, well-known LDP mechanisms for longitudinal studies, such as RAPPOR (Erlingsson et al., 2014) (deployed in Google Chrome) and 𝑑BitFlipPM (Ding et al., 2017) (deployed in Windows 10), were designed with a memoization-based solution. We discuss how to audit LDP mechanisms based on memoization in Appendix F.
Given 𝜏 data collections, we aim to audit the empirical privacy loss of LDP protocols in comparison to the upper bound of 𝜏𝜖-LDP imposed by the (L)DP sequential composition. Our main motivation is to evaluate how tight the sequential composition is for LDP protocols. Furthermore, this audit will provide insights into the privacy implications of real-world applications similar to those implemented by Apple (Apple Differential Privacy Team, 2017), in which memoization was not employed.
In Algorithm 2, we present the extension of distinguishability attacks on LDP protocols to longitudinal studies, denoted 𝒜_L. In this context, the adversary’s objective remains the same: to predict the user’s true value by determining the most probable value that would have generated the reported value 𝑦_𝑡 after 𝜏 data collections. Notably, the adversary now possesses increased knowledge because random fresh noise is added to the user’s value 𝑣 over 𝜏 collections. To perform the “averaging attack”, in each data collection, the adversary constructs the “support set” based on the reported value 𝑦_𝑡 and the LDP mechanism ℳ. The support set is then used to increment the knowledge (i.e., count) about the user’s true value and what constitutes noisy data, ultimately predicting 𝑣̂. We highlight that the exceptions are HE-based protocols, in which the notion of a support set is not applicable, namely SHE, GM and AGM, rendering Algorithm 2 inapplicable. In these protocols, Laplace or Gaussian noise with a mean of 0 is added in each data collection. Consequently, the “averaging attack” is straightforward, as it involves determining 𝑣̂ by taking the argmax of the summation of all reports. Formally, this is expressed as 𝑣̂ = argmax(∑_{𝑡=1}^{𝜏} 𝐲_𝑡).
Finally, our LDP-Auditor framework (Algorithm 1) can be used to estimate the privacy loss of LDP protocols in longitudinal studies. To achieve this, one can simply replace “𝒜(ℳ(𝑣))” in Lines 3 and 4 of Algorithm 1 with “𝒜_L(𝑣)”, i.e., the distinguishability attack outlined in Algorithm 2, which already takes ℳ into account.
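For instance, for GRR the support set of a report is the report itself, so 𝒜_L reduces to majority voting over the 𝜏 noisy reports. A minimal sketch with illustrative parameters (𝜖 = 0.5, 𝑘 = 10), showing how the attack accuracy grows with 𝜏:

```python
import math
import random

def grr(v, eps, k, rng):
    """Generalized Randomized Response over V = {0, ..., k-1}."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return v
    u = rng.randrange(k - 1)
    return u if u < v else u + 1

def averaging_attack_grr(v, eps, k, tau, rng):
    """Algorithm 2 specialized to GRR: the support set of a report y is {y},
    so the adversary simply counts how often each value is reported."""
    z = [0] * k
    for _ in range(tau):
        z[grr(v, eps, k, rng)] += 1
    return max(range(k), key=z.__getitem__)   # argmax(z)

def attack_accuracy(eps, k, tau, users=2000, seed=0):
    rng = random.Random(seed)
    return sum(averaging_attack_grr(3, eps, k, tau, rng) == 3
               for _ in range(users)) / users

acc_1 = attack_accuracy(eps=0.5, k=10, tau=1)      # one report: success ~ e^eps / (e^eps + k - 1)
acc_100 = attack_accuracy(eps=0.5, k=10, tau=100)  # fresh noise averages out across reports
```

With one report the attack succeeds only with GRR’s “honesty” probability (≈ 0.15 here), whereas after 𝜏 = 100 collections it recovers the true value most of the time; plugging 𝒜_L into Algorithm 1 then turns these true/false-positive rates into 𝜖_emp.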
Algorithm 2 Distinguishability Attack in Longitudinal Study: 𝒜_L.
Input: User value 𝑣, privacy guarantee 𝜖, LDP protocol ℳ, number of data collections 𝜏.
Output: Predicted value 𝑣̂.
1: Initialize a 𝑘-sized zero-vector 𝐳 ← [0, 0, …, 0]
2: for 𝑡 ∈ [𝜏] do
3:  User-side randomization 𝑦_𝑡 ← ℳ(𝑣)
4:  Given 𝑦_𝑡, adversary constructs support set 𝟙_ℳ
5:  for 𝑣 ∈ 𝟙_ℳ do
6:   Increment count 𝐳[𝑣] ← 𝐳[𝑣] + 1
7:  end for
8: end for
9: Predict 𝑣̂ ← argmax(𝐳)
10: return 𝑣̂

4.3.LDP-Auditor for Multidimensional Data
Another dimension of interest to the server is multidimensional data (i.e., 𝑑 ≥ 2 attributes), aiming to enable more comprehensive decision-making. Considering potential correlations among these attributes, the principles of DP sequential composition (Dwork et al., 2014) remain applicable in this context. Therefore, the existing solutions for multidimensional data, represented as 𝐯 = [𝑣_1, 𝑣_2, …, 𝑣_𝑑], include:
•
Splitting (SPL): This naïve method involves partitioning the privacy budget 𝜖 among the 𝑑 attributes, collecting each attribute under (𝜖/𝑑)-LDP. Examples based on this SPL solution are the LoPub (Ren et al., 2018) and Castell (Kikuchi, 2022) mechanisms, which are designed for joint distribution estimation.
•
Sampling (SMP): In this approach, users are divided into 𝑑 disjoint sub-groups. Each sub-group 𝑗 ∈ [𝑑] then reports the 𝑗-th attribute under 𝜖-LDP. Examples of mechanisms using the SMP solution include CALM (Zhang et al., 2018) and FELIP (Costa Filho and Machado, 2023), proposed for marginal estimation, and (Nguyên et al., 2016; Wang et al., 2019, 2021b), which introduced LDP mechanisms for mean estimation.
•
Random Sampling Plus Fake Data (RS+FD) (Arcolezi et al., 2021a): In this solution, each user samples a single attribute 𝑗 ∈ [𝑑] to report 𝑣_𝑗 under 𝜖′-LDP and reports uniform fake data for the 𝑑 − 1 non-sampled attributes. Because the sampling result is not disclosed to the aggregator, there is amplification by sampling (Li et al., 2012; Balle et al., 2018). For this reason, RS+FD utilizes an amplified privacy budget 𝜖′ = ln(𝑑 ⋅ (𝑒^𝜖 − 1) + 1) for the sampled attribute. An example based on RS+FD is the GRR-FS mechanism (Bhaila et al., 2024), designed for node-level LDP on graph data, to enable training of graph neural networks.
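For illustration, a one-line helper for this amplified budget (note that 𝑑 = 1 recovers 𝜖′ = 𝜖, i.e., no amplification without sampling uncertainty):

```python
import math

def rs_fd_amplified_budget(eps, d):
    """Privacy budget used for the sampled attribute in RS+FD; amplified because
    the server does not learn which of the d attributes was sampled."""
    return math.log(d * (math.exp(eps) - 1) + 1)

print(round(rs_fd_amplified_budget(eps=1.0, d=5), 4))  # 2.2609
```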
Upon closer examination of the three solutions, one can notice that both SPL and SMP solutions can be considered as straightforward instances of reporting one attribute with a given LDP mechanism (one at a time for SPL). Consequently, our LDP-Auditor framework can be directly used to estimate empirical privacy losses 𝜖_emp for LDP mechanisms following the SPL and SMP solutions. Therefore, in this work, our focus shifts towards auditing the RS+FD solution, for which there is a privacy amplification effect due to the uncertainty on the server side.
In Algorithm 3, we introduce the distinguishability attack designed for LDP protocols following the RS+FD solution, denoted as 𝒜_RS+FD. Here, the adversary’s objective is twofold: first, to predict the attribute that the user has sampled, and subsequently, to predict the user’s actual value. Since each user selects an attribute 𝑗 ∈ [𝑑] uniformly at random, the Bayes optimal guess for the adversary is ĵ ∼ Uniform([𝑑]). Once the attribute is predicted, the adversary constructs the “support set” based on the reported value 𝑦_ĵ and the LDP mechanism ℳ. With the support set, as in Section 3, the adversary predicts the user’s value 𝑣̂_ĵ.
Finally, we extend our LDP-Auditor framework to RS+FD protocols in Algorithm 4. The main change is due to the multidimensional data setting, for which we define 𝐯_1 and 𝐯_2 in Lines 1 and 2 of Algorithm 4. The test statistic remains unchanged, as it is derived from the distinguishability attacks as per Algorithm 3. Notice that one main difference with RS+FD auditing is that even if the user did not sample the attribute ĵ, the attack can still predict the user’s value 𝑣_𝑗 correctly due to the uniform fake data generated for that attribute. Our goal is thus to audit whether RS+FD satisfies the claimed 𝜖-LDP guarantee with amplification by sampling. Algorithm 4 builds upon the foundational principles established in Section 4.1 and in Algorithm 1 and, consequently, the framework’s correctness and reliability extend to this adaptation as well.
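A minimal sketch of this audit for a hypothetical RS+FD[GRR] instantiation (illustrative parameters 𝜖 = 2, 𝑑 = 2, 𝑘 = 25) is given below; for brevity it reports the raw plug-in estimate ln(TP/FP) rather than the Clopper-Pearson-corrected lower bound of Algorithm 4:

```python
import math
import random

def grr(v, eps, k, rng):
    """Generalized Randomized Response over V = {0, ..., k-1}."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return v
    u = rng.randrange(k - 1)
    return u if u < v else u + 1

def rs_fd_grr(values, eps, k, rng):
    """RS+FD[GRR]: sanitize one sampled attribute under the amplified budget,
    report uniform fake data for the d-1 others (sampled index stays hidden)."""
    d = len(values)
    eps_amp = math.log(d * (math.exp(eps) - 1) + 1)
    j = rng.randrange(d)  # sampled attribute, not disclosed to the server
    return [grr(values[i], eps_amp, k, rng) if i == j else rng.randrange(k)
            for i in range(d)]

def attack_rs_fd(report, rng):
    """Algorithm 3 for GRR: guess the sampled attribute uniformly, then
    predict the value from its singleton support set {y_jhat}."""
    jhat = rng.randrange(len(report))
    return report[jhat]

def plug_in_estimate(eps, d, k, T=20_000, seed=1):
    rng = random.Random(seed)
    v1, v2 = 0, 1
    tp = sum(attack_rs_fd(rs_fd_grr([v1] * d, eps, k, rng), rng) == v1
             for _ in range(T))
    fp = sum(attack_rs_fd(rs_fd_grr([v2] * d, eps, k, rng), rng) == v1
             for _ in range(T))
    return math.log(tp / fp)

est = plug_in_estimate(eps=2.0, d=2, k=25)  # stays below the claimed eps = 2
```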
Algorithm 3 Distinguishability Attack on RS+FD: 𝒜_RS+FD.
Input: User values 𝐯 = [𝑣_1, 𝑣_2, …, 𝑣_𝑑], privacy guarantee 𝜖, RS+FD protocol ℳ.
Output: Predicted value 𝑣̂_ĵ.
1: for 𝑖 ∈ [𝑑] do ▷ cf. RS+FD (Arcolezi et al., 2021a, Algorithm 1)
2:  User-side randomization 𝑦_𝑖 ← ℳ(𝑣_𝑖)
3: end for
4: Adversary predicts user’s sampled attribute ĵ ∼ Uniform([𝑑])
5: Given 𝑦_ĵ, construct support set 𝟙_ℳ
6: Predict 𝑣̂_ĵ ∼ Uniform(𝟙_ℳ) ▷ cf. Section 3
7: return 𝑣̂_ĵ

Algorithm 4 LDP-Auditor for RS+FD Protocols.
Input: Theoretical 𝜖, LDP protocol ℳ, distinguishability attack 𝒜_RS+FD, values 𝑣_1, 𝑣_2 ∈ 𝑉, trial count 𝑇, confidence level 𝛼.
Output: Estimated privacy loss 𝜖_emp.
1: 𝐯_1 ← [𝑣_1, 𝑣_1, …, 𝑣_1]_{1×𝑑}
2: 𝐯_2 ← [𝑣_2, 𝑣_2, …, 𝑣_2]_{1×𝑑}
3: TP ← 0, FP ← 0 ▷ True Positive (TP) and False Positive (FP)
4: for 𝑖 ∈ [𝑇] do
5:  if 𝒜_RS+FD(𝐯_1) = 𝑣_1 then TP ← TP + 1
6:  if 𝒜_RS+FD(𝐯_2) = 𝑣_1 then FP ← FP + 1
7: end for
8: p̂_0 ← ClopperPearsonLower(TP, 𝑇, 𝛼/2)
9: p̂_1 ← ClopperPearsonUpper(FP, 𝑇, 𝛼/2)
10: return 𝜖_emp ← ln(p̂_0/p̂_1)

5.Experimental Evaluation
This section presents our experimental setting to assess the proposed audit framework as well as the main results obtained.
5.1.General Setup of Experiments
For all experiments, we have used the following setting:
•
Environment. All algorithms are implemented in Python 3 with the Numpy (van der Walt et al., 2011), Numba (Lam et al., 2015), Ray (Moritz et al., 2018), Multi-Freq-LDPy (Arcolezi et al., 2022b) and pure-LDP (Maddock, 2021; Cormode et al., 2021) libraries, and run on a local machine with 2.50GHz Intel Core i9 and 64GB RAM. Our LDP-Auditor tool is open-sourced in a GitHub repository (Arcolezi, 2024).
•
Audit parameters. We set 𝑇 = 10⁶ trials and use Clopper-Pearson confidence intervals with 𝛼 = 0.01 (i.e., our estimates hold with 99% confidence). These parameters establish the Monte Carlo upper bound as 𝜖_OPT = 12.025.
•
Stability. Since LDP protocols are randomized, we report average results with standard deviation over 5 runs.
5.2.Main Auditing Results
We begin by presenting our main LDP auditing results, considering:
•
LDP protocols. We audit the eight 𝜖 -LDP frequency estimation protocols described in Section 3.1 and the six ( 𝜖 , 𝛿 )-LDP frequency estimation protocols described in Section 3.2.
•
Theoretical upper bound. We evaluated the LDP frequency estimation protocols in high, mid and low privacy regimes over the range 𝜖 ∈ {0.25, 0.5, 0.75, 1, 2, 4, 6, 10}. The chosen range for 𝜖 follows the state-of-the-art LDP literature (e.g., see (Acharya et al., 2019; Zhang et al., 2018; Mahawaga Arachchige et al., 2020; Wang et al., 2021a; Wu et al., 2023)) and real-world implementations (Desfontaines, 2021) (e.g., RAPPOR (Erlingsson et al., 2014) with 𝜖 = 0.5).
•
Delta parameter. For approximate LDP, we set 𝛿 = 10⁻⁵.
•
Domain size. We also varied the domain size 𝑘 ∈ { 25 , 50 , 100 , 150 , 200 } as it influences the performance of the distinguishability attacks.
Figure 2 illustrates the theoretical 𝜖 values (x-axis) versus the estimated 𝜖_emp values (y-axis), demonstrating the comparison across various domain sizes 𝑘, for the eight 𝜖-LDP frequency estimation protocols: GRR, SS, SUE, OUE, BLH, OLH, SHE and THE. Similarly, Figure 3 presents analogous plots for the six (𝜖, 𝛿)-LDP frequency estimation protocols: AGRR, ASUE, ABLH, AOLH, GM and AGM. Henceforth, when discussing our results, the notation “(A)GRR” will be used whenever the findings are applicable to both GRR and AGRR protocols (analogously for other LDP protocols).
Figure 2. Theoretical 𝜖 values (x-axis) versus estimated 𝜖_emp values (y-axis) using our LDP-Auditor framework with 𝛿 = 0. We compare different domain sizes 𝑘 for eight state-of-the-art 𝜖-LDP frequency estimation protocols: GRR (Kairouz et al., 2016), SS (Wang et al., 2016; Ye and Barg, 2018), SUE (Erlingsson et al., 2014), OUE (Wang et al., 2017), BLH (Bassily and Smith, 2015), OLH (Wang et al., 2017), SHE (Dwork et al., 2006) and THE (Wang et al., 2017).

Figure 3. Theoretical 𝜖 values (x-axis) versus estimated 𝜖_emp values (y-axis) using our LDP-Auditor framework with 𝛿 = 10⁻⁵. We compare different domain sizes 𝑘 for six state-of-the-art (𝜖, 𝛿)-LDP frequency estimation protocols: AGRR (Wang et al., 2021b), ASUE (Wang et al., 2021b), ABLH (Wang et al., 2021b), AOLH (Wang et al., 2021b), GM (Dwork et al., 2014) and AGM (Balle and Wang, 2018). For GM, we only audit for certifiable theoretical upper bounds 𝜖 ≤ 1.
Effect of Encoding and Perturbation Functions. It is important to note that LDP frequency estimation protocols employ different encoding and perturbation functions, leading to varying levels of susceptibility to distinguishability attacks (Emre Gursoy et al., 2022; Arcolezi et al., 2023a). Notably, as shown in Figure 2 and Figure 3, (A)GRR is the only LDP protocol that achieves tight empirical privacy estimates for 𝜖_emp. As described in Section 3.1, auditing (A)GRR’s privacy guarantees is straightforward since there is no specific encoding (i.e., the input and output spaces are equal). Conversely, all other LDP protocols (i.e., SS, UE-, LH- and HE-based) incorporate specific pre-processing encoding functions, which may result in information loss and/or additional randomness.

For instance, (A)BLH hashes the input set 𝑉 of size 𝑘 to {0, 1} and thus results in excessive loss of information due to collisions. Even if the bit is transmitted correctly after the (A)GRR perturbation, the server can only obtain one bit of information about the input (i.e., to which half of the input domain the value belongs). For these reasons, BLH consistently led to the worst auditing results among the 𝜖-LDP protocols, with a “flat” 𝜖_emp < 1 estimation for 𝜖 ≥ 2. Indeed, although both (A)LH protocols present similar empirical privacy losses 𝜖_emp in high privacy regimes (the lowest among all other LDP protocols), the difference is remarkable in favor of (A)OLH in mid to low privacy regimes. Thus, (A)OLH preserves more utility than (A)BLH, while providing tighter privacy loss estimation.
Concerning the SS protocol that reports a subset Ω of 𝜔 values, one can note from Figure 2 that the empirical privacy loss 𝜖_emp demonstrated similar results to other LDP protocols in high privacy regimes. However, an exception occurs in low privacy regimes, in which SS equals GRR due to a subset size 𝜔 = 1, resulting in tight estimates for 𝜖_emp. Regarding UE-based protocols, in high-privacy regimes (𝜖 ≤ 1), both SUE and OUE presented similar empirical privacy estimates for 𝜖_emp in Figure 2. In mid-privacy regimes (1 < 𝜖 ≤ 4), OUE presented higher empirical privacy losses 𝜖_emp than SUE. However, OUE reached a “plateau” estimation for 𝜖_emp in low privacy regimes (𝜖 ≥ 4), explained by an upper bound on the distinguishability attack (see (Emre Gursoy et al., 2022)). This plateau behavior is also observed for the (A)OLH protocol in low privacy regimes due to a comparable upper bound on the attacker effectiveness. Comparing approximate- and pure-SUE protocols, similar results were noticed for (A)SUE in Figure 2 and Figure 3, considering all privacy regimes.
Lastly, for HE-based protocols, similar estimates for 𝜖_emp were observed across all privacy regimes for both 𝜖-LDP protocols, namely SHE and THE, in Figure 2, albeit with varying sensitivity to the domain size 𝑘 (discussed afterwards). In contrast, from Figure 3, one can notice that the (𝜖, 𝛿)-LDP GM protocol led to the worst auditing results among all LDP protocols. Therefore, in addition to AGM’s ability to preserve greater utility than GM, it also offers more precise empirical privacy loss estimations.
Impact of domain size. As the domain size 𝑘 increases, one can observe in Figure 2 and Figure 3 a direct impact on the empirical privacy loss estimation 𝜖_emp for all LDP protocols, in which the gap with the theoretical 𝜖 increases. However, the impact is minor for the (A)GRR protocol, even in high privacy regimes. Conversely, for all other LDP protocols, this impact is substantial, with empirical 𝜖_emp estimates ranging within ≤ 2.5x of the theoretical 𝜖 (when 𝑘 = 25) up to ≤ 5x (when 𝑘 = 200). These results are consistent with the distinguishability attack effectiveness, which decreases for higher 𝑘 (i.e., more uncertainty) (Emre Gursoy et al., 2022; Arcolezi et al., 2023a). For instance, in the case of GRR, the probability 𝑝 = 𝑒^𝜖 / (𝑒^𝜖 + 𝑘 − 1) of being “honest” in Equation (2) decreases as 𝑘 grows. In other mechanisms, there is a higher likelihood of introducing noise in the output 𝑦, such as by flipping more bits from 0 to 1 in (A)UE protocols.
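To make this concrete, the GRR “honesty” probability at 𝜖 = 1 drops sharply with the domain size:

```python
import math

def grr_honest_prob(eps, k):
    """Probability p = e^eps / (e^eps + k - 1) that GRR reports the true value."""
    return math.exp(eps) / (math.exp(eps) + k - 1)

for k in (25, 200):
    print(k, round(grr_honest_prob(1.0, k), 4))  # 25 -> 0.1017, 200 -> 0.0135
```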
Nevertheless, exceptions exist for both OUE and OLH protocols, for which, in low privacy regimes (when 𝜖 ≥ 4), a larger domain size 𝑘 leads to tighter estimates of 𝜖_emp than smaller domain sizes. Although to a smaller extent, the THE protocol also yields more accurate estimates for higher 𝑘 when 𝜖 = 10. Taking OUE as an example, these results can be attributed to the fact that the bit corresponding to the user’s value is transmitted with a random probability of 1/2 (cf. Equation (3)). Consequently, if the domain size is small, it results in a higher false positive rate, which subsequently decreases the estimated empirical privacy loss 𝜖_emp.
Generality of Our Findings. Overall, the gap between the empirical 𝜖_emp and theoretical 𝜖 privacy guarantees tends to widen in high privacy regimes (i.e., lower 𝜖 values). This trend is particularly pronounced when considering the sensitivity of different LDP protocols to the domain size. Lastly, we highlight that all 𝜖-LDP and (𝜖, 𝛿)-LDP frequency estimation protocols audited herein are building blocks of LDP mechanisms for more complex tasks such as: heavy hitter estimation (Bassily and Smith, 2015; Wang et al., 2021a), joint distribution estimation (Ren et al., 2018; Zhang et al., 2018; Kikuchi, 2022; Costa Filho and Machado, 2023), frequent item-set mining (Wang et al., 2018; Wu et al., 2023), machine learning (Mahawaga Arachchige et al., 2020; Yilmaz et al., 2020), frequency estimation of multidimensional data (Nguyên et al., 2016; Wang et al., 2019; Arcolezi et al., 2021a) and frequency monitoring (Erlingsson et al., 2014; Ding et al., 2017; Vidal et al., 2020; Arcolezi et al., 2022a, 2023b). Thus, our audit results provide generic insights that shed light on several critical factors influencing the estimation of the local privacy loss.
5.3.Case Study #1: Approximate- VS Pure-LDP
In theory, Bun et al. (2019) proved that in the local DP model, approximate privacy is actually never more useful than pure privacy. We will now compare approximate- and pure-LDP by assessing the impact of 𝛿 on the LDP auditing process. In these experiments, we use the following parameter values:
•
LDP protocols. We audit the six ( 𝜖 , 𝛿 )-LDP protocols described in Section 3.2.
•
Theoretical upper bound. Because GM requires 𝜖 ≤ 1 (Dwork et al., 2014), we vary the privacy guarantee only in high privacy regimes, within the range 𝜖 ∈ { 0.25 , 0.5 , 0.75 , 1 } .
•
Delta parameter. We vary the 𝛿 parameter within the range 𝛿 ∈ {0, 10⁻⁷, 10⁻⁶, 10⁻⁵, 10⁻⁴}; 𝛿 = 0 means pure 𝜖-LDP.
•
Domain size. We vary the domain size 𝑘 ∈ {25, 100, 150, 200}. We present results for 𝑘 ∈ {25, 200} in the main paper and defer the others to Appendix G.1.
Figure 4 illustrates the theoretical 𝜖 values (x-axis) versus estimated 𝜖_emp values (y-axis) when varying the 𝛿 parameter and domain size 𝑘 ∈ {25, 200}, using our LDP-Auditor framework. Note that for both GM and AGM protocols, there is no 𝜖_emp value when 𝛿 = 0, as these protocols do not have pure 𝜖-LDP variations.
Interestingly, for protocols such as AGRR, ASUE, ABLH and AOLH, our observations corroborate the theoretical assertions made by Bun et al. (2019) regarding the comparative utility of approximate versus pure privacy. More precisely, Figure 4 and Figure 10 (for 𝑘 ∈ {100, 150}) indicate that variations in 𝛿 do not significantly alter the estimated privacy loss 𝜖_emp across these protocols. This consistency in 𝜖_emp values, irrespective of 𝛿 adjustments, suggests that for LDP protocols with a finite range, the audit outcomes for approximate privacy closely align with those for pure privacy. Conversely, the GM and AGM protocols exhibit distinct behaviours. More precisely, as 𝛿 increases, signaling a relaxation of the privacy constraint, we observe a narrowing gap between the theoretical 𝜖 and empirical 𝜖_emp values. This trend highlights a crucial aspect of LDP protocols with an infinite range, in which allowing for a nonzero 𝛿 directly influences the perceived privacy protection, leading to a more pronounced estimation of the privacy loss. Finally, the domain size has a minor effect on the estimated privacy loss for the AGRR, ASUE, ABLH and AOLH protocols, with a decreasing 𝜖_emp value for higher 𝑘. In contrast, for both GM and AGM protocols, the estimated 𝜖_emp values increase (i.e., indicating less privacy) as 𝑘 increases.
(a) Domain size 𝑘 = 25. (b) Domain size 𝑘 = 200.

Figure 4. Theoretical 𝜖 values (x-axis) versus estimated 𝜖_emp values (y-axis) using our LDP-Auditor framework. We assess different privacy guarantees for six (𝜖, 𝛿)-LDP protocols across domain sizes 𝑘 ∈ {25, 200}. The special case 𝛿 = 0 corresponds to pure 𝜖-LDP, which GM and AGM do not satisfy.

5.4.Case Study #2: Auditing the Privacy Loss of Local Hashing Encoding Without LDP
As discussed previously in Section 5.2, both LH protocols present the least tight estimates for 𝜖_emp in high privacy regimes. Even worse, BLH’s estimated privacy loss remains 𝜖_emp < 1 for 𝜖 ≥ 2, leading to empirical privacy losses up to 10x below the theoretical 𝜖. Motivated by these observations, we performed an additional study to audit the impact of local hashing encoding with no LDP perturbation (i.e., 𝜖 → ∞), which we refer to as Local Hashing Only (LHO). More precisely, the LHO reporting mechanism is LHO(𝑣) ≔ ⟨H, H(𝑣)⟩, and we used the same distinguishability attack 𝒜_LH described in Section 3 to attack LHO. For these experiments, we use the following parameter values:
•
LHO hash domain. We vary the hash domain [ 𝑔 ] within the range 𝑔 ∈ { 2 , 4 , 6 , 8 , 10 } .
•
Domain size. We vary the domain size within the range 𝑘 ∈ { 25 , 50 , 100 , 150 , 200 } .
Figure 5. Estimated 𝜖_emp (y-axis) versus hash domain 𝑔 (x-axis) using our LDP-Auditor framework comparing different domain sizes 𝑘 for LH encoding with no LDP randomization.

Figure 5 presents the estimated 𝜖_emp values (y-axis) for LHO protocols according to the hash domain size 𝑔 (x-axis) using our LDP-Auditor framework for different domain sizes 𝑘. Observations from Figure 5 underscore that, even for a binary hash domain (𝑔 = 2), the estimated privacy loss remains 𝜖_emp < 1, aligning with high privacy regimes suitable for real-world applications. Indeed, even though there is no LDP randomization of the hashed value ℎ ∈ {0, 1}, the adversary still has to make a random guess over the support set 𝟙_LH. Given a general (universal) family of hash functions ℋ, each input value 𝑣 ∈ 𝑉 is hashed into a value in [𝑔] by a hash function H ∈ ℋ, and the universal property requires ∀𝑣_1, 𝑣_2 ∈ 𝑉, 𝑣_1 ≠ 𝑣_2: Pr_{H ∈ ℋ}[H(𝑣_1) = H(𝑣_2)] ≤ 1/𝑔. In other words, approximately 𝑘/𝑔 values are mapped to the same hashed value ℎ = H(𝑣) in [𝑔]. Although local hashing pre-processing by itself has no proven DP guarantees, this significant loss of information in the encoding step suggests potential privacy gains for LH protocols due to the presence of many random collisions. In a similar context, DP-Sniper (Bichsel et al., 2021), a method developed to find violations of DP, also encountered difficulties estimating 𝜖 for the original RAPPOR (Erlingsson et al., 2014), which is based on Bloom filters and employs hash functions.
One could expect a similar privacy gain for other LDP mechanisms based on sketching, such as Apple’s Count-Mean Sketch (CMS) (Apple Differential Privacy Team, 2017) and Hadamard (Acharya et al., 2019) mechanisms, which we leave for future audit investigations. Furthermore, as we increase the hash domain size 𝑔 > 2 without introducing any LDP perturbation, the estimated 𝜖_emp starts to rise, reaching medium privacy regimes 1 < 𝜖_emp ≤ 2.5. This outcome is expected since preserving more information during the encoding step decreases the support set size |𝟙_LH|, which naturally enhances the accuracy of the distinguishability attack 𝒜_LH. Therefore, the estimated privacy loss 𝜖_emp for LH-based protocols will be lower if the domain size 𝑘 is high and/or if the new hashed domain 𝑔 is small.
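This loss of information can be simulated directly: draw a hash function from a universal family (a prime-modulus family is used below for illustration), report ⟨H, H(𝑣)⟩ with no perturbation, and let the adversary guess uniformly over the support set:

```python
import random

P = 101  # prime modulus, larger than the domain size k

def lh_hash(a, b, g, v):
    """A hash from the universal family H_{a,b}(v) = ((a*v + b) mod P) mod g."""
    return ((a * v + b) % P) % g

def lho_attack_accuracy(k, g, trials=5000, seed=7):
    """LHO(v) = <H, H(v)> with no LDP perturbation; the adversary guesses
    uniformly over the support set {u : H(u) = H(v)} of size ~ k/g."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        v = rng.randrange(k)
        a, b = rng.randrange(1, P), rng.randrange(P)   # fresh random hash function
        h = lh_hash(a, b, g, v)                        # the LHO report
        support = [u for u in range(k) if lh_hash(a, b, g, u) == h]
        hits += rng.choice(support) == v
    return hits / trials

acc_g2 = lho_attack_accuracy(k=100, g=2)   # ~ g/k = 0.02: close to blind guessing
acc_g8 = lho_attack_accuracy(k=100, g=8)   # larger g -> smaller support set -> stronger attack
```

The attack accuracy stays near 𝑔/𝑘, matching the observation that roughly 𝑘/𝑔 values collide with the true one; feeding this attack to Algorithm 1 yields the low 𝜖_emp values reported in Figure 5.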
5.5.Case Study #3: Auditing the LDP Sequential Composition in Longitudinal Studies
As discussed in Section 4.2, we aim to audit the empirical privacy loss of LDP protocols in longitudinal studies (i.e., over 𝜏 data collections). This allows us to assess the gap between the empirical local privacy loss estimation and the theoretical upper bound imposed by the (L)DP sequential composition. For these experiments, we use both Algorithms 1 and 2 with the following parameter values:
•
LDP protocols. We audit the eight 𝜖-LDP protocols from Section 3.1. Additionally, in light of the findings presented in Section 5.3, we only audit the two (𝜖, 𝛿)-LDP protocols that exhibit sensitivity to 𝛿, namely GM and AGM.
•
Number of data collections. We vary the number of data collections in the range 𝜏 ∈ { 5 , 10 , 25 , 50 , 75 , 100 , 250 , 500 } .
•
Theoretical upper bound. We vary the per-report privacy guarantee in high privacy regimes, in the range 𝜖 ∈ { 0.25 , 0.5 , 0.75 , 1 } . By the sequential composition, the theoretical upper bound after 𝜏 data collections is 𝜏 𝜖 -LDP.
•
Delta parameter. For approximate LDP, we set 𝛿 = 10⁻⁵.
•
Domain size. We vary the domain size 𝑘 ∈ { 2 , 25 , 50 , 100 } . We present results for 𝑘 ∈ { 2 , 100 } in the main paper and defer the others to Appendix G.2.
Figure 6 illustrates the estimated 𝜖_emp values (y-axis) for the eight 𝜖-LDP and both GM and AGM (𝜖, 𝛿)-LDP protocols according to the number of data collections 𝜏 (x-axis), per-report 𝜖 and domain size 𝑘 ∈ {2, 100}, using our LDP-Auditor framework. From Figure 6(a), one can notice that both GRR and SS protocols have equal 𝜖_emp estimates, as for 𝑘 = 2, the subset size is 𝜔 = 1 (i.e., SS reduces to GRR). These two LDP protocols exhibited the tightest empirical privacy estimates for 𝜖_emp, aligning with the observations made in Section 5.2 (see Figure 8). In contrast, the approximate LDP protocols, notably GM and AGM, showed less favorable estimates for privacy loss, which corroborates the findings illustrated in Figure 3 in Section 5.2. The remaining pure-LDP protocols – SUE, OUE, BLH and OLH – display intermediate privacy loss estimates.
Furthermore, Figure 6(b) reveals that, for a larger domain size of 𝑘 = 100, the results obtained are reversed. Among pure-LDP protocols, GRR yields the lowest 𝜖_emp estimation for all experimented 𝜏 values, followed by the SHE protocol. The reason for this is that the probability of being “honest”, 𝑝 = 𝑒^𝜖 / (𝑒^𝜖 + 𝑘 − 1) in Equation (2), decreases as the domain size 𝑘 increases. Therefore, even after many data collections 𝜏, the adversary still has too much noisy data to filter, which makes the distinguishability attack less efficient. Similar to Figure 6, in Figure 11, approximate-LDP protocols (GM and AGM) led to the lowest empirical privacy loss estimates for 𝜖_emp.
(a) Domain size 𝑘 = 2. (b) Domain size 𝑘 = 100.

Figure 6. Estimated 𝜖_emp (y-axis) versus the number of data collections 𝜏 (x-axis) using our LDP-Auditor framework for different domain sizes 𝑘 ∈ {2, 100}. We vary the per-report 𝜖-LDP guarantee for the following LDP frequency estimation protocols: GRR, SS, SUE, OUE, BLH, OLH, SHE, THE, GM and AGM. For both approximate-LDP protocols, namely GM and AGM, 𝛿 = 10⁻⁵.
Moreover, from both Figure 6 and Figure 11, it is evident that even after 𝜏 = 500 collections, none of the LDP protocols achieves the optimal upper bound 𝜖_OPT imposed by the Monte Carlo estimation when the per-report privacy guarantee is too small (i.e., 𝜖 = 0.25). However, as the number of data collections becomes sufficiently large (i.e., 𝜏 ≥ 250) and the privacy guarantee per report also increases (e.g., 𝜖 ≥ 0.75), all pure-LDP protocols, with the exception of GRR, manage to achieve the Monte Carlo upper bound, resulting in 𝜖_emp = 𝜖_OPT. Yet, as the number of data collections grows further (i.e., 𝜏 → ∞), we anticipate that 𝜖_emp will converge to 𝜖_OPT for all LDP protocols even when the per-report 𝜖 < 0.5.
These results are quite surprising since one would imagine the privacy leakage to be higher for repeated data collections when random fresh noise is added per report. Nevertheless, as the domain size increases, the performance of the distinguishability attack decreases (Emre Gursoy et al., 2022; Arcolezi et al., 2023a). As a consequence, for real-world deployments with substantial domain sizes (e.g., list of Internet domains), exclusively relying on theoretical 𝜖 -LDP guarantees may prove unrealistic. Privacy auditing becomes imperative in such scenarios, to establish appropriate privacy parameters, thus avoiding adding more noise than required. Notably, these auditing results emphasize a crucial aspect for longitudinal studies: a substantial gap exists between theory (sequential composition) and practice (LDP auditing). To narrow this gap, one could consider designing more powerful attacks for longitudinal studies beyond those proposed here in Algorithm 2. Alternatively, research efforts could be directed towards developing more sophisticated compositions for 𝜖 -LDP mechanisms.
5.6.Case Study #4: LDP Auditing with Multidimensional Data
As discussed in Section 4.3, our audit results outlined in Section 5.2 are also valid for LDP mechanisms based on the standard SPL and SMP solutions for multidimensional data. Thus, in this section, we aim to audit LDP protocols following the RS+FD (Arcolezi et al., 2021a) solution. For these experiments, we use both Algorithms 3 and 4, considering:
•
LDP protocols. We audit five 𝜖-LDP RS+FD protocols: RS+FD[GRR], RS+FD[SUE-z], RS+FD[SUE-r], RS+FD[OUE-z] and RS+FD[OUE-r]. The difference between UE-z and UE-r lies in how the fake data is generated (Arcolezi et al., 2021a). More precisely, UE-z initializes a zero-vector while UE-r initializes a random one-hot-encoded vector. Next, SUE or OUE is used to sanitize these vectors.
•
Theoretical upper bound. We vary the theoretical privacy parameter 𝜖 in high, mid and low privacy regimes over the same range 𝜖 ∈ { 0.25 , 0.5 , 0.75 , 1 , 2 , 4 , 6 , 10 } as in Section 5.2.
•
Domain size and number of attributes. We vary the domain size as 𝑘 ∈ {2, 25, 50, 100} and the number of attributes over 𝑑 ∈ {2, 10}. When 𝑑 = 2, the domain sizes are 𝐤 = [2, 2], 𝐤 = [25, 25], 𝐤 = [50, 50] and 𝐤 = [100, 100], and similarly for 𝑑 = 10. We present results for 𝑘 ∈ {2, 100} in the main paper and defer the others to Appendix G.3.
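As a rough illustration of the RS+FD idea, one genuinely sanitized attribute hidden among uniformly random fake ones, the sketch below implements an RS+FD[GRR]-style client. The function names are ours and the amplified budget 𝜖' = ln(𝑑(𝑒^𝜖 − 1) + 1) follows our reading of the amplification-by-sampling argument in Arcolezi et al. (2021a); treat this as an approximation, not the reference implementation:

```python
import math
import random

def grr(v, k, eps):
    # Generalized Randomized Response over a domain of size k.
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    return v if random.random() < p else random.choice([x for x in range(k) if x != v])

def rs_fd_grr(values, ks, eps):
    """RS+FD[GRR]-style client: sanitize one uniformly sampled attribute with an
    amplified budget eps' = ln(d * (e^eps - 1) + 1), and report uniformly random
    fake values for the remaining d - 1 attributes. The server cannot tell which
    entry is the sanitized real one."""
    d = len(values)
    eps_amp = math.log(d * (math.exp(eps) - 1) + 1)  # amplification by sampling
    j = random.randrange(d)                          # attribute reported genuinely
    report = [random.randrange(k) for k in ks]       # fake data for every attribute
    report[j] = grr(values[j], ks[j], eps_amp)       # overwrite the sampled one
    return report
```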
Figure 7. Theoretical 𝜖 (x-axis) versus estimated 𝜖_emp (y-axis) using our LDP-Auditor framework, comparing different numbers of attributes 𝑑 for five RS+FD (Arcolezi et al., 2021a) protocols, with domain sizes (a) 𝑘 = 2 and (b) 𝑘 = 100.
Figure 7 compares the theoretical 𝜖 values (x-axis) with the estimated 𝜖_emp values (y-axis) for the five RS+FD protocols, based on the number of attributes 𝑑 and domain size 𝑘 ∈ {2, 100}, using our LDP-Auditor framework. From Figure 7, it is clear that, once again, GRR exhibits tighter empirical privacy losses 𝜖_emp than UE-based protocols following the RS+FD solution. However, in contrast to Section 5.2, the estimated 𝜖_emp for GRR now displays a “plateau behaviour” after theoretical 𝜖 ≥ 4. This plateau arises because the probability of reporting the true value under GRR reaches high values with 𝜖 ≥ 4. Notably, among the family of UE protocols, SUE demonstrates a tighter empirical 𝜖_emp than OUE when the domain is binary (see Figure 7(a)). However, SUE exhibits lower 𝜖_emp than OUE when 𝑘 = 100 (see Figure 7(b)). This observation can be attributed to the advantage of SUE in transmitting the true bit with a probability 𝑝 > 1/2, while OUE has 𝑝 = 1/2. Consequently, the distinguishability attack achieves higher accuracy for SUE, increasing the true positive rate and decreasing the false positive rate, resulting in higher 𝜖_emp estimates. Moreover, the different fake data generation procedures for UE protocols (UE-z vs. UE-r) did not result in significant changes in the audit results.
Another intriguing result is that the empirical privacy loss is lower for a binary domain than for 𝑘 = 100. This behavior is primarily due to the impact of fake data on distinguishability attacks. In a binary domain, fake data significantly increases the false positive rate, leading to a decrease in the estimated privacy loss 𝜖_emp. For a larger domain size, however, fake data has a lesser impact on the false positive rate, as the distinguishability attack has more room for error. Overall, these nuanced relationships underscore the intricate interplay between domain size, the use of fake data and the tightness of local privacy loss estimation in the context of RS+FD protocols.
5.7.Case Study #5: Debugging a Python Implementation of UE Protocols
Finally, we show how our LDP-Auditor framework can also serve as a tool for verifying the correctness of LDP implementations. In our case study, we focused on the pure-LDP (Maddock, 2021) package (version 1.1.2) and showed that its UE protocols fail to meet the claimed level of 𝜖-LDP. Our objective here is not to point out issues with a particular code base or library but rather to demonstrate the potential of our approach for verifying and debugging LDP protocols. Following a similar experimental setup to the one outlined in Section 5.2, Figure 8 presents a comparison of the theoretical 𝜖 values (x-axis) with the estimated 𝜖_emp values (y-axis) using our LDP-Auditor framework. We consider different domain sizes 𝑘 for both the SUE and OUE protocols, as implemented in the pure-LDP package. The inconsistencies we found between the lower and upper bounds are highlighted within the orange rectangle.
Figure 8.Theoretical 𝜖 (x-axis) versus estimated 𝜖 𝑒 𝑚 𝑝 (y-axis) using our LDP-Auditor framework comparing different domain sizes 𝑘 for both SUE and OUE protocols, implemented in the pure-LDP package (Maddock, 2021). The orange rectangle highlights inconsistencies between the observed empirical privacy loss and the theoretical upper bound.
From Figure 8, it is clear that LDP-Auditor has detected inconsistencies between the lower and upper bounds, highlighted by the orange rectangle. After investigating the pure-LDP code, we were able to identify the specific location of the implementation error. The error arises from the following steps in the _perturb function of the UEClient class:
(1) The user initializes a zero-vector y = [0, 0, …, 0] of size 𝑘;
(2) The user samples indexes of values in y that will flip from 0 to 1 with probability 𝑞 (as indicated in Equation (3));
(3) With probability 𝑝 (as indicated in Equation (3)), the index at position y_𝑣 (representing the user’s true value) is flipped from 0 to 1;
(4) Missing step: if y_𝑣 was set to 1 in step (2) but not in step (3), there should be a correction to revert it back to 0.
This was a simple mistake that was directly fixed by the authors (Maddock, 2023) following our communication with them. However, it is crucial to emphasize that this minor error had implications for the 𝜖-LDP guarantees. Specifically, the bit corresponding to the user’s value was transmitted more often than intended, particularly in high privacy regimes. In mid to low privacy regimes, the bug might go unnoticed, given the already high probability of transmitting the bit as 1. This explains why LDP-Auditor failed to detect inconsistencies between the empirical lower bound and the theoretical upper bound for 𝜖 ≥ 1. In such cases, specialized tools designed for identifying DP violations, like DP-Sniper (Bichsel et al., 2021), would likely have been effective in detecting the bug. Therefore, we strongly encourage end-users of the pure-LDP package to update to the latest version 1.2.0.
6.Conclusion and Perspectives
In this work, we have introduced the LDP-Auditor framework as a powerful tool for empirically estimating the privacy loss of LDP frequency estimation protocols. Our main LDP audit results provide new insights into the empirical local privacy loss in practical adversarial settings. Through several case studies, we have demonstrated the framework’s effectiveness in identifying significant discrepancies between theoretical guarantees and empirical privacy loss. These findings contribute to a nuanced understanding of the challenges and considerations in the design and implementation of LDP mechanisms. As LDP continues to gain prominence in privacy-preserving data analysis, LDP-Auditor can serve as a valuable resource for practitioners and researchers aiming to assess and enhance the privacy guarantees of their systems.
Nevertheless, while we instantiated LDP-Auditor with distinguishability attacks on the user’s value (Emre Gursoy et al., 2022; Arcolezi et al., 2023a), our future plans involve expanding the scope of LDP auditing to incorporate other adversarial analyses proposed in the literature, such as pool inference (Gadotti et al., 2022), data change detection (Arcolezi et al., 2023b) and re-identification attacks (Murakami and Takahashi, 2021; Arcolezi et al., 2023a). We also suggest extending LDP-Auditor to encompass a wider range of LDP applications (e.g., mean estimation), as these may introduce unique challenges and considerations during the auditing process. Additionally, we aim to integrate the Neyman-Pearson lemma into LDP-Auditor’s analysis, leveraging its theoretical foundation to enhance the precision of our auditing framework. Lastly, one can envision utilizing LDP-Auditor as a means to establish a unified local privacy loss 𝜖_emp when comparing mechanisms of different locally private definitions, such as 𝑑-privacy (Chatzikokolakis et al., 2013), 𝛼-PIE (Murakami and Takahashi, 2021) and LDP (Kasiviswanathan et al., 2011).
Acknowledgements. We thank Catuscia Palamidessi, Aurélien Bellet and Mathias Lécuyer for their helpful discussions and feedback throughout this project. The authors also deeply thank the anonymous PETS reviewers for their insightful suggestions. This work has been partially supported by the “ANR 22-PECY-0002” IPOP (Interdisciplinary Project on Privacy) project of the Cybersecurity PEPR. The work of Héber H. Arcolezi was partially supported by the European Research Council (ERC) project HYPATIA under the European Union’s Horizon 2020 research and innovation programme, grant agreement n. 835294. Sébastien Gambs is supported by the Canada Research Chair program as well as a Discovery Grant from NSERC.
References
Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS '16). ACM, New York, NY, USA, 308–318. https://doi.org/10.1145/2976749.2978318
Jayadev Acharya, Ziteng Sun, and Huanyu Zhang. 2019. Hadamard Response: Estimating Distributions Privately, Efficiently, and with Little Communication. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research, Vol. 89). PMLR, 1120–1129.
Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. Brendan McMahan, and Vinith Suriyakumar. 2023. One-shot Empirical Privacy Estimation for Federated Learning. arXiv preprint arXiv:2302.03098 (2023).
Héber H. Arcolezi. 2024. https://github.com/hharcolezi/ldp-audit.
Héber H. Arcolezi, Jean-François Couchot, Bechara Al Bouna, and Xiaokui Xiao. 2021a. Random Sampling Plus Fake Data: Multidimensional Frequency Estimates With Local Differential Privacy. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. ACM, 47–57. https://doi.org/10.1145/3459637.3482467
Héber H. Arcolezi, Jean-François Couchot, Bechara Al Bouna, and Xiaokui Xiao. 2021b. Longitudinal Collection and Analysis of Mobile Phone Data with Local Differential Privacy. In Privacy and Identity Management. Springer International Publishing, Cham, 40–57. https://doi.org/10.1007/978-3-030-72465-8_3
Héber H. Arcolezi, Jean-François Couchot, Bechara Al Bouna, and Xiaokui Xiao. 2022a. Improving the utility of locally differentially private protocols for longitudinal and multidimensional frequency estimates. Digital Communications and Networks (July 2022). https://doi.org/10.1016/j.dcan.2022.07.003
Héber H. Arcolezi, Jean-François Couchot, Sébastien Gambs, Catuscia Palamidessi, and Majid Zolfaghari. 2022b. Multi-Freq-LDPy: Multiple Frequency Estimation Under Local Differential Privacy in Python. In Computer Security – ESORICS 2022. Springer Nature Switzerland, Cham, 770–775. https://doi.org/10.1007/978-3-031-17143-7_40
Héber H. Arcolezi, Sébastien Gambs, Jean-François Couchot, and Catuscia Palamidessi. 2023a. On the Risks of Collecting Multidimensional Data Under Local Differential Privacy. Proc. VLDB Endow. 16, 5 (Jan. 2023), 1126–1139. https://doi.org/10.14778/3579075.3579086
Héber H. Arcolezi, Carlos A. Pinzón, Catuscia Palamidessi, and Sébastien Gambs. 2023b. Frequency Estimation of Evolving Data Under Local Differential Privacy. In Proceedings of the 26th International Conference on Extending Database Technology (EDBT 2023). OpenProceedings.org, 512–525. https://doi.org/10.48786/EDBT.2023.44
Önder Askin, Tim Kutta, and Holger Dette. 2022. Statistical Quantification of Differential Privacy: A Local Approach. In 2022 IEEE Symposium on Security and Privacy (SP). 402–421. https://doi.org/10.1109/SP46214.2022.9833689
Borja Balle. 2018. Analytic Gaussian mechanism. https://github.com/BorjaBalle/analytic-gaussian-mechanism/blob/master/agm-example.py.
Borja Balle, Gilles Barthe, and Marco Gaboardi. 2018. Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences. In Advances in Neural Information Processing Systems, Vol. 31. Curran Associates, Inc.
Borja Balle and Yu-Xiang Wang. 2018. Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 80). PMLR, 394–403.
Raef Bassily and Adam Smith. 2015. Local, Private, Efficient Protocols for Succinct Histograms. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing (STOC '15). ACM, New York, NY, USA, 127–135. https://doi.org/10.1145/2746539.2746632
Karuna Bhaila, Wen Huang, Yongkai Wu, and Xintao Wu. 2024. Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach. In Proceedings of the 2024 SIAM International Conference on Data Mining (SDM). SIAM, 1–9. https://doi.org/10.1137/1.9781611978032.1
Benjamin Bichsel, Samuel Steffen, Ilija Bogunovic, and Martin Vechev. 2021. DP-Sniper: Black-Box Discovery of Differential Privacy Violations using Classifiers. In 2021 IEEE Symposium on Security and Privacy (SP). 391–409. https://doi.org/10.1109/SP40001.2021.00081
Mark Bun, Jelani Nelson, and Uri Stemmer. 2019. Heavy Hitters and the Structure of Local Privacy. ACM Trans. Algorithms 15, 4, Article 51 (Oct. 2019). https://doi.org/10.1145/3344722
Tudor Cebere, Aurélien Bellet, and Nicolas Papernot. 2024. Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model. arXiv preprint arXiv:2405.14457 (2024).
Konstantinos Chatzikokolakis, Miguel E. Andrés, Nicolás Emilio Bordenabe, and Catuscia Palamidessi. 2013. Broadening the scope of differential privacy using metrics. In Privacy Enhancing Technologies (PETS 2013). Springer, 82–102.
K. Chatzikokolakis, G. Cherubin, C. Palamidessi, and C. Troncoso. 2023. Bayes Security: A Not So Average Metric. In 2023 IEEE 36th Computer Security Foundations Symposium (CSF). IEEE Computer Society, Los Alamitos, CA, USA, 159–177. https://doi.org/10.1109/CSF57540.2023.00011
Charles J. Clopper and Egon S. Pearson. 1934. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika 26, 4 (1934), 404–413.
Graham Cormode, Samuel Maddock, and Carsten Maple. 2021. Frequency estimation under local differential privacy. Proceedings of the VLDB Endowment 14, 11 (July 2021), 2046–2058. https://doi.org/10.14778/3476249.3476261
José Serafim Costa Filho and Javam C. Machado. 2023. FELIP: A local Differentially Private approach to frequency estimation on multidimensional datasets. In Proceedings of the 26th International Conference on Extending Database Technology (EDBT 2023). OpenProceedings.org, 671–683. https://doi.org/10.48786/EDBT.2023.56
Damien Desfontaines. 2021. A list of real-world uses of differential privacy. Ted is writing things (2021). https://desfontain.es/privacy/real-world-differential-privacy.html.
Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin. 2017. Collecting Telemetry Data Privately. In Advances in Neural Information Processing Systems 30. Curran Associates, Inc., 3571–3580.
Zeyu Ding, Yuxin Wang, Guanhong Wang, Danfeng Zhang, and Daniel Kifer. 2018. Detecting Violations of Differential Privacy. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS '18). ACM, New York, NY, USA, 475–489. https://doi.org/10.1145/3243734.3243818
John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. 2013. Local Privacy and Statistical Minimax Rates. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science. IEEE, 429–438. https://doi.org/10.1109/focs.2013.53
Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. Calibrating Noise to Sensitivity in Private Data Analysis. In Theory of Cryptography. Springer Berlin Heidelberg, 265–284. https://doi.org/10.1007/11681878_14
Cynthia Dwork, Aaron Roth, et al. 2014. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science 9, 3–4 (2014), 211–407.
M. Emre Gursoy, Ling Liu, Ka-Ho Chow, Stacey Truex, and Wenqi Wei. 2022. An Adversarial Approach to Protocol Analysis and Selection in Local Differential Privacy. IEEE Transactions on Information Forensics and Security 17 (2022), 1785–1799. https://doi.org/10.1109/TIFS.2022.3170242
Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. 2014. RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. ACM, New York, NY, USA, 1054–1067. https://doi.org/10.1145/2660267.2660348
Andrea Gadotti, Florimond Houssiau, Meenatchi Sundaram Muthu Selva Annamalai, and Yves-Alexandre de Montjoye. 2022. Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice. In 31st USENIX Security Symposium (USENIX Security 22). USENIX Association, Boston, MA, 501–518.
Daniele Gorla, Louis Jalouzot, Federica Granese, Catuscia Palamidessi, and Pablo Piantanida. 2022. On the (Im)Possibility of Estimating Various Notions of Differential Privacy. arXiv preprint arXiv:2208.14414 (2022).
Mehmet Emre Gürsoy. 2024. Longitudinal attacks against iterative data collection with local differential privacy. Turkish Journal of Electrical Engineering and Computer Sciences 32, 1 (Feb. 2024), 198–218. https://doi.org/10.55730/1300-0632.4063
Matthew Jagielski, Jonathan Ullman, and Alina Oprea. 2020. Auditing Differentially Private Machine Learning: How Private is Private SGD?. In Advances in Neural Information Processing Systems, Vol. 33. Curran Associates, Inc., 22205–22216.
Peter Kairouz, Keith Bonawitz, and Daniel Ramage. 2016. Discrete distribution estimation under local privacy. In International Conference on Machine Learning. PMLR, 2436–2444.
Peter Kairouz, Sewoong Oh, and Pramod Viswanath. 2015. The Composition Theorem for Differential Privacy. In Proceedings of the 32nd International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 37). PMLR, Lille, France, 1376–1385.
Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. 2011. What Can We Learn Privately? SIAM J. Comput. 40, 3 (2011), 793–826. https://doi.org/10.1137/090756090
Mishaal Kazmi, Hadrien Lautraite, Alireza Akbari, Mauricio Soroco, Qiaoyue Tang, Tao Wang, Sébastien Gambs, and Mathias Lécuyer. 2024. PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining. arXiv preprint arXiv:2402.09477 (2024).
Hiroaki Kikuchi. 2022. Castell: Scalable Joint Probability Estimation of Multi-dimensional Data Randomized with Local Differential Privacy. arXiv preprint arXiv:2212.01627 (2022).
Siu Kwan Lam, Antoine Pitrou, and Stanley Seibert. 2015. Numba: A LLVM-Based Python JIT Compiler. In Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC (LLVM '15). ACM, New York, NY, USA, Article 7, 6 pages. https://doi.org/10.1145/2833157.2833162
Ninghui Li, Wahbeh Qardaji, and Dong Su. 2012. On Sampling, Anonymization, and Differential Privacy or, k-Anonymization Meets Differential Privacy. In Proceedings of the 7th ACM Symposium on Information, Computer and Communications Security (ASIACCS '12). ACM, New York, NY, USA, 32–33. https://doi.org/10.1145/2414456.2414474
Fred Lu, Joseph Munoz, Maya Fuchs, Tyler LeBlond, Elliott Zaresky-Williams, Edward Raff, Francis Ferraro, and Brian Testa. 2022b. A General Framework for Auditing Differentially Private Machine Learning. In Advances in Neural Information Processing Systems, Vol. 35. Curran Associates, Inc., 4165–4176.
Yun Lu, Malik Magdon-Ismail, Yu Wei, and Vassilis Zikas. 2022a. Eureka: A General Framework for Black-box Differential Privacy Estimators. Cryptology ePrint Archive, Paper 2022/1250. https://eprint.iacr.org/2022/1250.
Samuel Maddock. 2021. pure-LDP. https://pypi.org/project/pure-ldp/.
Samuel Maddock. 2023. Fix perturb bug in UEClient. https://github.com/Samuel-Maddock/pure-LDP/commit/fc622e338b565b9e6e7d75bc734e7859f6b6d2cc.
Samuel Maddock, Alexandre Sablayrolles, and Pierre Stock. 2022. CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning. arXiv preprint arXiv:2210.02912 (2022).
Pathum Chamikara Mahawaga Arachchige, Peter Bertok, Ibrahim Khalil, Dongxi Liu, Seyit Camtepe, and Mohammed Atiquzzaman. 2020. Local Differential Privacy for Deep Learning. IEEE Internet of Things Journal 7, 7 (2020), 5827–5842. https://doi.org/10.1109/JIOT.2019.2952146
David McCandless, Tom Evans, Miriam Quick, Ella Hollowood, Christian Miles, Dan Hampson, and Duncan Geere. 2021. World's Biggest Data Breaches & Hacks. https://www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/ (accessed on 11 March 2023).
Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, and Ion Stoica. 2018. Ray: A Distributed Framework for Emerging AI Applications. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). USENIX Association, Carlsbad, CA, 561–577.
Takao Murakami and Kenta Takahashi. 2021. Toward Evaluating Re-identification Risks in the Local Privacy Model. Transactions on Data Privacy 14, 3 (2021), 79–116.
Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, and Andreas Terzis. 2023. Tight Auditing of Differentially Private Machine Learning. arXiv preprint arXiv:2302.07956 (2023).
Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, and Nicholas Carlini. 2021. Adversary instantiation: Lower bounds for differentially private machine learning. In 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 866–882.
Thông T. Nguyên, Xiaokui Xiao, Yin Yang, Siu Cheung Hui, Hyejin Shin, and Junbum Shin. 2016. Collecting and analyzing data from smart device users with local differential privacy. arXiv preprint arXiv:1606.05053 (2016).
Krishna Pillutla, Galen Andrew, Peter Kairouz, H. Brendan McMahan, Alina Oprea, and Sewoong Oh. 2023. Unleashing the Power of Randomization in Auditing Differentially Private ML. arXiv preprint arXiv:2305.18447 (2023).
Xuebin Ren, Chia-Mu Yu, Weiren Yu, Shusen Yang, Xinyu Yang, Julie A. McCann, and Philip S. Yu. 2018. LoPub: High-Dimensional Crowdsourced Data Publication with Local Differential Privacy. IEEE Transactions on Information Forensics and Security 13, 9 (2018), 2151–2166. https://doi.org/10.1109/TIFS.2018.2812146
Thomas Steinke, Milad Nasr, and Matthew Jagielski. 2023. Privacy Auditing with One (1) Training Run. arXiv preprint arXiv:2305.08846 (2023).
Apple Differential Privacy Team. 2017. Learning with privacy at scale. https://docs-assets.developer.apple.com/ml-research/papers/learning-with-privacy-at-scale.pdf (accessed January 2023).
Florian Tramer, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, and Nicholas Carlini. 2022. Debugging differential privacy: A case study for privacy auditing. arXiv preprint arXiv:2202.12219 (2022).
Florian Turati, Karel Kubicek, Carlos Cotrini, and David Basin. 2023. Locality-Sensitive Hashing Does Not Guarantee Privacy! Attacks on Google's FLoC and the MinHash Hierarchy System. Proceedings on Privacy Enhancing Technologies 2023, 4 (Oct. 2023), 117–131. https://doi.org/10.56553/popets-2023-0101
Stefan van der Walt, S. Chris Colbert, and Gael Varoquaux. 2011. The NumPy Array: A Structure for Efficient Numerical Computation. Computing in Science & Engineering 13, 2 (2011), 22–30. https://doi.org/10.1109/MCSE.2011.37
Israel De Castro Vidal, André Luís da Costa Mendonça, Franck Rousseau, and Javam De Castro Machado. 2020. ProTECting: An Application of Local Differential Privacy for IoT at the Edge in Smart Home Scenarios. In Anais XXXVIII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (SBRC 2020). Sociedade Brasileira de Computação. https://doi.org/10.5753/sbrc.2020.12308
Ning Wang, Xiaokui Xiao, Yin Yang, Jun Zhao, Siu Cheung Hui, Hyejin Shin, Junbum Shin, and Ge Yu. 2019. Collecting and Analyzing Multidimensional Data with Local Differential Privacy. In 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE. https://doi.org/10.1109/icde.2019.00063
Shaowei Wang, Liusheng Huang, Pengzhan Wang, Yiwen Nie, Hongli Xu, Wei Yang, Xiang-Yang Li, and Chunming Qiao. 2016. Mutual information optimally local private discrete distribution estimation. arXiv preprint arXiv:1607.08025 (2016).
Tianhao Wang, Jeremiah Blocki, Ninghui Li, and Somesh Jha. 2017. Locally Differentially Private Protocols for Frequency Estimation. In 26th USENIX Security Symposium (USENIX Security 17). USENIX Association, Vancouver, BC, 729–745.
Tianhao Wang, Ninghui Li, and Somesh Jha. 2018. Locally Differentially Private Frequent Itemset Mining. In 2018 IEEE Symposium on Security and Privacy (SP). IEEE. https://doi.org/10.1109/sp.2018.00035
Tianhao Wang, Ninghui Li, and Somesh Jha. 2021a. Locally Differentially Private Heavy Hitter Identification. IEEE Transactions on Dependable and Secure Computing 18, 2 (March 2021), 982–993. https://doi.org/10.1109/tdsc.2019.2927695
Teng Wang, Jun Zhao, Zhi Hu, Xinyu Yang, Xuebin Ren, and Kwok-Yan Lam. 2021b. Local Differential Privacy for data collection and analysis. Neurocomputing 426 (Feb. 2021), 114–133. https://doi.org/10.1016/j.neucom.2020.09.073
Stanley L. Warner. 1965. Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias. J. Amer. Statist. Assoc. 60, 309 (March 1965), 63–69. https://doi.org/10.1080/01621459.1965.10480775
Julia Carrie Wong. 2019. Facebook to be fined $5bn for Cambridge Analytica privacy violations – reports. https://www.theguardian.com/technology/2019/jul/12/facebook-fine-ftc-privacy-violations (accessed on 11 March 2023).
Haonan Wu, Ruisheng Ran, Shunshun Peng, Mengmeng Yang, and Taolin Guo. 2023. Mining frequent items from high-dimensional set-valued data under local differential privacy protection. Expert Systems with Applications 234 (Dec. 2023), 121105. https://doi.org/10.1016/j.eswa.2023.121105
Min Ye and Alexander Barg. 2018. Optimal Schemes for Discrete Distribution Estimation Under Locally Differential Privacy. IEEE Transactions on Information Theory 64, 8 (2018), 5662–5676. https://doi.org/10.1109/TIT.2018.2809790
Emre Yilmaz, Mohammad Al-Rubaie, and J. Morris Chang. 2020. Naive Bayes Classification under Local Differential Privacy. In 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA). IEEE. https://doi.org/10.1109/dsaa49011.2020.00081
Zhikun Zhang, Tianhao Wang, Ninghui Li, Shibo He, and Jiming Chen. 2018. CALM: Consistent adaptive local marginal for marginal release under local differential privacy. In Proceedings of the ACM Conference on Computer and Communications Security. 212–229. https://doi.org/10.1145/3243734.3243742
Appendix A. Summary of Notations
The main notation used in this paper is summarized in Table 1.
| Symbol | Description |
| --- | --- |
| [𝑎] | Set of integers {1, 2, 3, …, 𝑎}. |
| 𝐚_𝑖 | 𝑖-th coordinate of vector 𝐚. |
| 𝑉 | Data domain. |
| 𝑘 | Domain size 𝑘 = \|𝑉\|. |
| 𝑛 | Number of users. |
| 𝜖 | Theoretical privacy loss. |
| 𝛿 | Maximum probability that privacy loss exceeds 𝜖. |
| 𝜖_emp | Empirical privacy loss. |
| 𝜖_OPT | Upper bound on Monte Carlo privacy loss. |
| ℳ | (𝜖, 𝛿)-LDP mechanism. |
| 𝒜 | Distinguishability attack. |
| 𝒜_L | Distinguishability attack in longitudinal study. |
| 𝒜_RS+FD | Distinguishability attack on RS+FD protocols. |
| 𝒜_ℳ | Distinguishability attack of mechanism ℳ. |
| 𝟙_ℳ | Support set of mechanism ℳ. |
| 𝑇 | Number of trials. |
| 𝛼 | Confidence level. |
| 𝑑 | Number of attributes, 𝑑 ≥ 2. |
| 𝜏 | Number of data collections. |

Table 1. Symbols and Notations.
Appendix B. Approximate (𝜖, 𝛿)-LDP Protocols
Approximate GRR (AGRR) (Wang et al., 2021b). Similar to GRR in Section 3.1, given a value 𝑣 ∈ 𝑉, AGRR(𝑣) outputs the true value 𝑣 with probability 𝑝, and any other value 𝑣′ ∈ 𝑉 ∖ {𝑣} otherwise. More formally:

(9) Pr[AGRR(𝑣) = 𝑦] = 𝑝 = (𝑒^𝜖 + (𝑘 − 1)𝛿) / (𝑒^𝜖 + 𝑘 − 1) if 𝑦 = 𝑣, and 𝑞 = (1 − 𝛿) / (𝑒^𝜖 + 𝑘 − 1) if 𝑦 ≠ 𝑣,

in which 𝑦 ∈ 𝑉 is the perturbed value sent to the server. From Equation (9), Pr[𝑦 = 𝑣] > Pr[𝑦 = 𝑣′] for all 𝑣′ ∈ 𝑉 ∖ {𝑣}. Thus, the attack strategy 𝒜_AGRR is equivalent to 𝒜_GRR, i.e., to predict 𝑣̂ = 𝑦.
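Equation (9) translates directly into a client-side mechanism; the sketch below is illustrative (our own function name, not from any package). Note that 𝑝 + (𝑘 − 1)𝑞 = 1, so the two branches cover all outcomes:

```python
import math
import random

def agrr(v, k, eps, delta):
    """Approximate GRR, Equation (9): keep the true value with probability
    p = (e^eps + (k-1)*delta) / (e^eps + k - 1); otherwise report a uniformly
    chosen other value, each with probability q = (1 - delta) / (e^eps + k - 1)."""
    p = (math.exp(eps) + (k - 1) * delta) / (math.exp(eps) + k - 1)
    if random.random() < p:
        return v
    return random.choice([x for x in range(k) if x != v])
```

With 𝛿 = 0 this recovers standard GRR, since 𝑝 reduces to 𝑒^𝜖 / (𝑒^𝜖 + 𝑘 − 1).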
Approximate SUE (ASUE) (Wang et al., 2021b). Similar to the SUE protocol (Erlingsson et al., 2014) in Section 3.1, ASUE encodes the user's input data 𝑣 ∈ 𝑉 as a one-hot 𝑘-dimensional vector. The obfuscation function of ASUE randomizes the bits of v independently to generate y as follows:

(10) ∀𝑖 ∈ [𝑘]: Pr[y_𝑖 = 1] = 𝑝 = (𝑒^𝜖 − √(𝑒^𝜖(1 − 𝛿) + 𝛿)) / (𝑒^𝜖 − 1) if v_𝑖 = 1, and 𝑞 = (√(𝑒^𝜖(1 − 𝛿) + 𝛿) − 1) / (𝑒^𝜖 − 1) if v_𝑖 = 0,

in which y is sent to the server. As for UE protocols, with y, the adversary can construct the subset of all values 𝑣 ∈ 𝑉 that are set to 1, i.e., 𝟙_AUE = {𝑣 | y_𝑣 = 1}. Then, the attack strategy 𝒜_ASUE is equivalent to 𝒜_AUE:
• if 𝟙_AUE = ∅, 𝒜_ASUE makes a random choice 𝑣̂ = Uniform([𝑘]);
• otherwise, 𝒜_ASUE makes a random choice 𝑣̂ = Uniform(𝟙_AUE).
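Equation (10) can be sketched as follows (illustrative code, with our own function name). As a sanity check, with 𝛿 = 0 the probabilities reduce to SUE's 𝑝 = 𝑒^{𝜖/2}/(𝑒^{𝜖/2} + 1) and 𝑞 = 1 − 𝑝:

```python
import math
import random

def asue_perturb(v, k, eps, delta):
    """Approximate SUE, Equation (10). With delta = 0, sqrt(e^eps) = e^(eps/2)
    and the probabilities reduce to SUE's p = e^(eps/2)/(e^(eps/2)+1), q = 1 - p."""
    s = math.sqrt(math.exp(eps) * (1 - delta) + delta)
    p = (math.exp(eps) - s) / (math.exp(eps) - 1)   # keep probability for the true bit
    q = (s - 1) / (math.exp(eps) - 1)               # flip probability for the other bits
    return [int(random.random() < (p if i == v else q)) for i in range(k)]
```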
Approximate LH (ALH) (Wang et al., 2021b). Similar to the LH protocols (Wang et al., 2017; Bassily and Smith, 2015) in Section 3.1, ALH uses a hash function H ∈ ℋ to map the input data 𝑣 ∈ 𝑉 to a new domain of size 𝑔 ≥ 2, and then applies AGRR to the hashed value ℎ = H(𝑣). In particular, the ALH reporting mechanism is ALH(𝑣) ≔ ⟨H, AGRR(ℎ)⟩, in which AGRR is given in Equation (9) while operating on the new domain [𝑔]. The two variants of ALH protocols are: (1) Approximate BLH (ABLH), which sets 𝑔 = 2, and (2) Approximate OLH (AOLH), which sets

𝑔 = (−3𝑒^𝜖𝛿 − √((𝑒^𝜖 − 1)(1 − 𝛿)(𝑒^𝜖 + 𝛿 − 9𝑒^𝜖𝛿 − 1)) + 𝑒^𝜖 + 3𝛿 − 1) / (2𝛿).

Each user reports the hash function and obfuscated value ⟨H, 𝑦⟩ to the server. With these elements, the adversary can construct the subset of all values 𝑣 ∈ 𝑉 that hash to 𝑦, i.e., 𝟙_ALH = {𝑣 | H(𝑣) = 𝑦}. Then, the attack strategy 𝒜_ALH is equivalent to 𝒜_LH:
• 𝒜_ALH⁰ is a random choice 𝑣̂ ← Uniform([𝑘]), if 𝟙_ALH = ∅;
• 𝒜_ALH¹ is a random choice 𝑣̂ ← Uniform(𝟙_ALH), otherwise.
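The ALH pipeline can be sketched as follows. This is an illustrative implementation, not the LDP-Auditor package: the SHA-256-based hash is a stand-in for a member of the universal hash family ℋ, and rounding 𝑔 to the nearest integer is a practical choice. As 𝛿 → 0, the AOLH expression for 𝑔 tends to OLH's choice 𝑔 = 𝑒^𝜖 + 1.

```python
import hashlib
import math
import random

def universal_hash(seed, v, g):
    # Illustrative stand-in for a hash family member H: maps v into [g] under seed.
    digest = hashlib.sha256(f"{seed}:{v}".encode()).hexdigest()
    return int(digest, 16) % g

def aolh_g(eps, delta):
    # Hash-domain size for AOLH; tends to the OLH choice e^eps + 1 as delta -> 0.
    e = math.exp(eps)
    num = (-3 * e * delta
           - math.sqrt((e - 1) * (1 - delta) * (e + delta - 9 * e * delta - 1))
           + e + 3 * delta - 1)
    return max(2, round(num / (2 * delta)))

def alh_report(v, k, eps, delta, g):
    # ALH(v) = <H, AGRR(H(v))>, with AGRR operating on the hashed domain [g].
    seed = random.getrandbits(32)
    h = universal_hash(seed, v, g)
    p = (math.exp(eps) + (g - 1) * delta) / (math.exp(eps) + g - 1)
    if random.random() < p:
        y = h  # report the true hashed value
    else:
        y = random.randrange(g - 1)  # report any other value in [g]
        y = y if y < h else y + 1
    return seed, y

def attack_alh(seed, y, k, g):
    # A_ALH: predict uniformly among the values that hash to the reported y.
    support = [v for v in range(k) if universal_hash(seed, v, g) == y]
    return random.choice(support) if support else random.randrange(k)
```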
Appendix C. Adversarial Privacy Game
Figure 9 provides a comparative illustration of the adversarial privacy game in central and local differential privacy frameworks, highlighting scenarios of membership inference and value distinguishability attacks, respectively.
(a) Central DP with membership inference attack. (b) Local DP with value distinguishability attack.
Figure 9. Comparison of the adversarial privacy game between the central and local DP settings.

Appendix D. Clopper-Pearson Interval
The Clopper-Pearson method (Clopper and Pearson, 1934) is a statistical technique used to calculate exact confidence intervals for the success probability of a binomial distribution. This method is known for its conservative nature: the confidence interval it computes does not rely on any asymptotic approximation and is therefore valid regardless of the sample size. Given 𝑥 successes in 𝑇 trials, the Clopper-Pearson interval computes the lower and upper confidence limits for the true probability of success from the quantiles of the beta distribution. Specifically, the Clopper-Pearson confidence interval is:

(11) [𝔅(𝛼/2; 𝑥, 𝑇 − 𝑥 + 1), 𝔅(1 − 𝛼/2; 𝑥 + 1, 𝑇 − 𝑥)],

in which 𝑥 is the number of observed successes, 𝑇 is the total number of trials, 𝛼 is the significance level and 𝔅(𝑝; 𝑧, 𝑤) denotes the 𝑝-th quantile of a beta distribution with shape parameters 𝑧 and 𝑤. This exact method is crucial in our LDP auditing framework, as it allows us to establish the lower and upper bounds for the true positive rate and false positive rate of Equation (8) with high confidence, ensuring that our empirical privacy loss estimations are both accurate and robust.
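A minimal, standard-library sketch of Equation (11) follows. Instead of calling a beta quantile routine, it bisects the equivalent binomial tail conditions (the beta quantiles of Equation (11) are exactly the roots of these tails); function names are illustrative:

```python
import math

def binom_sf(x, T, p):
    # P[X >= x] for X ~ Binomial(T, p).
    return sum(math.comb(T, i) * p**i * (1 - p)**(T - i) for i in range(x, T + 1))

def clopper_pearson(x, T, alpha=0.05):
    # Exact (1 - alpha) CI for a binomial proportion, via bisection on the
    # binomial tails (equivalent to the beta-quantile form of Equation (11)).
    def bisect(f):
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    # Lower limit: p solving P[X >= x | p] = alpha/2 (0 if x == 0).
    lower = 0.0 if x == 0 else bisect(lambda p: binom_sf(x, T, p) - alpha / 2)
    # Upper limit: p solving P[X <= x | p] = alpha/2 (1 if x == T).
    upper = 1.0 if x == T else bisect(
        lambda p: alpha / 2 - (1 - binom_sf(x + 1, T, p)))
    return lower, upper
```

For example, 5 successes in 10 trials at 𝛼 = 0.05 yields the well-known exact interval of roughly (0.187, 0.813).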
Appendix E. Proof of Theorem 1

Proof of Theorem 1. First, the guarantee of the Clopper-Pearson confidence intervals is that, with probability at least 1 − 𝛼, 𝑝̂_0 ≤ 𝑝_0 and 𝑝̂_1 ≥ 𝑝_1, which implies 𝑝_0/𝑝_1 ≥ 𝑝̂_0/𝑝̂_1. Second, if ℳ is (𝜖, 𝛿)-LDP, then we would have 𝑝_0 ≤ 𝑝_1 𝑒^𝜖 + 𝛿, meaning ℳ is not (𝜖′, 𝛿)-LDP for any 𝜖′ < ln((𝑝_0 − 𝛿)/𝑝_1). Combining the two statements, ℳ is not (𝜖′, 𝛿)-LDP for any 𝜖′ < ln((𝑝̂_0 − 𝛿)/𝑝̂_1) = 𝜖_emp. ∎
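The resulting estimator is a one-line computation; a sketch (illustrative name, with the degenerate cases clamped to zero since no positive lower bound can then be certified):

```python
import math

def eps_emp(p0_hat, p1_hat, delta):
    # Empirical lower bound on the privacy loss per Theorem 1:
    # eps_emp = ln((p0_hat - delta) / p1_hat), where p0_hat lower-bounds the
    # true positive rate and p1_hat upper-bounds the false positive rate
    # (Clopper-Pearson limits).
    if p0_hat <= delta or p1_hat <= 0:
        return 0.0
    return max(0.0, math.log((p0_hat - delta) / p1_hat))
```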
Appendix F. Memoization-Based LDP Protocols

As mentioned in Section 4.2, in longitudinal studies, the privacy loss is linear in the number of data collections 𝜏, following the DP sequential composition. This accumulation allows attackers to employ "averaging attacks" to more easily distinguish a user's true value among the noisy data. To counteract this, renowned LDP mechanisms for longitudinal studies, such as RAPPOR (Erlingsson et al., 2014) and 𝑑BitFlipPM (Ding et al., 2017), incorporate a memoization-based strategy.

One way to employ memoization is to memorize an obfuscated value 𝑦 = ℳ(𝑣) and consistently reuse it throughout time (Ding et al., 2017; Arcolezi et al., 2021b). Specifically, at each time 𝑡 ∈ [𝜏], the user reports the memorized 𝑦, which satisfies 𝜖-LDP. Note that as there is only a single obfuscation round, our LDP-Auditor operates equivalently to auditing in a single data collection scenario (i.e., Algorithm 1).

An alternative memoization technique involves re-using the memorized obfuscated value 𝑦 = ℳ(𝑣) as the input for a subsequent round of obfuscation (Erlingsson et al., 2014; Vidal et al., 2020; Arcolezi et al., 2022a, 2023a). This means that at each time 𝑡 ∈ [𝜏] the user reports 𝑦_𝑡 = ℳ(𝑦); note that the input to ℳ is an already obfuscated value 𝑦. In this setting, there are two levels of privacy guarantees (Erlingsson et al., 2014): 𝜖_1, which is the privacy level of the first report 𝑦_1 = ℳ(𝑦) following the second obfuscation round, and 𝜖_∞, which is the privacy guarantee offered by the first obfuscation round that generated 𝑦. More precisely, 𝑦 = ℳ(𝑣) satisfies 𝜖_∞-LDP because it establishes the upper bound for the privacy leakage: an adversary could only recover 𝑦 instead of 𝑣 after executing an "averaging attack" across an indefinite number of reports 𝑦_1, 𝑦_2, …, 𝑦_∞. Consequently, our LDP-Auditor framework described in Algorithm 1 can be deployed directly to estimate an empirical privacy loss 𝜖_emp against the theoretical upper bound 𝜖_1 for a single data collection. For 𝑡 → ∞ data collections, the theoretical upper bound becomes 𝜖_∞, for which the distinguishability attack in longitudinal studies 𝒜_𝐿 outlined in Algorithm 2 should be applied. In other words, while in Section 5.5 the upper bound is 𝜏𝜖-LDP, for memoization-based mechanisms with two obfuscation rounds, the upper bound is 𝜖_∞-LDP.
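The two-round memoization scheme can be sketched as follows, using plain GRR as an illustrative instantiation of ℳ (class and parameter names are ours, not those of RAPPOR or the LDP-Auditor package):

```python
import math
import random

def grr(v, k, eps):
    # Plain GRR used here as the illustrative building block M.
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p:
        return v
    y = random.randrange(k - 1)
    return y if y < v else y + 1

class MemoizedReporter:
    # Two-round memoization: obfuscate v once with eps_inf (the permanent memo),
    # then re-obfuscate the memo with eps_1 fresh noise at every collection t.
    def __init__(self, v, k, eps_inf, eps_1):
        self.k, self.eps_1 = k, eps_1
        self.memo = grr(v, k, eps_inf)  # fixed across all reports

    def report(self):
        return grr(self.memo, self.k, self.eps_1)
```

Averaging the reports over many collections can at best reveal `memo`, never `v` directly, which is why the long-run guarantee is governed by 𝜖_∞ rather than by sequential composition.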
Appendix G. Additional Experiments

G.1. Case Study #1: Auditing the Impact of 𝛿

Following the experimental setup detailed in Section 5.3, Figure 10 illustrates the theoretical 𝜖 values (x-axis) versus estimated 𝜖_emp values (y-axis) when varying the 𝛿 parameter and domain size 𝑘 ∈ {100, 150}, using our LDP-Auditor framework. Note that for both GM and AGM protocols, there is no 𝜖_emp value when 𝛿 = 0, as these protocols do not have pure 𝜖-LDP variations. Finally, a similar trend as in Figure 4 can be observed in Figure 10, to which the discussion in Section 5.3 is equally applicable.

(a) Domain size 𝑘 = 100. (b) Domain size 𝑘 = 150.
Figure 10. Theoretical 𝜖 values (x-axis) versus estimated 𝜖_emp values (y-axis) using our LDP-Auditor framework. We assess different privacy guarantees for six (𝜖, 𝛿)-LDP protocols across domain sizes 𝑘 ∈ {100, 150}. The special case 𝛿 = 0 corresponds to pure 𝜖-LDP, which GM and AGM do not satisfy.

G.2. Case Study #3: Auditing the LDP Sequential Composition in Longitudinal Studies
Following the experimental setup detailed in Section 5.5, Figure 11 illustrates the estimated 𝜖_emp values (y-axis) for the eight 𝜖-LDP protocols and both GM and AGM (𝜖, 𝛿)-LDP protocols according to the number of data collections 𝜏 (x-axis), per-report 𝜖 and domain size 𝑘 ∈ {25, 50}, using our LDP-Auditor framework. Notice that a similar trend as in Figure 6 can be observed in Figure 11, to which the discussion in Section 5.5 is equally applicable.

(a) Domain size 𝑘 = 25. (b) Domain size 𝑘 = 50.
Figure 11. Estimated 𝜖_emp (y-axis) versus the number of data collections 𝜏 (x-axis) using our LDP-Auditor framework for different domain sizes 𝑘 ∈ {25, 50}. We vary the per-report 𝜖-LDP guarantee for the following LDP frequency estimation protocols: GRR, SS, SUE, OUE, BLH, OLH, SHE, THE, GM and AGM. For both approximate-LDP protocols, namely GM and AGM, 𝛿 = 1e−5.

G.3. Case Study #4: LDP Auditing with Multidimensional Data
Following the experimental setup detailed in Section 5.6, Figure 12 compares the theoretical 𝜖 values (x-axis) with the estimated 𝜖_emp values (y-axis) for the five RS+FD protocols, based on the number of attributes 𝑑 and domain size 𝑘 ∈ {25, 50}, utilizing our LDP-Auditor framework. Notice that a similar trend as in Figure 7 can be observed in Figure 12, to which the discussion in Section 5.6 is equally applicable.

(a) Domain size 𝑘 = 25. (b) Domain size 𝑘 = 50.
Figure 12. Theoretical 𝜖 (x-axis) versus estimated 𝜖_emp (y-axis) using our LDP-Auditor framework comparing different numbers of attributes 𝑑 for five RS+FD (Arcolezi et al., 2021a) protocols with domain sizes 𝑘 = 25 and 𝑘 = 50.