| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:52:45.671693Z" |
| }, |
| "title": "Private Release of Text Embedding Vectors", |
| "authors": [ |
| { |
| "first": "Oluwaseyi", |
| "middle": [], |
| "last": "Feyisetan", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Shiva", |
| "middle": [], |
| "last": "Kasiviswanathan", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Ensuring strong theoretical privacy guarantees on text data is a challenging problem which is usually attained at the expense of utility. However, to improve the practicality of privacy preserving text analyses, it is essential to design algorithms that better optimize this tradeoff. To address this challenge, we propose a release mechanism that takes any (text) embedding vector as input and releases a corresponding private vector. The mechanism satisfies an extension of differential privacy to metric spaces. Our idea based on first randomly projecting the vectors to a lower-dimensional space and then adding noise in this projected space generates private vectors that achieve strong theoretical guarantees on its utility. We support our theoretical proofs with empirical experiments on multiple word embedding models and NLP datasets, achieving in some cases more than 10% gains over the existing state-of-the-art privatization techniques.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Ensuring strong theoretical privacy guarantees on text data is a challenging problem which is usually attained at the expense of utility. However, to improve the practicality of privacy preserving text analyses, it is essential to design algorithms that better optimize this tradeoff. To address this challenge, we propose a release mechanism that takes any (text) embedding vector as input and releases a corresponding private vector. The mechanism satisfies an extension of differential privacy to metric spaces. Our idea based on first randomly projecting the vectors to a lower-dimensional space and then adding noise in this projected space generates private vectors that achieve strong theoretical guarantees on its utility. We support our theoretical proofs with empirical experiments on multiple word embedding models and NLP datasets, achieving in some cases more than 10% gains over the existing state-of-the-art privatization techniques.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Privacy has emerged as a topic of strategic consequence across all computational fields. Differential Privacy (DP) is a mathematical definition of privacy proposed by (Dwork et al., 2006) . Ever since its introduction, DP has been widely adopted and as of today, it has become the de facto privacy definition in the academic world with also wide adoption in industry, e.g., (Erlingsson et al., 2014; Dajani et al., 2017; Team, 2017; Uber Security, 2017) . DP provides provable protection against adversaries with arbitrary side information and computational power, allows clear quantification of privacy losses, and satisfies graceful composition over multiple access to the same data. In DP, two parameters and \u03b4 control the level of privacy. Very roughly, is an upper bound on the amount of influence a single data point has on the information released and \u03b4 is the probability that this bound fails to hold, so the definition becomes more stringent as , \u03b4 \u2192 0.", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 187, |
| "text": "(Dwork et al., 2006)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 374, |
| "end": 399, |
| "text": "(Erlingsson et al., 2014;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 400, |
| "end": 420, |
| "text": "Dajani et al., 2017;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 421, |
| "end": 432, |
| "text": "Team, 2017;", |
| "ref_id": null |
| }, |
| { |
| "start": 433, |
| "end": 453, |
| "text": "Uber Security, 2017)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The definition with \u03b4 = 0 is referred to as pure differential privacy, and with \u03b4 > 0 is referred to as approximate differential privacy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Within the field of Natural Language Processing (NLP), the traditional approach for privacy was to apply anonymization techniques such as kanonymity (Sweeney, 2002) and its variants. While this offers an intuitive way of expressing privacy guarantees as a function of an aggregation parameter k, all such methods are provably non-private (Korolova et al., 2009) . Given the sheer increase in data gathering occurring across a multiplicity of connected platforms -a great number of which is being done via user generated voice conversations, text queries, or other language based metadata (e.g., user annotations), it is imperative to advance the development of DP techniques in NLP.", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 164, |
| "text": "(Sweeney, 2002)", |
| "ref_id": null |
| }, |
| { |
| "start": 338, |
| "end": 361, |
| "text": "(Korolova et al., 2009)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Vector embeddings are a popular approach for capturing the \"meaning\" of text and a form of unsupervised learning useful for downstream tasks. Word embeddings were popularized via embedding schemes such as WORD2VEC (Mikolov et al., 2013) , GLOVE (Pennington et al., 2014) , and FAST-TEXT (Bojanowski et al., 2017) . There is also a growing literature on creating embeddings for sentences, documents, and other textual entities, in addition to embeddings in other domains such as in computer vision (Goodfellow et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 214, |
| "end": 236, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 245, |
| "end": 270, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 287, |
| "end": 312, |
| "text": "(Bojanowski et al., 2017)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 497, |
| "end": 522, |
| "text": "(Goodfellow et al., 2016)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recent works such as (Fernandes et al., 2019; Feyisetan et al., 2019 have attempted to directly adapt the methods of DP to word embeddings by borrowing ideas from the privacy methods used for map location data . In the DP literature, one standard way of achieving privacy is by adding properly calibrated noise to the output of a function (Dwork et al., 2006) . This is also the premise behind these previously proposed DP for text techniques, which are based on adding noise to the vector representation of words in a high dimensional embedding space and additional post-processing steps. The privacy guarantees of applying such a method is quite straightforward. However, the main issue is that the magnitude of the DP privacy noise scales with dimensionality of the vector, which leads to a considerable degradation to the utility when these techniques are applied to vectors produced through popular embedding techniques. In this paper, we seek to overcome this curse of dimensionality arising through the differential privacy requirement. Also unlike previous results which were focused on word embeddings, we focus on the general problem of privately releasing vector embeddings, thus making our scheme more widely applicable.", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 45, |
| "text": "(Fernandes et al., 2019;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 46, |
| "end": 68, |
| "text": "Feyisetan et al., 2019", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 339, |
| "end": 359, |
| "text": "(Dwork et al., 2006)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Vector representations of words, sentences, and documents, have all become basic building blocks in NLP pipelines and algorithms. Hence, it is natural to consider privacy mechanisms that target these representations. The most relevant to this paper is the privacy mechanism proposed in that works by computing the vector representation x of a word in the embedding space, applying noise N calibrated to the global metric sensitivity to obtain a perturbed vector v = x + N , and then swapping the original word another word whose embedding is closest to v. showed that this mechanism satisfies the ( , 0)-Lipschitz privacy definition. However, the issue with this mechanism is that the magnitude (norm) of the added noise is proportional to d, which we avoid by projecting these vectors down before the noise addition step. Our focus here is also more general and not just on word embeddings. Additionally, we provide theoretical guarantees on our privatized vectors. We experimentally compare with this approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "The privacy mechanisms of (Fernandes et al., 2019; Feyisetan et al., 2019) are also based on similar noise addition ideas. However, (Fernandes et al., 2019) utilized the Earth mover metric to measure distances (instead of Euclidean), and (Feyisetan et al., 2019) perturb vector representations of words in high dimensional Hyperbolic space (instead of a real space). In this paper, we focus on the Euclidean space as it captures the most common choice of metric space with vector models.", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 50, |
| "text": "(Fernandes et al., 2019;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 51, |
| "end": 74, |
| "text": "Feyisetan et al., 2019)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 132, |
| "end": 156, |
| "text": "(Fernandes et al., 2019)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Over the past decade, a large body of work has been developed to design basic algorithms and tools for achieving DP, understanding the privacyutility trade-offs in different data access setups, and on integrating DP with machine learning and statistical inference. We refer the reader to (Dwork and Roth, 2013) for a more comprehensive overview.", |
| "cite_spans": [ |
| { |
| "start": 288, |
| "end": 310, |
| "text": "(Dwork and Roth, 2013)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Dimensionality reduction for word embeddings using PCA was explored in (Raunak et al., 2019) for computational efficiency purposes. In this paper, we use random projections for dimensionality reduction that helps with reducing the magnitude of noise needed for privacy. Another issue with PCA like scheme is that there are strong lower bounds (that scale with dimension of the vectors d) on the amount of distortion needed for achieving differentially private PCA in the local privacy model (Wang and Xu, 2020) .", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 92, |
| "text": "(Raunak et al., 2019)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 491, |
| "end": 510, |
| "text": "(Wang and Xu, 2020)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Random projections have been used as a tool to design differentially private algorithms in other problem settings too (Blocki et al., 2012; Wang et al., 2015; Kenthapadi et al., 2013; Zhou et al., 2009; Kasiviswanathan and Jin, 2016) .", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 139, |
| "text": "(Blocki et al., 2012;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 140, |
| "end": 158, |
| "text": "Wang et al., 2015;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 159, |
| "end": 183, |
| "text": "Kenthapadi et al., 2013;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 184, |
| "end": 202, |
| "text": "Zhou et al., 2009;", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 203, |
| "end": 233, |
| "text": "Kasiviswanathan and Jin, 2016)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "We denote [n] = {1, . . . , n}. Vectors are in column-wise fashion. We measure the distance between embeddings through the Euclidean metric. For a vector x, we set x to denote the Euclidean (L 2 -) norm and x 1 denotes its L 1 -norm. For sets S, T , the Minkowski sum S + T = {a + b : a \u2208 S, b \u2208 T }. N (0, \u03c3 2 ) denotes the Gaussian distribution with mean 0 and variance \u03c3 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The privacy concerns around word embedding vectors stem from how they are created. For example, embeddings created using neural models inherit the side effects of unintended memorizations that come with such models (Carlini et al., 2019) . Similarly it has been demonstrated that text generation models that encode language representations also suffer from various degrees of information leakage (Song and Shmatikov, 2019; Lyu et al., 2020) . While this might not be concerning for off the shelf models trained on public data, it becomes important for word embeddings trained on non-public data.", |
| "cite_spans": [ |
| { |
| "start": 215, |
| "end": 237, |
| "text": "(Carlini et al., 2019)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 396, |
| "end": 422, |
| "text": "(Song and Shmatikov, 2019;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 423, |
| "end": 440, |
| "text": "Lyu et al., 2020)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Privacy Motivations for Text", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Recent studies (Song and Raghunathan, 2020; have shown that word embeddings are vulnerable to 3 types of attacks (1) embedding inversion where the vectors can be used to recreate some of the input training data; (2) attribution inference occurs when sensitive attributes (such as authorship) of the input data are revealed even when they are independent of the task at hand; and (3) membership inference where an attacker is able to determine if data from a particular user was used to train the word embedding model.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 43, |
| "text": "(Song and Raghunathan, 2020;", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Privacy Motivations for Text", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The privacy consequences are further amplified depending on the domain of data under consideration. For example, a study by (Abdalla et al., 2020) on word embeddings in the medical domain demonstrated that: (1) they were able to reconstruct up to 68.5% of full names based on the embeddings i.e., embedding inversion; (2) they were able to retrieve associated sensitive information to specific patients in the corpus i.e., attribution inference; and (3) by using the distance between the vector of a patient's name and a billing code, they could differentiate between patients that were billed, and those that weren't i.e., membership inference.", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 146, |
| "text": "(Abdalla et al., 2020)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Privacy Motivations for Text", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "These findings all underscore the need to release text embeddings using a rigorous notion of privacy, such as differential privacy, that preserves user privacy and mitigates the attacks described above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Privacy Motivations for Text", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Differential privacy (Dwork et al., 2006) gives a formal standard of privacy, requiring that, for all pairs of datasets that differ in one element, the distribution of outputs should be similar. In this paper, we use the notion of local differential privacy (LDP) (Kasiviswanathan et al., 2011).", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 41, |
| "text": "(Dwork et al., 2006)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "A randomized algorithm A : X \u2192 Z is ( , \u03b4)local differentially private (LDP) if for any two data x, x \u2208 X and all (measurable) sets U \u2286 Z,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Pr[A(x) \u2208 U ] \u2264 e Pr[A(x ) \u2208 U ] + \u03b4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The probability is taken over the random coins of A. Here, we think of \u03b4 as being cryptographically small, whereas is typically thought of as a moderately small constant. The above definition considers every pair of x and x (considered as adjacent for the purposes of DP). The LDP notion requires that the given x has a non-negligible probability of being transformed into any other x \u2208 X no matter how unrelated (far) x and x are. However, for text embeddings, this strong requirement makes it virtually impossible to enforce that the semantics of a word are approximately preserved by the privatized vector . To address this problem, we work with a modification of the above definition, referred to as Lipschitz (or metric) privacy, that is better suited for metric spaces defined through embedding models. Lipschitz privacy is closely related to LDP where the adjacency relation is defined through the Hamming metric, but also generalizes to include Euclidean, Manhattan, and Chebyshev metrics, among others Chatzikokolakis et al., 2015; Fernandes et al., 2019; Feyisetan et al., 2019 . Similar to differential privacy, Lipschitz privacy is preserved under post-processing and composition of mechanisms (Koufogiannis et al., 2016) . Definition 1 (Lipschitz Privacy (Dwork et al., 2006; ). Let (X , d) be a metric space. A randomized algorithm A : X \u2192 Z is ( , \u03b4)-Lipschitz private if for any two data x, x \u2208 X and all (measurable) sets U \u2286 Z,", |
| "cite_spans": [ |
| { |
| "start": 1011, |
| "end": 1040, |
| "text": "Chatzikokolakis et al., 2015;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1041, |
| "end": 1064, |
| "text": "Fernandes et al., 2019;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1065, |
| "end": 1087, |
| "text": "Feyisetan et al., 2019", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1206, |
| "end": 1233, |
| "text": "(Koufogiannis et al., 2016)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 1268, |
| "end": 1288, |
| "text": "(Dwork et al., 2006;", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Pr[A(x) \u2208 U ] \u2264 exp( d(x, x ))Pr[A(x ) \u2208 U ] + \u03b4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "An alternate equivalent way of stating this would be to say that with probability at least 1 \u2212 \u03b4, over a drawn from either A(x) or A(x ), we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "| ln Pr[A(x) = a]\u2212ln Pr[A(x ) = a]| \u2264 d(x, x ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The key difference between Lipschitz privacy and LDP is that the latter corresponds to a particular instance of the former when the distance function is given by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "d(x, x ) = 1 for every x = x .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In this paper, the metric space of interest is defined by embeddings which organize discrete objects in a continuous real space such that objects that are \"similar\" result in vectors are \"close\" in the embedded space. For the distance measure, we focus on the Euclidean metric, d(x, x ) = x \u2212 x that is known to capture semantic similarity between discrete words in a continuous space.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For a function, f : X \u2192 R m , the most basic technique in differential privacy to release f (x) is to answer f (x) + \u03bd , where \u03bd is instanceindependent additive noise (e.g., Laplace or Gaussian) with standard deviation proportional to the global sensitivity of the function f . Definition 2 (Global sensitivity). For a function f : X \u2192 R m , define the global sensitivity of f as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2206 f = max x,x \u2208X f (x) \u2212 f (x )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "x \u2212 x .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background on Differential Privacy.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Dimensionality reduction is the problem of embedding a set from high-dimensions into a lowdimensional space, while preserving certain properties of the original high-dimensional set. Perhaps the most fundamental result for dimensionality reduction is the Johnson-Lindenstrauss (JL) lemma which states that any set of p points in high dimensions can be embedded into O(log(p)/\u03b2 2 ) dimensions, while preserving the Euclidean norm of all points within a multiplicative factor between 1 \u2212 \u03b2 and 1 + \u03b2. In fact, one could embed an infinite continuum of points into lower dimensions while preserving the Euclidean norm of all point up to a multiplicative distortion. A classical result due to (Gordon, 1988) characterizes the relation between the \"size\" of the set and the required dimensionality of the embedding on the unit sphere. Before stating the result, we need to introduce the notion of Gaussian width which captures the L 2geometric complexity of X . Definition 3 (Gaussian Width). Given a closed set X \u2282 R d , its Gaussian width \u03c9(X ) is defined as:", |
| "cite_spans": [ |
| { |
| "start": 688, |
| "end": 702, |
| "text": "(Gordon, 1988)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\u03c9(X ) = E g\u2208N (0,1) d [sup x\u2208X x, g ].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Many popular sets have low Gaussian width (Vershynin, 2016). For example, if X contains vector in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "R d that are c-sparse (at most c non-zero elements) then \u03c9(X ) = c log(d/c). If X contains vec- tors that are sparse in the L 1 -sense, say \u2200x \u2208 X , x 1 \u2264 c, then \u03c9(X ) = O(c log d). Similarly if X is the d-dimensional probability simplex, then \u03c9(X ) = O( \u221a log d).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Notice that in all these cases \u03c9(X ) 2 is exponentially smaller than d.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The following is a restatement of the original Gordon's theorem that is better suited for this paper. Theorem 1 (Gordon's Theorem (Gordon, 1988) ). Let \u03b2 \u2208 (0, 1), X be a subset of the unit ddimensional sphere and let \u03a6 \u2208 R m\u00d7d be a matrix with i.", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 144, |
| "text": "(Gordon, 1988)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "i.d. entries from N (0, 1/m). Then, | \u03a6x \u2212 1| \u2264 \u03b2, holds for all x \u2208 X with probability at least 1 \u2212 2 exp(\u2212\u03b3 2 /2) if m = \u2126((\u03c9(X ) + \u03b3) 2 /\u03b2 2 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In particular, for a set of points X \u2282 R d , we have the following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Pr [\u2200x \u2208 X , | \u03a6x \u2212 x | \u2264 \u03b2 x ] \u2265 1 \u2212 \u03b3, if m = \u2126((\u03c9(X ) + log(1/\u03b3)) 2 /\u03b2 2 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Since for any set X with |X | = p, w(X ) 2 \u2264 log p, therefore the above theorem is a generalization of the JL lemma. By a simple manipulation and adjusting \u03b2, Theorem 1 can be restated for preserving inner-products. Corollary 2. Under the setting of Theorem 1, for a set of points", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "X in R d , \u03a6x, \u03a6x \u2212 x, x \u2264 \u03b2 x x , holds for all x, x \u2208 X with probability at least 1 \u2212 \u03b3, if m = \u2126((\u03c9(X ) + log(1/\u03b3)) 2 /\u03b2 2 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The above result also holds if we replace the Gaussian random matrix \u03a6 by a sparse random matrix (Bourgain et al., 2015) . For simplicity, we use a Gaussian matrix \u03a6 for projection.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 120, |
| "text": "(Bourgain et al., 2015)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dimensionality Reduction.", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The main issue arising in constructing differentially private vector embeddings is that a direct noise addition to the vectors (such as in ) would require that the L 2 -norm of the noise vector scales almost linearly with the dimensionality of the vector. To overcome this dimension dependence, our mechanism is based on the idea of performing a dimensionality reduction and then adding noise to the projected vector. By carefully balancing the dimensionality of the vectors with the magnitude of the noise needed for DP, the mechanism achieves a superior performance overall.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We will add noise calibrated to the sensitivity of the dimensionality reduction function. The noise is sampled from a d-dimensional distribution with density p(z) \u221d exp(\u2212 z /\u2206 f ). Sampling from this distribution is simple as noted in (Wu et al., 2017 ). 1 . The following simple claim (that holds for all functions f ) shows that this mechanism satisfies Definition 1. All the missing proofs from this section are collected in Appendix C. Let us first investigate the global sensitivity of f \u03a6 using Theorem 1. Instead of considering a fixed bound on global sensitivity, we provide a probabilistic upper bound.", |
| "cite_spans": [ |
| { |
| "start": 235, |
| "end": 251, |
| "text": "(Wu et al., 2017", |
| "ref_id": "BIBREF45" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Claim 3. Let f : X \u2192 R m . Then pub- lishing A(x) = f (x) + \u03ba where \u03ba is sampled from the distribution in R m with density p(z) \u221d exp(\u2212 z /\u2206 f ) satisfies ( , 0)-Lipschitz privacy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Lemma 4. Let \u03a6 be an m \u00d7 d matrix with i.i.d. entries from N (0, 1/m). Let \u03b2 \u2208 (0, 1). If m = \u2126((\u03c9(Ran(M )) + log(1/\u03b4)) 2 /\u03b2 2 ), then with probability, at least 1 \u2212 \u03b4, \u2206 f \u03a6 \u2264 1 + \u03b2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Let \u03b2 \u2208 (0, 1) be a fixed constant. Consider the mechanism which publishes A(x) = f \u03a6 (x) + \u03ba where \u03ba is drawn from the distribution with density p(z) \u221d exp(\u2212 z /(1 + \u03b2)). Given a set of sensitive words (x 1 , . . . , x n ), we can apply A(", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "x i ) to each word x i , to release A(x 1 ), . . . , A(x n ) \u2208 R m .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Algorithm PRIVEMB summarizes the mechanism. Since each vector is perturbed independently, the algorithm can be invoked locally. We now establish the privacy guarantee of PRIVEMB. The \u03b4 factor comes in from Lemma 4 because we only have a probabilistic bound on the global sensitivity, i.e., there exists pairs of x, x for whom the bound on global sensitivity of 1 + \u03b2 could fail.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For example, imagine a situation where there are n users each having a sensitive word (embedding). Given access to a common \u03a6, they can perturb their word locally and transmit only the perturbed vector. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "= \u2126((\u03c9(Ran(M )) + log(1/\u03b4)) 2 /\u03b2 2 ) Let \u03a6 \u223c i.i.d. N (0, 1/m) for i \u2208 {1, . . . , n} do wi = \u03a6xi + \u03bai where \u03bai is i.i.d. from the distr. with density p(z) \u221d exp(\u2212 z /(1 + \u03b2)) release (w1, . . . , wn)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Using Claim 3 and Lemma 4, we now establish that privacy proof for Algorithm PRIVEMB.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Proposition 5. Algorithm PRIVEMB is ( , \u03b4)-Lipschitz private. Let \u03b2 \u2208 (0, 1), \u03b4 > 0, > 0, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "m = \u2126((\u03c9(Ran(M )) + log(1/\u03b4)) 2 /\u03b2 2 ). Let \u03a6 be an m \u00d7 d matrix with i.i.d. entries from N (0, 1/m). Then publishing A(x) = f \u03a6 (x) + \u03ba where \u03ba is drawn from the distribution in R m with density p(z) \u221d exp(\u2212 z /(1 + \u03b2)) is ( , \u03b4)- Lipschitz private.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "It is important to note that the \u03b2 does not affect the privacy analysis, i.e., for any input parameter \u03b2, Algorithm PRIVEMB is ( , \u03b4)-Lipschitz private.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "While the idea behind Algorithm PRIVEMB is simple, it is widely applicable and effective. As an example consider vector representation of text such as through Bag-of-K-grams, which creates representations that are sparse in some very highdimensional space (say c-sparse vectors). In this case, even though d could be extremely large, we can project these vectors to \u2248 c log(d/c)dimensional space (due to their low Gaussian width) and add noise in the projected space for achieving privacy. On the other hand, the privacy mechanism of , with noise magnitude proportional to d will completely destroy the information in these vectors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We now provide utility performance bounds for Algorithm PRIVEMB. As mentioned earlier these are the first theoretical analysis for any private vector embedding scheme. We start with two important properties of interest based on distances and innerproducts that commonly arise when dealing with text embeddings. Our next result compares the loss of a linear model trained on these private vector embeddings to loss of a similar model trained on the original vector embeddings. All our error bounds depend on m \u2248 \u03c9(Ran(M )) 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Utility Analysis of Alg. PRIVEMB", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We start with a simple observation about the magnitude of the noise vector. Consider \u03ba drawn from the noise distribution with density p(z) \u221d exp(\u2212 z /(1 + \u03b2)). The Euclidean norm of \u03ba is distributed according to the Gamma distribution \u0393(m, (1 + \u03b2)/ ) (Wu et al., 2017) and satisfies the following bound.", |
| "cite_spans": [ |
| { |
| "start": 251, |
| "end": 268, |
| "text": "(Wu et al., 2017)", |
| "ref_id": "BIBREF45" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Utility Analysis of Alg. PRIVEMB", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Claim 6 ( (Wu et al., 2017; Chaudhuri et al., 2011) ). For the noise vector \u03ba, we have that with probability at least 1 \u2212 \u03b3, \u03ba \u2264 (m ln(m/\u03b3)(1 + \u03b2))/ .", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 27, |
| "text": "(Wu et al., 2017;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 28, |
| "end": 51, |
| "text": "Chaudhuri et al., 2011)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Utility Analysis of Alg. PRIVEMB", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Since \u03b2 < 1, we can simplify the right hand side of the above claim to (2m ln(m/\u03b3))/ . Let \u03c4 be the maximum Euclidean norm of the vectors", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Utility Analysis of Alg. PRIVEMB", |
| "sec_num": "4" |
| }, |
| { |
| "text": "x 1 , . . . , x n , i.e., \u2200i \u2208 [n], x i \u2264 \u03c4 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Utility Analysis of Alg. PRIVEMB", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our first result compares the distances between the private vectors and between the original vectors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantee", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Proposition 7. Consider Algorithm PRIVEMB. With probability at least 1 \u2212 \u03b4, for all pairs", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantee", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "x i , x j \u2208 (x 1 , . . . , x n ), | w i \u2212 w j \u2212 x i \u2212 x j | \u2264 2\u03b2\u03c4 + 4(m ln(2nm/\u03b4))/ .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantee", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "As a baseline consider the privatization mechanism proposed by which computes a privatized version of an embedding vector x by adding noise N to the original vector x. Formally, defined a mechanism where the private vector v i is constructed from x i as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantee", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "v i = x i + N i where N i is drawn from the distribution in R d with den- sity p(z) \u221d exp(\u2212 z )) to x. Since the noise vector N i is now d-dimensional, its Euclidean norm will tightly concentrate around its mean E[ N i ] = O(d). Therefore, with high probability, | v i \u2212v j \u2212 x i \u2212x j | = \u2126(d)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantee", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "holds for the mechanism proposed in . However, in our mechanism, the dependence on d is replaced by m which as argued above is generally much smaller than d. On the flip side though, PRIVEMB satisfies ( , \u03b4)-Lipschitz privacy for \u03b4 > 0, whereas the mechanism in achieves the stronger ( , 0)-Lipschitz privacy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantee", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Word embeddings seek to capture word similarity, so similar words (e.g., synonyms) have embeddings with high inner product. We now compare the inner product between the private vectors to the inner product between the original embedding vectors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inner-Product Approximation Guarantee", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Proposition 8. Consider Algorithm PRIVEMB. With probability at least 1 \u2212 \u03b4, for all pairs", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inner-Product Approximation Guarantee", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "x i , x j \u2208 (x 1 , . . . , x n ), | w i , w j \u2212 x i , x j | \u2264 \u03b2\u03c4 2 + 8\u03c4 m ln(2nm/\u03b4)/ + (2m ln(2nm/\u03b4)) 2 / 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inner-Product Approximation Guarantee", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We now discuss about the performance of the private vectors (w 1 , . . . , w n ) when used with common machine learning models. Given n datapoints, (x 1 , y 1 ), . . . , (x n , y n ) drawn from some universe R d \u00d7 R (where y i represents the label on point x i ), we consider the problem of learning a linear model on this labeled data. We assume that x i 's are sensitive whereas the y i 's are publicly known. Such situations arise commonly in practice. For example, consider a drug company investigating the effectiveness of a drug trail over n users. Here, y i could represent the response to the drug for user i which is known to the drug company, whereas x i could encode the medical history of user i which the user would like to keep private.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance on Linear Models", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We focus on a broad class of models, where the loss functions have the form, ( x, \u03b8 ; y) for parameter \u03b8 \u2208 R d , where : R \u00d7 R \u2192 R. This captures a variety of learning problems, e.g., the linear regression is captured by setting ( x, \u03b8 ; y) = (y \u2212 x, \u03b8 ) 2 , logistic regression is captured by setting ( x, \u03b8 ; y) = ln(1 + exp(\u2212y x, \u03b8 )), support vector machine is captured by setting ( x, \u03b8 ; y) = hinge(y x, \u03b8 ), where hinge(a) = 1 \u2212 a if a \u2264 1 and 0 otherwise. We assume that the function is convex and Lipschitz in the first parameter. Let \u03bb denote the Lipschitz parameter of the loss function over the first parameter, i.e., | (a; y) \u2212 (b; y)| \u2264 \u03bb |a \u2212 b| for all a, b \u2208 R.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance on Linear Models", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "On the data (x 1 , y 1 ), . . . , (x n , y n ), the (empirical) training loss for a parameter \u03b8 is defined as: 1 n n i=1 ( x i , \u03b8 ; y i ) and the goal in training (empirical risk minimization) is to minimize this loss over a parameter space \u0398. Let \u03b8 be a", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance on Linear Models", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "true minimizer of 1 n n i=1 ( x i , \u03b8 ; y i ), i.e., \u03b8 \u2208 argmin \u03b8\u2208\u0398 1 n n i=1 ( x i , \u03b8 ; y i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance on Linear Models", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Our goal will be to compare the loss of the model trained on the privatized points (w 1 , y 1 ), . . . (w n , y n ) where the w i 's are produced by Algorithm PRIVEMB to the true minimum loss (= 1 n n i=1 ( x i , \u03b8 ; y i )). Let \u0398 defined as sup \u03b8\u2208\u0398 \u03b8 denote the diameter of \u0398. The following proposition states our result. Proposition 9. Consider Algorithm PRIVEMB. With probability at least 1 \u2212 \u03b4,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance on Linear Models", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "min \u03b8\u2208\u0398 1 n n i=1 ( wi, \u03a6\u03b8 ; yi) \u2264 1 n n i=1 ( xi, \u03b8 ; yi) + 4\u03bb (m ln(2nm/\u03b4)) \u0398 + \u03bb \u03b2\u03c4 \u0398 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance on Linear Models", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In the above result the error terms will be negligible if \u03b2 1/(\u03bb \u03c4 \u0398 ) and \u03bb (m ln(2nm/\u03b4)) \u0398 . Though in our experiments (see Section 5), we notice good performance with private vectors even when \u03b2 and don't satisfy these conditions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance on Linear Models", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Another point to note is that our setting, where we train ML models over a differentially private data release, is different from traditional literature on differentially private empirical risk minimization where the goal is to release only a private version of model parameter \u03b8, and not the data itself, see e.g., (Chaudhuri et al., 2011; Bassily et al., 2014) . In particular, this means that the results from traditional differentially private empirical risk minimization do not carry over to our setting. Our data release setup allows training any number of ML models on the private vectors without having to pay for the cost of composition on the privacy guarantees (as post-processing does not affect the privacy guarantee), which is a desirable property.", |
| "cite_spans": [ |
| { |
| "start": 316, |
| "end": 340, |
| "text": "(Chaudhuri et al., 2011;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 341, |
| "end": 362, |
| "text": "Bassily et al., 2014)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance on Linear Models", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We carry out four experiments to demonstrate the improvement of our approach (Algorithm PRIVEMB), denoted as M2, over ( , 0)-Lipschitz privacy mechanism proposed in (Feyisetan et al., 2020) (denoted by M1). 2 The first three map to the theoretical guarantees described Section 4, i.e., (1) distance approximation guarantee, (2) inner-product approximation guarantee, and (3) performance on linear models. The final experiment provides further evidence for performance of using these private vectors for downstream classification tasks. All our experiments are on embeddings generated by GLOVE (Pennington et al., 2014) and FASTTEXT (Bojanowski et al., 2017) . The dimensionality of the embedding d = 300 in both cases. Due to space constraints, we present the FASTTEXT results in Appendix B.", |
| "cite_spans": [ |
| { |
| "start": 593, |
| "end": 618, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 632, |
| "end": 657, |
| "text": "(Bojanowski et al., 2017)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluations", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The value of \u03b4 is kept constant for all experiments (involving our scheme) at 1e \u2212 6. We set \u03c9(Ran(M )) = \u221a log d. The parameter \u03b2 only affects the utility guarantee, and Algorithm PRIVEMB is always ( , \u03b4)-Lipschitz private for any value of \u03b2. In our experiments, corroborating our theoretical guarantees, we do vary \u03b2 to illustrate the effect of \u03b2 on the guarantees. Remember that higher values of \u03b2 results in lowerdimensional vectors, so setting \u03b2 appropriately lets one trade-off between the loss of utility due to dimension reduction vs. the gain in the utility due to lesser noise needed for lower-dimensional vectors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluations", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We also vary the privacy parameter in our experiments. While lower values are certainly desirable, it is widely known that differentially private algorithms for certain problems (such as those arising in complex domains such as NLP) require slightly larger values to provide reasonable utility in practice (Fernandes et al., 2019; Xie et al., 2018; Ma et al., 2020) . For example, the related work on differential privately releasing text embeddings from Fernandes et al. (Fernandes et al., 2019) and ) report values of of up to 20 and 30 depending on the dimensionality of the space.", |
| "cite_spans": [ |
| { |
| "start": 306, |
| "end": 330, |
| "text": "(Fernandes et al., 2019;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 331, |
| "end": 348, |
| "text": "Xie et al., 2018;", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 349, |
| "end": 365, |
| "text": "Ma et al., 2020)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 472, |
| "end": 496, |
| "text": "(Fernandes et al., 2019)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluations", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This experiment compares the distance between pairs of private vectors to that between the corresponding original vectors. We sampled 100 word vectors from the vocabulary. For each of these 100 vectors, we compare the distance to another set of 100 randomly sampled vectors. These 100 \u00d7 100 pair of vectors were kept constant across all experiment runs. For each embedding model, we", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantees", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "compared | v i \u2212 v j \u2212 x i \u2212 x j |", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantees", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "where the v i 's are generated by the schemed in ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantees", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "(M1), to | w i \u2212 w j \u2212 x i \u2212 x j |", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantees", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "where the w i 's are generated by our scheme (M2). The experiments were carried out at values of = 1, 2, and 5 for M1 and M2, while varying the values of \u03b2 for M2 between 0.5 and 0.7. Results. The results in Fig. 1 show the experiment outcomes across the different values of , \u03b2, and embeddings. Lower values on the y\u2212axis indicate better results in that the distance between the private vectors are a good approximation to the actual distances between the original vectors. Overall, the guarantees of our approach M2 are better than M1 as observed by the smaller distance differences across all conditions. Next, the results also highlight that for both mechanisms, as expected, the guarantees get better as increases, due to the introduction of less noise (note the different scales across ). Finally, the results reveal that for a given value of , as the value of \u03b2 increases, the guarantees of our scheme improve. This can be viewed through the guarantees of Proposition 7, which consists of two terms, the first term increases with \u03b2 and the second term due to its dependence on 1/\u03b2 2 (through m) decreases with \u03b2. Since the second (noise) term generally dominates, we get an improvement with \u03b2, suggesting that it is advantageous to pick a larger \u03b2 in practice.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 208, |
| "end": 214, |
| "text": "Fig. 1", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Distance Approximation Guarantees", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "This experiment compares the inner product between pairs of private vectors to that between the corresponding original vectors. The setup here is identical to the distance approximation experiments (i.e., the same 100 \u00d7 100 word pairs and mix of and \u03b2). The results Results. The results in Fig. 2 show the experiment outcomes across , \u03b2, and embeddings. Similar to the findings in Fig. 1 , the results of M2 are an improvement over M1 with the same patterns of improvement. For a fixed privacy budget, the performance of M2 is better than that of M1 and the gap increases as \u03b2 increases. Again this suggests that one should pick a larger \u03b2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 290, |
| "end": 296, |
| "text": "Fig. 2", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 381, |
| "end": 387, |
| "text": "Fig. 1", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inner Prod Approximation Guarantees", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "capture | w i , w j \u2212 x i , x j |.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inner Prod Approximation Guarantees", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Non-Private Baselines M1: = 10 M2: = 10, \u03b2 = 0.9 Dataset InferSent SkipThought TRAIN ACC TEST ACC TRAIN ACC TEST ACC MR (Pang and Lee, 2005) 81.10 79.40 58.10 55.61 57.76 58.11 CR (Hu and Liu, 2004) 86.30 83.10 68.32 63.97 72.52 71.02 MPQA (Wiebe et al., 2005) 90 ", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 140, |
| "text": "(Pang and Lee, 2005)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 180, |
| "end": 198, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 240, |
| "end": 260, |
| "text": "(Wiebe et al., 2005)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inner Prod Approximation Guarantees", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We built a simple binary SVM linear mode to classify single keywords into 2 classes: positive and negative based on their conveyed sentiment. The dataset used was a list from (Hu and Liu, 2004) consisting of 4783 negative and 2006 positive words. We selected a subset of words that occurred in both GLOVE and FASTTEXT embeddings and capped both lists to have an equal number of words. The resulting datasets each had 1899 words. The purpose of this experiment was to explore the behaviors of M1 and M2 at different values of and \u03b2 for a linear model. Results shown are over 10 runs. Results. The results on the performance on linear models are presented in Fig. 3 . The performance metrics are (i) accuracy on a randomly selected 20% test set, and (ii) the area under the ROC curve (AUC). Higher values on the y\u2212axis indicate better results. The findings follow from our first 2 experiments which demonstrate that for a fixed privacy guarantee, the utility of M2 is better than that of M1 and the gap between the performance of M2 and M1 increases as \u03b2 increases.", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 193, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 657, |
| "end": 663, |
| "text": "Fig. 3", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance on Linear Models", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We further evaluated M2 against M1 at a fixed value of and \u03b2 on classification tasks on 5 NLP datasets. The experiments were done and can be replicated using SentEval (Conneau and Kiela, 2018) , an evaluation toolkit for sentence embeddings by replacing the default embeddings with the private embeddings. From the previous experiments, we know that it is better to pick a larger \u03b2, so we set \u03b2 = 0.9 here.", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 192, |
| "text": "(Conneau and Kiela, 2018)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance on NLP Datasets", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Results. Table 1 presents the results and summarizes the datasets: MR (Pang and Lee, 2005) , CR (Hu and Liu, 2004) , MPQA (Wiebe et al., 2005) , SST-5 (Socher et al., 2013) , and TREC-6 (Li and Roth, 2002) . Table 1 presents the results from the experiments. We also present results of 2 nonprivate baselines on all the datasets based on Infersent and SkipThought described in (Conneau et al., 2017) . The evaluation metrics were train and test accuracies, therefore, higher scores indicate better utility. Not surprisingly, because of the noise addition there is is a performance drop when we compare the private mechanisms to the non-private baselines. However, the results reinforce our findings that the utility afforded by M2 are better than M1 at fixed values of . Some of the improvements are remarkably significant e.g., +7% on the CR dataset, and +20% on TREC-6. Summary of the Results. Overall, these experiments demonstrate that PRIVEMB offers better utility than the embedding privatization scheme of .", |
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 90, |
| "text": "(Pang and Lee, 2005)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 96, |
| "end": 114, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 122, |
| "end": 142, |
| "text": "(Wiebe et al., 2005)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 145, |
| "end": 172, |
| "text": "SST-5 (Socher et al., 2013)", |
| "ref_id": null |
| }, |
| { |
| "start": 186, |
| "end": 205, |
| "text": "(Li and Roth, 2002)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 377, |
| "end": 399, |
| "text": "(Conneau et al., 2017)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 16, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 208, |
| "end": 215, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance on NLP Datasets", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In this paper, we introduced an ( , \u03b4)-Lipschitz private algorithm for generating real valued embedding vectors. Our mechanism works by first reducing the dimensionality of the vectors though a random projection, then adding noise calibrated to the sensitivity of the dimensionality reduction function. The mechanism can be utilized for any welldefined embedding model including but not limited to word, sentence, and document embeddings. We prove theoretical bounds that show how various properties of interest important for vector embeddings are well-approximated through the private vectors, and our empirical results across multiple embedding models and NLP datasets demonstrate the superior utility guarantees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6" |
| }, |
| { |
| "text": "A Additional Experiments", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "We now investigate a slightly different setup where we perform the dimensionality reduction while training the embeddings (denoted as A1). So here instead of only assuming access to private embeddings vectors as in M1 and M2, we also assume access to the corpus and training platform. Fig. 4 presents results (with linear models as in Experiment 3) on 50d, 100d, and 200d GLOVE embeddings, and corresponding setting of \u03b2 = 0.93, 0.66 and 0.468 in M2 to match the dimensionality. Unsurprisingly the results below show that A1 obtains better results than M2 where the dimensionality reduction happens post training. Mechanism A1 however has two drawbacks compared to M2: (1) it assumes access to the original training corpus and platform which is not always accessible, and 2it is more computationally expensive as it requires retraining the embeddings from scratch. Proof. First note that f (x) + \u03ba has the same distribution as that of \u03ba but with a different mean. Consider any x, x \u2208 X . We will be interested in bounding the ratio", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 285, |
| "end": 291, |
| "text": "Fig. 4", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Pr[A(x) = w]/Pr[A(x ) = w]. Pr[A(x) = w] Pr[A(x ) = w] = exp(\u2212 w \u2212 f (x) /\u2206 f ) exp(\u2212 w \u2212 f (x ) /\u2206 f ) = exp( ( w \u2212 f (x ) \u2212 w \u2212 f (x) )/\u2206 f ) \u2264 exp( f (x) \u2212 f (x ) /\u2206 f ) \u2264 exp( x \u2212 x ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "where the first inequality follows from triangle inequality and the last one follows from the definition of global sensitivity (Definition 2). Therefore, for any measurable set", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "U \u2286 R m , Pr[A(x) \u2208 U ] \u2264 exp( x \u2212 x )Pr[A(x ) \u2208 U ].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Lemma 11 (Lemma 4 Restated). Let \u03a6 be an m \u00d7 d matrix with i.i.d. entries from N (0, 1/m). Let \u03b2 \u2208 (0, 1). If m = \u2126((\u03c9(Ran(M )) + log(1/\u03b4)) 2 /\u03b2 2 ), then with probability, at least 1 \u2212 \u03b4, \u2206 f \u03a6 \u2264 1 + \u03b2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. Consider the set Ran(M ) \u2212 Ran(M ) (where \u2212 denotes the Minkowski difference between the sets). By properties of the Gaussian width (see Section 2), the Gaussian width of this new set is at most \u03c9(Ran(M )) + \u03c9(Ran(M )) \u2264 2\u03c9(Ran(M )). From Theorem 1, under the above setting of m, with probability at least 1 \u2212 \u03b4,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2206 f \u03a6 = max x,x \u2208Ran(M ) \u03a6x \u2212 \u03a6x x \u2212 x \u2264 (1 + \u03b2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "This completes the proof.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Proposition 12 (Proposition 5 Restated). Algorithm PRIVEMB is ( , \u03b4)-Lipschitz private.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. Let A(x) = f \u03a6 (x) + \u03ba = \u03a6x + \u03ba where \u03ba is drawn from the distribution in R m with density p(z) \u221d exp(\u2212 z /(1 + \u03b2)). Let E denote the event that the \u2206 f \u03a6 \u2264 1 \u2212 \u03b2. From Lemma 11, we know that over the choice of \u03a6, Pr[E] \u2265 1 \u2212 \u03b4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Consider any x, x \u2208 Ran(M ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Pr Since Algorithm PRIVEMB can be viewed as applying the above mechanism A on the x 1 , . . . , x n independently, we get that Algorithm PRIVEMB is ( , \u03b4)-Lipschitz private.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Proposition 13 (Proposition 7 Restated). Consider Algorithm PRIVEMB. With probability at least 1\u2212\u03b4, for all pairs", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "x i , x j \u2208 (x 1 , . . . , x n ), | w i \u2212 w j \u2212 x i \u2212 x j | \u2264 2\u03b2\u03c4 + 4(m ln(2nm/\u03b4))/ .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. Let w i = \u03a6x i + \u03ba i and w j = \u03a6x j + \u03ba j . Using Theorem 1, with probability at least 1 \u2212 \u03b4,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "| w i \u2212 w j \u2212 x i \u2212 x j | = | \u03a6x i + \u03ba i \u2212 (\u03a6x j + \u03ba j ) \u2212 x i \u2212 x j | \u2264 | \u03a6(x i \u2212 x j ) \u2212 x i \u2212 x j + \u03ba i + \u03ba j | \u2264 \u03b2 x i \u2212 x j + \u03ba i + \u03ba j .", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "For a fixed i, from Claim 6, we get that with probability at least 1 \u2212 \u03b4, \u03ba i \u2264 (2m ln(m/\u03b4))/ . Using a union bound,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Pr[\u2200i \u2208 [n], \u03ba i \u2264 (2m ln(nm/\u03b4))/ ] \u2265 1 \u2212 \u03b4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Plugging this into (2), we get that with probability at least 1 \u2212 2\u03b4, for all i, j \u2208 [n]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "| w i \u2212w j \u2212 x i \u2212x j | \u2264 \u03b2 x i \u2212x j +4(m ln(nm/\u03b4))/ .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Using x i \u2212 x j \u2264 2\u03c4 and scaling \u03b4 completes the proof.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Proposition 14 (Proposition 8 Restated). Consider Algorithm PRIVEMB. With probability at least 1 \u2212 \u03b4, for all pairs x i , x j \u2208 (x 1 , . . . , x n ), | w i , w j \u2212 x i , x j | \u2264 \u03b2\u03c4 2 +8\u03c4 m ln(2nm/\u03b4)/ + (2m ln(2nm/\u03b4)) 2 / 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. Let w i = \u03a6x i + \u03ba i and w j = \u03a6x j + \u03ba j . Using Corollary 2, with probability at least 1 \u2212 \u03b4,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "| w i , w j \u2212 x i , x j | = | \u03a6x i + \u03ba i , \u03a6x j + \u03ba j \u2212 x i , x j | = | \u03a6x i , \u03a6x j + \u03a6x i , \u03ba j + \u03ba i , \u03a6x j + \u03ba i , \u03ba j \u2212 x i , x j | \u2264 \u03b2 x i x j + | \u03a6x i , \u03ba j + \u03ba i , \u03a6x j + \u03ba i , \u03ba j | \u2264 \u03b2 x i x j + (1 + \u03b2) x i \u03ba j + (1 + \u03b2) x j \u03ba i + \u03ba i \u03ba j \u2264 \u03b2\u03c4 2 + 2\u03c4 ( \u03ba j + \u03ba i ) + \u03ba i \u03ba j .", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "As in Proposition 7,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Pr[\u2200i \u2208 [n], \u03ba i \u2264 (2m ln(nm/\u03b4))/ ] \u2265 1 \u2212 \u03b4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "Plugging this into (3), we get that with probability at least 1 \u2212 2\u03b4, for all i, j \u2208 [n]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": "| w i , w j \u2212 x i , x j | \u2264 \u03b2\u03c4 2 + 8\u03c4 m ln(nm/\u03b4) + (2m ln(nm/\u03b4)) 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supplementary Material for \"Private Release of Text Embedding Vectors\"", |
| "sec_num": null |
| }, |
| { |
| "text": ".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "By scaling \u03b4 we get the claimed bound.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "Proposition 15 (Proposition 9 Restated). Consider Algorithm PRIVEMB. With probability at least 1 \u2212 \u03b4,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "min \u03b8\u2208\u0398 1 n n i=1 ( w i , \u03a6\u03b8 ; y i ) \u2264 1 n n i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "( x i , \u03b8 ; y i ) + 4\u03bb (m ln(2nm/\u03b4)) \u0398 + \u03bb \u03b2\u03c4 \u0398 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "Proof. By the Lipschitzness assumption,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "| ( w i , \u03a6\u03b8 ; y i ) \u2212 ( x i , \u03b8 ; y i )| \u2264 \u03bb | w i , \u03a6\u03b8 \u2212 x i , \u03b8 |.", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "Focusing on the right hand side, from Corollary 2, with probability at least 1 \u2212 \u03b4, for all i \u2208 [n],", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "| w i , \u03a6\u03b8 \u2212 x i , \u03b8 | = | \u03a6x i + \u03ba i , \u03a6\u03b8 \u2212 x i , \u03b8 | \u2264 | \u03ba i , \u03a6\u03b8 | + \u03b2 x i \u03b8 \u2264 (1 + \u03b2) \u03ba i \u03b8 + \u03b2 x i \u03b8 \u2264 2 \u03ba i \u0398 + \u03b2\u03c4 \u0398 ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "where we used \u03b2 \u2208 (0, 1), x i \u2264 \u03c4 , \u03b8 \u2264 \u0398 , and with probability at least 1 \u2212 \u03b4, \u03a6\u03b8 \u2264 (1 + \u03b2) \u03b8 (from Theorem 1). Using the bound on \u03ba i , we get that with probability at least 1 \u2212 \u03b4, for all i \u2208 [n], | w i , \u03a6\u03b8 \u2212 x i , \u03b8 | \u2264 4(m ln(2nm/\u03b4)) \u0398 +\u03b2\u03c4 \u0398 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "Plugging this into (4) and averaging over i gives that with probability at least 1 \u2212 \u03b4,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "1 n n i=1 ( w i , \u03a6\u03b8 ; y i ) \u2264 1 n n i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "( x i , \u03b8 ; y i ) + 4\u03bb (m ln(2nm/\u03b4)) \u0398 + \u03bb \u03b2\u03c4 \u0398 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "Since,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "min \u03b8\u2208\u0398 1 n n i=1 ( w i , \u03a6\u03b8 ; y i ) \u2264 1 n n i=1 ( w i , \u03a6\u03b8 ; y i ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "we get the claimed result.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "The idea is to first sample a uniform vector in the unit sphere in R m , say v and to sample a magnitude l from the Gamma distribution \u0393(m, \u2206 f / ), and output \u03ba = lv", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We choose this mechanism as the baseline as in this setup it achieves the current state-of-the-art utililty guarantees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Exploring the privacypreserving properties of word embeddings: Algorithmic validation study", |
| "authors": [ |
| { |
| "first": "Mohamed", |
| "middle": [], |
| "last": "Abdalla", |
| "suffix": "" |
| }, |
| { |
| "first": "Moustafa", |
| "middle": [], |
| "last": "Abdalla", |
| "suffix": "" |
| }, |
| { |
| "first": "Graeme", |
| "middle": [], |
| "last": "Hirst", |
| "suffix": "" |
| }, |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Rudzicz", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Journal of medical Internet research", |
| "volume": "22", |
| "issue": "7", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohamed Abdalla, Moustafa Abdalla, Graeme Hirst, and Frank Rudzicz. 2020. Exploring the privacy- preserving properties of word embeddings: Algorith- mic validation study. Journal of medical Internet re- search, 22(7):e18055.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Geo-indistinguishability: Differential privacy for location-based systems", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Miguel E Andr\u00e9s", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Nicol\u00e1s", |
| "suffix": "" |
| }, |
| { |
| "first": "Konstantinos", |
| "middle": [], |
| "last": "Bordenabe", |
| "suffix": "" |
| }, |
| { |
| "first": "Catuscia", |
| "middle": [], |
| "last": "Chatzikokolakis", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Palamidessi", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ACM CCS", |
| "volume": "", |
| "issue": "", |
| "pages": "901--914", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miguel E Andr\u00e9s, Nicol\u00e1s E Bordenabe, Konstantinos Chatzikokolakis, and Catuscia Palamidessi. 2013. Geo-indistinguishability: Differential privacy for location-based systems. In ACM CCS, pages 901- 914. ACM.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Differentially private empirical risk minimization: Efficient algorithms and tight error bounds", |
| "authors": [ |
| { |
| "first": "Raef", |
| "middle": [], |
| "last": "Bassily", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Abhradeep", |
| "middle": [], |
| "last": "Thakurta", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "FOCS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Raef Bassily, Adam Smith, and Abhradeep Thakurta. 2014. Differentially private empirical risk minimiza- tion: Efficient algorithms and tight error bounds. In FOCS. IEEE.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The johnson-lindenstrauss transform itself preserves differential privacy", |
| "authors": [ |
| { |
| "first": "Jeremiah", |
| "middle": [], |
| "last": "Blocki", |
| "suffix": "" |
| }, |
| { |
| "first": "Avrim", |
| "middle": [], |
| "last": "Blum", |
| "suffix": "" |
| }, |
| { |
| "first": "Anupam", |
| "middle": [], |
| "last": "Datta", |
| "suffix": "" |
| }, |
| { |
| "first": "Or", |
| "middle": [], |
| "last": "Sheffet", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "2012 IEEE 53rd Annual Symposium on Foundations of Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "410--419", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeremiah Blocki, Avrim Blum, Anupam Datta, and Or Sheffet. 2012. The johnson-lindenstrauss trans- form itself preserves differential privacy. In 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, pages 410-419. IEEE.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Enriching word vectors with subword information", |
| "authors": [ |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Toward a unified theory of sparse dimensionality reduction in euclidean space", |
| "authors": [ |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Bourgain", |
| "suffix": "" |
| }, |
| { |
| "first": "Dirksen", |
| "middle": [], |
| "last": "Sjoerd", |
| "suffix": "" |
| }, |
| { |
| "first": "Jelani", |
| "middle": [], |
| "last": "Nelson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 47th ACM Symposium on Theory of Computing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean Bourgain, Dirksen Sjoerd, and Jelani Nelson. 2015. Toward a unified theory of sparse dimension- ality reduction in euclidean space. In Proceedings of the 47th ACM Symposium on Theory of Computing. Association for Computing Machinery.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The secret sharer: Evaluating and testing unintended memorization in neural networks", |
| "authors": [ |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Carlini", |
| "suffix": "" |
| }, |
| { |
| "first": "Chang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00dalfar", |
| "middle": [], |
| "last": "Erlingsson", |
| "suffix": "" |
| }, |
| { |
| "first": "Jernej", |
| "middle": [], |
| "last": "Kos", |
| "suffix": "" |
| }, |
| { |
| "first": "Dawn", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "28th {USENIX} Security Symposium ({USENIX} Security 19)", |
| "volume": "", |
| "issue": "", |
| "pages": "267--284", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nicholas Carlini, Chang Liu, \u00dalfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Eval- uating and testing unintended memorization in neu- ral networks. In 28th {USENIX} Security Sympo- sium ({USENIX} Security 19), pages 267-284.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Broadening the scope of differential privacy using metrics", |
| "authors": [ |
| { |
| "first": "Konstantinos", |
| "middle": [], |
| "last": "Chatzikokolakis", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [ |
| "E" |
| ], |
| "last": "Andr\u00e9s", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicol\u00e1s", |
| "middle": [ |
| "Emilio" |
| ], |
| "last": "Bordenabe", |
| "suffix": "" |
| }, |
| { |
| "first": "Catuscia", |
| "middle": [], |
| "last": "Palamidessi", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "PETS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Konstantinos Chatzikokolakis, Miguel E An- dr\u00e9s, Nicol\u00e1s Emilio Bordenabe, and Catuscia Palamidessi. 2013. Broadening the scope of differential privacy using metrics. In PETS.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Constructing elastic distinguishability metrics for location privacy", |
| "authors": [ |
| { |
| "first": "Konstantinos", |
| "middle": [], |
| "last": "Chatzikokolakis", |
| "suffix": "" |
| }, |
| { |
| "first": "Catuscia", |
| "middle": [], |
| "last": "Palamidessi", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Stronati", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "PETS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Konstantinos Chatzikokolakis, Catuscia Palamidessi, and Marco Stronati. 2015. Constructing elastic dis- tinguishability metrics for location privacy. PETS.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Differentially private empirical risk minimization", |
| "authors": [ |
| { |
| "first": "Kamalika", |
| "middle": [], |
| "last": "Chaudhuri", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Monteleoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Anand D", |
| "middle": [], |
| "last": "Sarwate", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. 2011. Differentially private empirical risk minimization. Journal of Machine Learning Re- search, 12(3).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Senteval: An evaluation toolkit for universal sentence representations", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1803.05449" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representa- tions. arXiv preprint arXiv:1803.05449.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Supervised learning of universal sentence representations from natural language inference data", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Lo\u00efc", |
| "middle": [], |
| "last": "Barrault", |
| "suffix": "" |
| }, |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The modernization of statistical disclosure limitation at the u.s. census bureau", |
| "authors": [ |
| { |
| "first": "Aref", |
| "middle": [], |
| "last": "Dajani", |
| "suffix": "" |
| }, |
| { |
| "first": "Amy", |
| "middle": [], |
| "last": "Lauger", |
| "suffix": "" |
| }, |
| { |
| "first": "Phyllis", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Kifer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jerome", |
| "middle": [], |
| "last": "Reiter", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashwin", |
| "middle": [], |
| "last": "Machanavajjhala", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Garfinkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Scot", |
| "middle": [], |
| "last": "Dahl", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Vishesh", |
| "middle": [], |
| "last": "Karwa", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Leclerc", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Schmutte", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Sexton", |
| "suffix": "" |
| }, |
| { |
| "first": "Lars", |
| "middle": [], |
| "last": "Vilhuber", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Abowd", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Census Scientific Advisory Commitee Meetings", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aref Dajani, Amy Lauger, Phyllis Singer, Daniel Kifer, Jerome Reiter, Ashwin Machanavajjhala, Simon Garfinkel, Scot Dahl, Matthew Graham, Vishesh Karwa, Hang Kim, Philip Leclerc, Ian Schmutte, William Sexton, Lars Vilhuber, and John Abowd. 2017. The modernization of statistical disclosure limitation at the u.s. census bureau. Census Scien- tific Advisory Commitee Meetings.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Calibrating noise to sensitivity in private data analysis", |
| "authors": [ |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Dwork", |
| "suffix": "" |
| }, |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Mcsherry", |
| "suffix": "" |
| }, |
| { |
| "first": "Kobbi", |
| "middle": [], |
| "last": "Nissim", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "TCC", |
| "volume": "", |
| "issue": "", |
| "pages": "265--284", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. Calibrating noise to sensitiv- ity in private data analysis. In TCC, pages 265-284. Springer.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The algorithmic foundations of differential privacy", |
| "authors": [ |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Dwork", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Theoretical Computer Science", |
| "volume": "9", |
| "issue": "", |
| "pages": "3--4", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cynthia Dwork and Aaron Roth. 2013. The algorith- mic foundations of differential privacy. Theoretical Computer Science, 9(3-4).", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Rappor: Randomized aggregatable privacy-preserving ordinal response", |
| "authors": [ |
| { |
| "first": "\u00dalfar", |
| "middle": [], |
| "last": "Erlingsson", |
| "suffix": "" |
| }, |
| { |
| "first": "Vasyl", |
| "middle": [], |
| "last": "Pihur", |
| "suffix": "" |
| }, |
| { |
| "first": "Aleksandra", |
| "middle": [], |
| "last": "Korolova", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACM SIGSAC CCS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "\u00dalfar Erlingsson, Vasyl Pihur, and Aleksandra Ko- rolova. 2014. Rappor: Randomized aggregat- able privacy-preserving ordinal response. In ACM SIGSAC CCS.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Generalised differential privacy for text document processing", |
| "authors": [ |
| { |
| "first": "Natasha", |
| "middle": [], |
| "last": "Fernandes", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dras", |
| "suffix": "" |
| }, |
| { |
| "first": "Annabelle", |
| "middle": [], |
| "last": "Mciver", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Principles of Security and Trust", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Natasha Fernandes, Mark Dras, and Annabelle McIver. 2019. Generalised differential privacy for text docu- ment processing. Principles of Security and Trust.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Privacy-and utility-preserving textual analysis via calibrated multivariate perturbations", |
| "authors": [ |
| { |
| "first": "Oluwaseyi", |
| "middle": [], |
| "last": "Feyisetan", |
| "suffix": "" |
| }, |
| { |
| "first": "Borja", |
| "middle": [], |
| "last": "Balle", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Drake", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Diethe", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "ACM WSDM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, and Tom Diethe. 2020. Privacy-and utility-preserving textual analysis via calibrated multivariate perturba- tions. In ACM WSDM.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Leveraging hierarchical representations for preserving privacy and utility in text", |
| "authors": [ |
| { |
| "first": "Oluwaseyi", |
| "middle": [], |
| "last": "Feyisetan", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Diethe", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Drake", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "IEEE ICDM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oluwaseyi Feyisetan, Tom Diethe, and Thomas Drake. 2019. Leveraging hierarchical representations for preserving privacy and utility in text. In IEEE ICDM.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Deep learning", |
| "authors": [ |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Goodfellow", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. MIT press.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "On Milman's inequality and random subspaces which escape through a mesh in R n", |
| "authors": [ |
| { |
| "first": "Yehoram", |
| "middle": [], |
| "last": "Gordon", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yehoram Gordon. 1988. On Milman's inequality and random subspaces which escape through a mesh in R n . Springer.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Mining and summarizing customer reviews", |
| "authors": [ |
| { |
| "first": "Minqing", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "ACM SIGKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "168--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In ACM SIGKDD, pages 168-177.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "What can we learn privately?", |
| "authors": [ |
| { |
| "first": "Homin", |
| "middle": [], |
| "last": "Shiva Kasiviswanathan", |
| "suffix": "" |
| }, |
| { |
| "first": "Kobbi", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Sofya", |
| "middle": [], |
| "last": "Nissim", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Raskhodnikova", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "SIAM Journal on Computing", |
| "volume": "40", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shiva Kasiviswanathan, Homin Lee, Kobbi Nissim, So- fya Raskhodnikova, and Adam Smith. 2011. What can we learn privately? SIAM Journal on Comput- ing, 40(3).", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Efficient private empirical risk minimization for high-dimensional learning", |
| "authors": [ |
| { |
| "first": "Hongxia", |
| "middle": [], |
| "last": "Shiva Prasad Kasiviswanathan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Jin", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "488--497", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shiva Prasad Kasiviswanathan and Hongxia Jin. 2016. Efficient private empirical risk minimization for high-dimensional learning. In International Confer- ence on Machine Learning, pages 488-497.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Privacy via the johnson-lindenstrauss transform", |
| "authors": [ |
| { |
| "first": "Krishnaram", |
| "middle": [], |
| "last": "Kenthapadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Aleksandra", |
| "middle": [], |
| "last": "Korolova", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Mironov", |
| "suffix": "" |
| }, |
| { |
| "first": "Nina", |
| "middle": [], |
| "last": "Mishra", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Journal of Privacy and Confidentiality", |
| "volume": "5", |
| "issue": "1", |
| "pages": "39--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Krishnaram Kenthapadi, Aleksandra Korolova, Ilya Mironov, and Nina Mishra. 2013. Privacy via the johnson-lindenstrauss transform. Journal of Privacy and Confidentiality, 5(1):39-71.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Releasing search queries and clicks privately", |
| "authors": [ |
| { |
| "first": "Aleksandra", |
| "middle": [], |
| "last": "Korolova", |
| "suffix": "" |
| }, |
| { |
| "first": "Krishnaram", |
| "middle": [], |
| "last": "Kenthapadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Nina", |
| "middle": [], |
| "last": "Mishra", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandros", |
| "middle": [], |
| "last": "Ntoulas", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aleksandra Korolova, Krishnaram Kenthapadi, Nina Mishra, and Alexandros Ntoulas. 2009. Releasing search queries and clicks privately. In WebConf. ACM.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Gradual release of sensitive data under differential privacy", |
| "authors": [ |
| { |
| "first": "Fragkiskos", |
| "middle": [], |
| "last": "Koufogiannis", |
| "suffix": "" |
| }, |
| { |
| "first": "Shuo", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "George J", |
| "middle": [], |
| "last": "Pappas", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Journal of Privacy and Confidentiality", |
| "volume": "7", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fragkiskos Koufogiannis, Shuo Han, and George J Pap- pas. 2016. Gradual release of sensitive data under differential privacy. Journal of Privacy and Confi- dentiality, 7(2).", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Learning question classifiers", |
| "authors": [ |
| { |
| "first": "Xin", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xin Li and Dan Roth. 2002. Learning question classi- fiers. In COLING.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Towards differentially private text representations", |
| "authors": [ |
| { |
| "first": "Lingjuan", |
| "middle": [], |
| "last": "Lyu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yitong", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuanli", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Tong", |
| "middle": [], |
| "last": "Xiao", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "1813--1816", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lingjuan Lyu, Yitong Li, Xuanli He, and Tong Xiao. 2020. Towards differentially private text representa- tions. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1813-1816.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Rdp-gan: A r\u00e9nyi-differential privacy based generative adversarial network", |
| "authors": [ |
| { |
| "first": "Chuan", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kang", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Weng", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Vincent Poor", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chuan Ma, Jun Li, Ming Ding, Bo Liu, Kang Wei, Jian Weng, and H. Vincent Poor. 2020. Rdp-gan: A r\u00e9nyi-differential privacy based generative adversar- ial network.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "NeurIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NeurIPS, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", |
| "authors": [ |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lillian", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In ACL.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In EMNLP, pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Effective dimensionality reduction for word embeddings", |
| "authors": [ |
| { |
| "first": "Vikas", |
| "middle": [], |
| "last": "Raunak", |
| "suffix": "" |
| }, |
| { |
| "first": "Vivek", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Metze", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 4th Workshop on Representation Learning for NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "235--243", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vikas Raunak, Vivek Gupta, and Florian Metze. 2019. Effective dimensionality reduction for word embed- dings. In Proceedings of the 4th Workshop on Rep- resentation Learning for NLP (RepL4NLP-2019), pages 235-243.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Perelygin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Chuang", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1631--1642", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In EMNLP, pages 1631-1642.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "formation leakage in embedding models", |
| "authors": [ |
| { |
| "first": "Congzheng", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Ananth", |
| "middle": [], |
| "last": "Raghunathan", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2004.00053" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Congzheng Song and Ananth Raghunathan. 2020. In- formation leakage in embedding models. arXiv preprint arXiv:2004.00053.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Auditing data provenance in text-generation models", |
| "authors": [ |
| { |
| "first": "Congzheng", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Vitaly", |
| "middle": [], |
| "last": "Shmatikov", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ACM SIGKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Congzheng Song and Vitaly Shmatikov. 2019. Audit- ing data provenance in text-generation models. In ACM SIGKDD.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "2002. k-anonymity: A model for protecting privacy", |
| "authors": [ |
| { |
| "first": "Latanya", |
| "middle": [], |
| "last": "Sweeney", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "IJUFKS", |
| "volume": "10", |
| "issue": "05", |
| "pages": "557--570", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Latanya Sweeney. 2002. k-anonymity: A model for protecting privacy. IJUFKS, 10(05):557-570.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Apple's Differential Privacy Team", |
| "authors": [], |
| "year": 2017, |
| "venue": "Apple Machine Learning Journal", |
| "volume": "1", |
| "issue": "9", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Apple's Differential Privacy Team. 2017. Learning with privacy at scale. Apple Machine Learning Jour- nal, 1(9).", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Investigating the impact of pre-trained word embeddings on memorization in neural networks", |
| "authors": [ |
| { |
| "first": "Aleena", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "Ifeoluwa" |
| ], |
| "last": "Adelani", |
| "suffix": "" |
| }, |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "Davody", |
| "suffix": "" |
| }, |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Mogadala", |
| "suffix": "" |
| }, |
| { |
| "first": "Dietrich", |
| "middle": [], |
| "last": "Klakow", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "International Conference on Text, Speech, and Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "273--281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aleena Thomas, David Ifeoluwa Adelani, Ali Davody, Aditya Mogadala, and Dietrich Klakow. 2020. In- vestigating the impact of pre-trained word embed- dings on memorization in neural networks. In Inter- national Conference on Text, Speech, and Dialogue, pages 273-281. Springer.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Uber releases open source project for differential privacy", |
| "authors": [ |
| { |
| "first": "Uber", |
| "middle": [], |
| "last": "Security", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Uber Security. 2017. Uber releases open source project for differential privacy. https://medium.com/ uber-security-privacy/7892c82c42b6.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "High dimensional probability. An Introduction with Applications", |
| "authors": [ |
| { |
| "first": "Roman", |
| "middle": [], |
| "last": "Vershynin", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roman Vershynin. 2016. High dimensional probability. An Introduction with Applications.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Principal component analysis in the local differential privacy model. Theoretical Computer Science", |
| "authors": [ |
| { |
| "first": "Di", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinhui", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "809", |
| "issue": "", |
| "pages": "296--312", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Di Wang and Jinhui Xu. 2020. Principal component analysis in the local differential privacy model. The- oretical Computer Science, 809:296-312.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "A deterministic analysis of noisy sparse subspace clustering for dimensionality-reduced data", |
| "authors": [ |
| { |
| "first": "Yining", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu-Xiang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Aarti", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 32nd International Conference on Machine Learning (ICML-15)", |
| "volume": "", |
| "issue": "", |
| "pages": "1422--1431", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yining Wang, Yu-Xiang Wang, and Aarti Singh. 2015. A deterministic analysis of noisy sparse subspace clustering for dimensionality-reduced data. In Pro- ceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1422-1431.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Annotating expressions of opinions and emotions in language", |
| "authors": [ |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language. LREC.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Bolt-on differential privacy for scalable stochastic gradient descent-based analytics", |
| "authors": [ |
| { |
| "first": "Xi", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Fengan", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Arun", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Kamalika", |
| "middle": [], |
| "last": "Chaudhuri", |
| "suffix": "" |
| }, |
| { |
| "first": "Somesh", |
| "middle": [], |
| "last": "Jha", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Naughton", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 ACM SIGMOD", |
| "volume": "", |
| "issue": "", |
| "pages": "1307--1322", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, and Jeffrey Naughton. 2017. Bolt-on differential privacy for scalable stochastic gradient descent-based analytics. In Proceedings of the 2017 ACM SIGMOD, pages 1307-1322.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Differentially private generative adversarial network", |
| "authors": [ |
| { |
| "first": "Liyang", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Kaixiang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Shu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiayu", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, and Jiayu Zhou. 2018. Differentially private generative adversarial network.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "A differentially private text perturbation method using regularized mahalanobis metric", |
| "authors": [ |
| { |
| "first": "Zekun", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Abhinav", |
| "middle": [], |
| "last": "Aggarwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Oluwaseyi", |
| "middle": [], |
| "last": "Feyisetan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathanael", |
| "middle": [], |
| "last": "Teissier", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Second Workshop on Privacy in NLP at EMNLP 2020", |
| "volume": "", |
| "issue": "", |
| "pages": "7--17", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zekun Xu, Abhinav Aggarwal, Oluwaseyi Feyisetan, and Nathanael Teissier. 2020. A differentially pri- vate text perturbation method using regularized ma- halanobis metric. In Proceedings of the Second Workshop on Privacy in NLP at EMNLP 2020, pages 7-17.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Differential privacy with compression", |
| "authors": [ |
| { |
| "first": "Shuheng", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Katrina", |
| "middle": [], |
| "last": "Ligett", |
| "suffix": "" |
| }, |
| { |
| "first": "Larry", |
| "middle": [], |
| "last": "Wasserman", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "IEEE International Symposium on", |
| "volume": "", |
| "issue": "", |
| "pages": "2718--2722", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shuheng Zhou, Katrina Ligett, and Larry Wasserman. 2009. Differential privacy with compression. In In- formation Theory, 2009. ISIT 2009. IEEE Interna- tional Symposium on, pages 2718-2722. IEEE.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Let \u03a6 be an m \u00d7 d matrix with i.i.d. entries from N (0, 1/m). Consider an embedding model M . Let Dom(M ) denote the domain and Ran(M ) \u2282 R d denote the range of M . Define a function f \u03a6 : Ran(M ) \u2192 R m as f\u03a6(x) = \u03a6x and \u03a6 \u2208 R m\u00d7d i.i.d. from N (0, 1/m) . (1)" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Algorithm 1: PRIVEMB Input: x1, . . . , xn \u2208 Ran(M ) for model M , privacy parameters , \u03b4 > 0, and dimensionality reduction parameter \u03b2 \u2208 (0, 1) Output: private vector embeddings w1, . . . , wn Let m" |
| }, |
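| The projection step defined in FIGREF0 and used by Algorithm 1 (FIGREF1) can be made concrete with a minimal Python sketch. This is not the paper's implementation: the caption truncates before m is fixed, so the choice m = ceil(beta * d) is an assumption, and the noise-addition step is sketched separately after the FIGREF7 entry below. |
| ``` |
| import numpy as np |
|  |
| def project_embeddings(X, beta, rng=None): |
|     """Sketch of f_Phi from Eq. (1): multiply each embedding by a random |
|     m x d matrix Phi with i.i.d. N(0, 1/m) entries. Choosing |
|     m = ceil(beta * d) is an assumed reading of the truncated caption.""" |
|     rng = np.random.default_rng() if rng is None else rng |
|     d = X.shape[1] |
|     m = max(1, int(np.ceil(beta * d)))  # assumed dimensionality choice |
|     Phi = rng.normal(0.0, np.sqrt(1.0 / m), size=(m, d)) |
|     return X @ Phi.T  # row i is f_Phi(x_i) = Phi @ x_i |
| ``` |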
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "-||xi \u2212 xj|| at \u03b5 = 5.0 Distance Approximation (GLOVE)" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "\u27e8wi, wj\u27e9 -\u27e8xi, xj\u27e9 at \u03b5 = 5.0" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Inner Prod Approximation (GLOVE)" |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Linear Model Performance (GLOVE)" |
| }, |
| "FIGREF6": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Comparing effects of dimensionality reduction during training vs. after (GLOVE). -||xi \u2212 xj|| at \u03b5 = 5.0" |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Claim 3 Restated). Let f : X \u2192 R m . Then publishing A(x) = f (x) + \u03ba where \u03ba is sampled from the distribution in R m with density p(z) \u221d exp(\u2212 z /\u2206 f ) satisfies ( , 0)-Lipschitz privacy." |
| }, |
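| The mechanism in FIGREF7 can be sampled directly: for density p(z) proportional to exp(-eps * ||z|| / Delta_f) in R^m, the direction of z is uniform on the unit sphere and the radius ||z|| has a Gamma(m, Delta_f/eps) marginal, since the surface-area factor contributes r^(m-1) to the radial density. A minimal sketch, assuming a user-supplied sensitivity bound delta_f standing in for Delta_f: |
| ``` |
| import numpy as np |
|  |
| def sample_kappa(m, eps, delta_f, rng=None): |
|     """Draw kappa in R^m with density proportional to |
|     exp(-eps * ||z|| / delta_f): uniform direction on the unit sphere, |
|     radius ~ Gamma(shape=m, scale=delta_f / eps).""" |
|     rng = np.random.default_rng() if rng is None else rng |
|     direction = rng.normal(size=m) |
|     direction /= np.linalg.norm(direction) |
|     radius = rng.gamma(shape=m, scale=delta_f / eps) |
|     return radius * direction |
|  |
| def release(f_x, eps, delta_f, rng=None): |
|     """A(x) = f(x) + kappa as in the restated Claim 3; delta_f is an |
|     assumed, user-supplied sensitivity bound on f.""" |
|     f_x = np.asarray(f_x, dtype=float) |
|     return f_x + sample_kappa(f_x.shape[0], eps, delta_f, rng) |
| ``` |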
| "FIGREF8": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "[A(x) = w] = Pr[A(x) = w | E]Pr[E] + Pr[A(x) = w |\u0112]Pr[\u0112] \u2264 Pr[A(x) = w | E] + \u03b4,where we used that Pr[E] \u2264 1, Pr[A(x) = w |\u0112] \u2264 1, and Pr[\u0112] \u2264 \u03b4. Now under E, from Claim 10, Pr[A(x) = w] \u2264 exp( x \u2212 x )Pr[A(x ) = w]. Since the above argument holds for all x, x simultaneously, we get the A is ( , \u03b4)-Lipschitz private." |
| }, |
| "TABREF1": { |
| "text": "Training and test accuracy scores on classification tasks.", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |