Monketoo committed
Commit 3744075 · verified · Parent(s): 91509df

Add files using upload-large-folder tool

Files changed (43)
  1. samples/pdfs/1123995.pdf +0 -0
  2. samples/pdfs/1623821.pdf +0 -0
  3. samples/pdfs/1640880.pdf +0 -0
  4. samples/pdfs/1693876.pdf +0 -0
  5. samples/pdfs/1789294.pdf +0 -0
  6. samples/pdfs/1834803.pdf +0 -0
  7. samples/pdfs/2605706.pdf +0 -0
  8. samples/pdfs/2981612.pdf +0 -0
  9. samples/pdfs/3114781.pdf +0 -0
  10. samples/pdfs/4283718.pdf +0 -0
  11. samples/pdfs/4767451.pdf +0 -0
  12. samples/pdfs/491078.pdf +0 -0
  13. samples/pdfs/5640834.pdf +0 -0
  14. samples/pdfs/6274397.pdf +0 -0
  15. samples/pdfs/7659893.pdf +0 -0
  16. samples/pdfs/901380.pdf +0 -0
  17. samples/texts/2675357/page_1.md +32 -0
  18. samples/texts/2675357/page_10.md +29 -0
  19. samples/texts/2675357/page_11.md +25 -0
  20. samples/texts/2675357/page_12.md +33 -0
  21. samples/texts/2675357/page_13.md +19 -0
  22. samples/texts/2675357/page_14.md +39 -0
  23. samples/texts/2675357/page_15.md +40 -0
  24. samples/texts/2675357/page_16.md +34 -0
  25. samples/texts/2675357/page_17.md +38 -0
  26. samples/texts/2675357/page_18.md +33 -0
  27. samples/texts/2675357/page_19.md +5 -0
  28. samples/texts/2675357/page_2.md +9 -0
  29. samples/texts/2675357/page_3.md +15 -0
  30. samples/texts/2675357/page_4.md +25 -0
  31. samples/texts/2675357/page_5.md +23 -0
  32. samples/texts/2675357/page_6.md +33 -0
  33. samples/texts/2675357/page_7.md +38 -0
  34. samples/texts/2675357/page_8.md +41 -0
  35. samples/texts/2675357/page_9.md +35 -0
  36. samples/texts/2875771/page_1.md +26 -0
  37. samples/texts/2875771/page_10.md +35 -0
  38. samples/texts/2875771/page_13.md +31 -0
  39. samples/texts/2875771/page_16.md +27 -0
  40. samples/texts/2875771/page_17.md +37 -0
  41. samples/texts/2875771/page_19.md +19 -0
  42. samples/texts/2875771/page_2.md +9 -0
  43. samples/texts/2875771/page_20.md +7 -0
samples/pdfs/1123995.pdf ADDED
Binary file (59.9 kB)

samples/pdfs/1623821.pdf ADDED
Binary file (61.9 kB)

samples/pdfs/1640880.pdf ADDED
Binary file (72.2 kB)

samples/pdfs/1693876.pdf ADDED
Binary file (57.1 kB)

samples/pdfs/1789294.pdf ADDED
Binary file (54.4 kB)

samples/pdfs/1834803.pdf ADDED
Binary file (78 kB)

samples/pdfs/2605706.pdf ADDED
Binary file (77.2 kB)

samples/pdfs/2981612.pdf ADDED
Binary file (32.1 kB)

samples/pdfs/3114781.pdf ADDED
Binary file (69.4 kB)

samples/pdfs/4283718.pdf ADDED
Binary file (36.1 kB)

samples/pdfs/4767451.pdf ADDED
Binary file (31.1 kB)

samples/pdfs/491078.pdf ADDED
Binary file (76.3 kB)

samples/pdfs/5640834.pdf ADDED
Binary file (59.1 kB)

samples/pdfs/6274397.pdf ADDED
Binary file (13.5 kB)

samples/pdfs/7659893.pdf ADDED
Binary file (42.7 kB)

samples/pdfs/901380.pdf ADDED
Binary file (45.9 kB)
samples/texts/2675357/page_1.md ADDED
Optimal locally private estimation under $\ell_p$ loss for $1 \le p \le 2$

Min Ye

*Department of Electrical Engineering*
Princeton University
Princeton, NJ 08544
e-mail: yeemmi@gmail.com

Alexander Barg*

*Department of Electrical and Computer Engineering and Institute for Systems Research*
University of Maryland
College Park, MD 20742
e-mail: abarg@umd.edu

**Abstract:** We consider the minimax estimation problem of a discrete distribution with support size $k$ under local differential privacy constraints. A privatization scheme is applied to each raw sample independently, and we need to estimate the distribution of the raw samples from the privatized samples. A positive number $\epsilon$ measures the privacy level of a privatization scheme.

In our previous work (IEEE Trans. Inform. Theory, 2018), we proposed a family of new privatization schemes and the corresponding estimator. We also proved that our scheme and estimator are order optimal in the regime $e^\epsilon \ll k$ under both $\ell_2^2$ (mean square) and $\ell_1$ loss. In this paper, we sharpen this result by showing asymptotic optimality of the proposed scheme under the $\ell_p^p$ loss for all $1 \le p \le 2$. More precisely, we show that for any $p \in [1, 2]$ and any $k$ and $\epsilon$, the ratio between the worst-case $\ell_p^p$ estimation loss of our scheme and the optimal value approaches 1 as the number of samples tends to infinity. The lower bound on the minimax risk of private estimation that we establish as a part of the proof is valid for any loss function $\ell_p^p$, $p \ge 1$.

AMS 2000 subject classifications: 62G05.
Keywords and phrases: Minimax estimation, local differential privacy.

Received August 2018.

# 1. Introduction

This paper continues our work [28]. The context of the problem that we consider is related to a major challenge in the statistical analysis of user data, namely, the conflict between learning accurate statistics and protecting sensitive information about the individuals. As in [28], we rely on a particular formalization

* A. Barg is also with the Institute for Problems of Information Transmission (IITP), Russian Academy of Sciences, 127051 Moscow, Russia. His research was partially supported by NSF grants CCF1814487 and CCF1618603.
samples/texts/2675357/page_10.md ADDED
applying $Q$ to the raw samples. More precisely, for an estimator $\hat{p}_i$, we define its Bayes estimation loss

$$r_{i,S_i(p^*)}^{\ell_u^u}(Q, \hat{p}_i) := \mathbb{E}_{P \sim \text{Unif}(S_i(p^*))} \left[ \mathbb{E}_{Y^n \sim (PQ)^n} |\hat{p}_i(Y^n) - P_i|^u \right], \quad i \in [k];$$

then the optimal estimation loss is

$$r_{i,S_i(p^*)}^{\ell_u^u}(Q) := \inf_{\hat{p}_i} r_{i,S_i(p^*)}^{\ell_u^u}(Q, \hat{p}_i), \quad i \in [k].$$

Our approach to obtaining the lower bound on this Bayes estimation loss relies on a classical method in asymptotic statistics, namely, local asymptotic normality (LAN) of sequences of statistical models [19, 13]; see also [15, 24]. The exact formulation of the results that pertain to the method involves several technical concepts; we will limit ourselves to explaining the general idea and the implications for our problem. We will also confine ourselves to the one-dimensional case as opposed to the general formulation of LAN. Let $p_\theta$ be the density function of a distribution $P_\theta$, where the parameter $\theta$ takes values in an open subset $\Theta \subset \mathbb{R}$. For every fixed $x$ we have the following Taylor expansion:

$$\log \frac{p_{\theta+h}}{p_{\theta}}(x) = h \frac{\partial}{\partial \theta} \log p_{\theta}(x) + \frac{1}{2} h^2 \frac{\partial^2}{\partial \theta^2} \log p_{\theta}(x) + o(h^2).$$

Suppose that $X^n$ are $n$ i.i.d. samples drawn from the distribution $P_\theta$. It follows that

$$\log \prod_{i=1}^{n} \frac{p_{\theta+h/\sqrt{n}}(X_i)}{p_{\theta}(X_i)} = \frac{h}{\sqrt{n}} \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log p_{\theta}(X_i) + \frac{h^2}{2n} \sum_{i=1}^{n} \frac{\partial^2}{\partial \theta^2} \log p_{\theta}(X_i) + o(1).$$

Under some mild smoothness conditions, we have

$$\mathbb{E}_{X \sim P_{\theta}} \frac{\partial}{\partial \theta} \log p_{\theta}(X) = 0,$$

$$\mathbb{E}_{X \sim P_{\theta}} \left( \frac{\partial}{\partial \theta} \log p_{\theta}(X) \right)^2 = -\mathbb{E}_{X \sim P_{\theta}} \frac{\partial^2}{\partial \theta^2} \log p_{\theta}(X) = I_{\theta},$$

where $I_\theta$ is the Fisher information of $\theta$, which is assumed to be nonzero. Therefore, by the central limit theorem, $\frac{1}{\sqrt{n}} \sum_{i=1}^n \frac{\partial}{\partial \theta} \log p_\theta(X_i)$ is asymptotically normal with mean zero and variance $I_\theta$. Furthermore, $\frac{1}{n} \sum_{i=1}^n \frac{\partial^2}{\partial \theta^2} \log p_\theta(X_i)$ converges to $-I_\theta$ by the law of large numbers. Consequently, under suitable conditions we have

$$\log \prod_{i=1}^{n} \frac{p_{\theta+h/\sqrt{n}}(X_i)}{p_{\theta}(X_i)} = hX - \frac{1}{2} I_{\theta}h^2 + o(1), \quad \text{where } X \sim N(0, I_{\theta}).$$

The quadratic form on the right-hand side is very similar to the exponent of the Gaussian distribution, and one can derive a normal approximation from this similarity. More precisely, if $T_n$ is a sequence of statistics in the experiments
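The two score identities quoted above can be sanity-checked numerically; the following sketch (not part of the paper) does so for a Bernoulli($\theta$) model, where both sides of the second identity equal $1/(\theta(1-\theta))$:

```python
import numpy as np

# Numerical check of the score identities for a Bernoulli(theta) model:
# E[d/dtheta log p] = 0 and
# E[(d/dtheta log p)^2] = -E[d^2/dtheta^2 log p] = 1/(theta*(1-theta)).
theta = 0.3
x = np.array([0.0, 1.0])
pmf = np.array([1 - theta, theta])

score = x / theta - (1 - x) / (1 - theta)          # d/dtheta log p_theta(x)
second = -x / theta**2 - (1 - x) / (1 - theta)**2  # d^2/dtheta^2 log p_theta(x)

mean_score = np.sum(pmf * score)
fisher_from_score = np.sum(pmf * score**2)
fisher_from_curvature = -np.sum(pmf * second)

assert abs(mean_score) < 1e-12
assert abs(fisher_from_score - 1 / (theta * (1 - theta))) < 1e-9
assert abs(fisher_from_curvature - fisher_from_score) < 1e-9
```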
samples/texts/2675357/page_11.md ADDED
($P_{\theta+h/\sqrt{n}}: h \in \mathbb{R}$) such that $T_n$ converges in distribution for every $h$, then there exists a (randomized) statistic $T$ in the experiment $\mathcal{N}(h, I_\theta^{-1})$, $h \in \mathbb{R}$, such that $T_n$ converges in distribution to $T$ for every $h$; in other words, every converging sequence of statistics in the local experiments ($P_{\theta+h/\sqrt{n}}: h \in \mathbb{R}$) approaches in distribution the statistic of a single normal experiment. We refer in particular to [24, Ch. 7] for a detailed, accessible account of the above informal discussion.

The implications of the general LAN results for our problem can be stated as follows. When the constant $D'$ in (16) is large enough, the conditional distribution of $P_i$ given $Y^n = y^n$ is approximately a Gaussian distribution with variance $(I_{p_i^*})^{-1}$ for almost all$^2$ $y^n \in \mathcal{Y}^n$ as $n$ goes to infinity, where $I_{p_i}$ is the Fisher information of the parameter $p_i$. Before we calculate the value of $I_{p_i^*}$, let us recall a simple fact about the Gaussian distribution: if $X$ is a Gaussian random variable, then one can easily verify$^3$ that for any $u \ge 1$,

$$ \mathbb{E}X = \underset{a}{\operatorname{argmin}} \, \mathbb{E}|X - a|^u. \quad (18) $$

Therefore, the estimator $\hat{p}_i(y^n) = \mathbb{E}(P_i|Y^n = y^n)$ is asymptotically optimal for this Bayes estimation problem under the $\ell_u^u$ loss function for all $u \ge 1$. Since the variance of $P_i$ given $Y^n = y^n$ is $(I_{p_i^*})^{-1}$ for almost all $y^n \in \mathcal{Y}^n$, the Bayes estimation loss of this asymptotically optimal estimator is

$$ C_u(I_{p_i^*})^{-u/2}(1-o(1)). $$

Thus we conclude that

$$ r_{i,S_i(p^*)}^{\ell_u^u}(Q) \ge C_u(I_{p_i^*})^{-u/2}(1-o(1)) \quad \text{for all } u \ge 1. \quad (19) $$

It remains to calculate the value of $I_{p_i^*}$. To this end, we introduce some notation. For a given privatization scheme $Q \in \mathcal{D}_{\epsilon,E}$ with output size $L$, we write its output alphabet as $\mathcal{Y} = \{1,2,\dots,L\}$, and we use the shorthand notation

$$ q_{jv} := Q(j|v) \quad (20) $$

for all $j \in [L]$ and $v \in [k]$. For $j \in [L]$ and $y^n = (y^{(1)}, y^{(2)}, \dots, y^{(n)}) \in \mathcal{Y}^n$, define $w_j(y^n) := \sum_{v=1}^n 1[y^{(v)} = j]$ to be the number of times that symbol $j$ appears in $y^n$. Let $\mathbb{P}(y^n; p_i)$ be the probability mass function of a random vector $Y^n$ formed of i.i.d. samples drawn according to the distribution $m = \mathbf{p} Q$, where the components of $\mathbf{p}$ other than $p_i$ are calculated from $p_i$ according to (17). The random variables $w_j(Y^n)$ follow the multinomial distribution, and $\mathbb{E}w_j(Y^n) = nm(j)$, $j \in [L]$. Therefore,

$$ \log \mathbb{P}(y^n; p_i) = \sum_{j=1}^{L} w_j(y^n) \log \left( \sum_{v=1}^{k} p_v q_{jv} \right) $$

$^2$More precisely, for any $\epsilon_1, \epsilon_2 > 0$ there is $N$ such that for any $n > N$ there is a subset $E \subseteq \mathcal{Y}^n$ such that (1) $\mathbb{P}(E) > 1 - \epsilon_1$, and (2) for all $y^n \in E$ the relative difference between the pdf of the conditional distribution of $P_i$ given $Y^n = y^n$ and the Gaussian pdf is at most $\epsilon_2$.

$^3$Let $\phi(x)$ be the pdf of $X$ and note that $\phi(x) = \phi(2\mathbb{E}X - x)$ for all real $x$. By convexity of $|\cdot|^u$, $u \ge 1$, we have $|x - \mathbb{E}X|^u \le (1/2)(|a - x|^u + |2\mathbb{E}X - x - a|^u)$ for all $a$. Integrating against $\phi(x)$ and using the symmetry condition, we obtain that $\mathbb{E}|X - \mathbb{E}X|^u \le \mathbb{E}|X - a|^u$ for all $u \ge 1$, $a \in \mathbb{R}$.
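Equation (18) is easy to confirm by Monte Carlo; the sketch below (not from the paper, with arbitrary test values for the mean and scale) scans a grid of candidates $a$ and checks that the empirical minimizer of $\mathbb{E}|X-a|^u$ sits at the mean for several $u \ge 1$:

```python
import numpy as np

# Monte Carlo check of Eq. (18): for Gaussian X, a = E[X] minimizes
# E|X - a|^u for every u >= 1.  Mean 2.0 and scale 1.5 are test values.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200_000)

for u in (1.0, 1.5, 2.0):
    grid = np.linspace(0.0, 4.0, 81)                 # candidate values of a
    losses = [np.mean(np.abs(x - a) ** u) for a in grid]
    best_a = grid[int(np.argmin(losses))]
    assert abs(best_a - 2.0) < 0.1                   # minimizer sits at the mean
```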
samples/texts/2675357/page_12.md ADDED
$$ = \sum_{j=1}^{L} w_j(y^n) \log \left( p_i q_{ji} + \sum_{v \neq i} \left( p_v^* - \frac{1}{k-1}(p_i - p_i^*) \right) q_{jv} \right), $$

and the Fisher information of $p_i$ is

$$
\begin{align*}
I(p_i) &= - \mathbb{E}_{Y^n \sim (\mathbf{p}\mathbf{Q})^n} \left[ \frac{d^2}{dp_i^2} \log \mathbb{P}(Y^n; p_i) \right] \\
&= \sum_{j=1}^{L} \frac{\left( q_{ji} - \frac{1}{k-1} \sum_{v \neq i} q_{jv} \right)^2}{\left( p_i q_{ji} + \sum_{v \neq i} \left( p_v^* - \frac{1}{k-1}(p_i - p_i^*) \right) q_{jv} \right)^2} \mathbb{E}_{Y^n \sim (\mathbf{p}\mathbf{Q})^n} w_j(Y^n) \\
&= \sum_{j=1}^{L} \frac{\left( q_{ji} - \frac{1}{k-1} \sum_{v \neq i} q_{jv} \right)^2}{\left( \sum_{v=1}^{k} p_v q_{jv} \right)^2} \mathbb{E}_{Y^n \sim (\mathbf{p}\mathbf{Q})^n} w_j(Y^n) \\
&= n \sum_{j=1}^{L} \frac{\left( q_{ji} - \frac{1}{k-1} \sum_{v \neq i} q_{jv} \right)^2}{\sum_{v=1}^{k} p_v q_{jv}} \\
&= \frac{n k^2}{(k-1)^2} \sum_{j=1}^{L} \frac{\left( q_{ji} - \frac{1}{k} \sum_{v=1}^{k} q_{jv} \right)^2}{\sum_{v=1}^{k} p_v q_{jv}},
\end{align*}
$$

where the $p_v$'s on the last line are given by (17). In particular,

$$ I_{p_i^*} = \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{\left( q_{ji} - \frac{1}{k} \sum_{v=1}^{k} q_{jv} \right)^2}{\sum_{v=1}^{k} p_v^* q_{jv}}. $$

Combining this with (19), we have that for all $u \ge 1$,

$$ r_{i,S_i(\mathbf{p}^*)}^{\ell_u^u}(\mathbf{Q}) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{\left( q_{ji} - \frac{1}{k} \sum_{v=1}^{k} q_{jv} \right)^2}{\sum_{v=1}^{k} p_v^* q_{jv}} \right)^{-u/2} - o(n^{-u/2}). $$

For $j \in [L]$, define

$$ q_j := \frac{1}{k} \sum_{v=1}^{k} q_{jv}. \tag{21} $$

It is clear that when $\mathbf{p}^*$ is in the neighborhood of the uniform distribution $\mathbf{p}_U$, i.e., when $p_v^* = 1/k + o_n(1)$ for all $v \in [k]$, we have

$$ r_{i,S_i(p^*)}^{\ell_u^u}(\mathbf{Q}) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2} - o(n^{-u/2}) \quad \text{for all } u \ge 1. \tag{22} $$

### 4.3. *Proof of* (14)

Our first step in this section will be to prove a lower bound on $r_{i,\text{Bayes}}^{\ell_u^u}(\mathbf{Q})$. Let us phrase the claim in (22) in a more detailed form: For any $\delta > 0$, there exists
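The last step of the Fisher-information computation rewrites the score coefficient via $q_{ji} - \frac{1}{k-1}\sum_{v \neq i} q_{jv} = \frac{k}{k-1}(q_{ji} - q_j)$. A quick numerical sketch (not from the paper; the matrix below is an arbitrary nonnegative test matrix) confirms this algebraic identity:

```python
import numpy as np

# Check of the algebraic step used in the Fisher-information derivation:
# q_{ji} - (1/(k-1)) * sum_{v != i} q_{jv} = (k/(k-1)) * (q_{ji} - q_j),
# where q_j = (1/k) * sum_v q_{jv}.
rng = np.random.default_rng(1)
k, L = 5, 7
Q = rng.random((L, k))          # arbitrary nonnegative matrix (q_{jv})
i = 2
qbar = Q.mean(axis=1)           # q_j

lhs = Q[:, i] - (Q.sum(axis=1) - Q[:, i]) / (k - 1)
rhs = k / (k - 1) * (Q[:, i] - qbar)
assert np.allclose(lhs, rhs)
```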
samples/texts/2675357/page_13.md ADDED
$D_0 > 0$ such that whenever the constant $D'$ in the definition of $S_i(\mathbf{p}^*)$ is larger than $D_0$,

$$r_{i,S_i(p^*)}^{\ell_u^u}(\mathbf{Q}) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2} - \delta n^{-u/2} \quad \text{for all } u \ge 1. \quad (23)$$

The constant $D'$ is required to be large for the local asymptotic normality arguments to hold (refer again to [15, Chapter 2, Theorem 1.1] and [24, Ch. 7]).

**Proposition 4.1.** Let $\mathcal{P}$ be the Euclidean ball around $\mathbf{p}_U$ defined in (13). For a sufficiently large constant $D$ and any $u \ge 1$ we have

$$r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2} - o(n^{-u/2}). \quad (24)$$

*Proof.* We can view $\mathcal{P}$ as a union of (uncountably many) parallel line segments with direction vector $\mathbf{v}_i$ defined in (15). Each of these line segments can be written as $S_i(\mathbf{p}^*)$ (see (16)), with a suitably chosen midpoint $\mathbf{p}^* \in \mathcal{P}$. Since the midpoints of all the line segments lie inside $\mathcal{P}$, which is a neighborhood of the uniform distribution, by (23) we have that for any estimator $\hat{\mathbf{p}}_i$, the average $\ell_u^u$ estimation loss $r_{i,S_i(p^*)}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}_i)$ on any of these line segments $S_i(\mathbf{p}^*)$ with $D' \ge D_0$ is lower bounded by

$$r_{i,S_i(p^*)}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}_i) \ge r_{i,S_i(p^*)}^{\ell_u^u}(\mathbf{Q}) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2} - \delta n^{-u/2}$$

for $u \ge 1$. To compute the average estimation loss $r_{i,\text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}_i)$ on $\mathcal{P}$ we need to average over all the segments with weight proportional to the length of the segment. Given $D_0$, we can choose $D$ in (13) large enough so that the proportion of the segments $S_i(\mathbf{p}^*)$ with $D' \ge D_0$ among all the segments in $\mathcal{P}$ is arbitrarily close to one (formally, denote the union of such segments as $\mathcal{P}_0$; then $\text{Vol}(\mathcal{P}_0)/\text{Vol}(\mathcal{P})$ can be made arbitrarily close to 1 as long as we set $D/D_0$ to be large enough). The average estimation loss along each of these segments is uniformly bounded below as in (23), and thus the average loss on $\mathcal{P}_0$ is lower bounded by the same quantity. Combining this with the fact that $\text{Vol}(\mathcal{P}_0)/\text{Vol}(\mathcal{P}) = 1 - o(1)$, we have

$$r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}_i) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2} - o(n^{-u/2}) \quad \text{for all } u \ge 1.$$

This lower bound holds for any estimator $\hat{\mathbf{p}}_i$, and this implies the claimed lower bound (24). □
samples/texts/2675357/page_14.md ADDED
We will need the following lemma.

**Lemma 4.2.** For every $\mathbf{Q} \in \mathcal{D}_{\epsilon,E}$ with output alphabet $\mathcal{Y} = \{1,2,\dots,L\}$ we have

$$ \sum_{i=1}^{k} \frac{q_{ji}^2}{q_j^2} \le k \left( 1+(e^\epsilon-1)^2 \frac{d^*(k-d^*)}{(d^*e^\epsilon+k-d^*)^2} \right) \quad \text{for all } j \in [L]. $$

*Proof.* Let $m_j := \min_{i \in [k]} q_{ji}$. According to the definition of $\mathcal{D}_{\epsilon,E}$ in (4), the coordinates of the vector $(q_{ji}, i \in [k])$ are either $m_j e^\epsilon$ or $m_j$. Let $d$ be the number of $m_j e^\epsilon$ entries; then

$$ q_j = \frac{m_j}{k}(de^\epsilon + k - d), $$

$$ \sum_{i=1}^{k} q_{ji}^{2} = m_{j}^{2}(de^{2\epsilon} + k - d). $$

We obtain

$$
\begin{aligned}
\sum_{i=1}^{k} \frac{q_{ji}^2}{q_j^2} &= \frac{k^2(de^{2\epsilon} + k - d)}{(de^\epsilon + k - d)^2} = k \frac{(de^{2\epsilon} + k - d)(d + k - d)}{(de^\epsilon + k - d)^2} \\
&= k \frac{d^2 e^{2\epsilon} + (k - d)^2 + d(k - d)(e^{2\epsilon} + 1)}{(de^\epsilon + k - d)^2} \\
&= k \frac{d^2 e^{2\epsilon} + 2d(k - d)e^\epsilon + (k - d)^2 + d(k - d)(e^{2\epsilon} - 2e^\epsilon + 1)}{(de^\epsilon + k - d)^2} \\
&= k \frac{(de^\epsilon + k - d)^2 + d(k - d)(e^\epsilon - 1)^2}{(de^\epsilon + k - d)^2} \\
&= k \left( 1 + (e^\epsilon - 1)^2 \frac{d(k - d)}{(de^\epsilon + k - d)^2} \right) \\
&\le k \left( 1 + (e^\epsilon - 1)^2 \frac{d^*(k - d^*)}{(d^*e^\epsilon + k - d^*)^2} \right),
\end{aligned}
$$

where the last inequality follows from the definition of $d^*$ in (9). $\square$

Now we are ready to prove (14). Using the obvious relations $\sum_{j=1}^L q_{ji} = \sum_{j=1}^L q_j = 1$, we can simplify the right-hand side of (24) as follows:

$$
\sum_{j=1}^{L} \frac{(q_{ji} - q_j)^2}{q_j} = \sum_{j=1}^{L} \frac{q_{ji}^2}{q_j} - 2\sum_{j=1}^{L} q_{ji} + \sum_{j=1}^{L} q_j = \sum_{j=1}^{L} \frac{q_{ji}^2}{q_j} - 1.
$$

Now let us sum (24) over $i \in [k]$ on both sides and use the simplification above:

$$ \sum_{i=1}^{k} r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}) \ge C_u \sum_{i=1}^{k} \left( \frac{n k^2}{(k-1)^2} \left( \sum_{j=1}^{L} \frac{q_{ji}^2}{q_j} - 1 \right) \right)^{-u/2} - o(n^{-u/2}). \quad (25) $$
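Both the equality behind Lemma 4.2 and the simplification above can be verified numerically. The sketch below (not from the paper; the parameters and random matrix are test values) checks them on a two-valued row of the kind the lemma considers and on a random column-stochastic matrix:

```python
import numpy as np

# (i) For a row with d entries m*e^eps and k-d entries m:
#     sum_i q_{ji}^2 / q_j^2 = k * (1 + (e^eps-1)^2 * d*(k-d)/(d*e^eps+k-d)^2).
# (ii) sum_j (q_{ji}-q_j)^2/q_j = sum_j q_{ji}^2/q_j - 1 whenever the
#      columns of (q_{jv}) each sum to 1.
eps, k, d, m = 0.7, 6, 2, 0.05
row = np.array([m * np.exp(eps)] * d + [m] * (k - d))
qj = row.mean()
lhs = np.sum(row**2 / qj**2)
rhs = k * (1 + (np.exp(eps) - 1) ** 2 * d * (k - d) / (d * np.exp(eps) + k - d) ** 2)
assert abs(lhs - rhs) < 1e-9

rng = np.random.default_rng(2)
L = 8
Q = rng.random((L, k))
Q /= Q.sum(axis=0)              # make every column a probability vector
qbar = Q.mean(axis=1)           # q_j
i = 0
assert abs(np.sum((Q[:, i] - qbar) ** 2 / qbar)
           - (np.sum(Q[:, i] ** 2 / qbar) - 1)) < 1e-9
```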
samples/texts/2675357/page_15.md ADDED
Since for $u > 0$, $x^{-u/2}$ is a convex function for $x > 0$, we can further bound the right-hand side of (25) from below:

$$
\begin{align*}
& \sum_{i=1}^{k} \left( \frac{nk^2}{(k-1)^2} \left( \sum_{j=1}^{L} \frac{q_{ji}^2}{q_j} - 1 \right) \right)^{-u/2} \ge k \left( \frac{1}{k} \sum_{i=1}^{k} \frac{nk^2}{(k-1)^2} \left( \sum_{j=1}^{L} \frac{q_{ji}^2}{q_j} - 1 \right) \right)^{-u/2} \\
&= k \left( \frac{nk}{(k-1)^2} \sum_{j=1}^{L} \sum_{i=1}^{k} \frac{q_{ji}^2}{q_j} - \frac{nk^2}{(k-1)^2} \right)^{-u/2} \\
&= k \left( \frac{nk}{(k-1)^2} \sum_{j=1}^{L} \left( q_j \sum_{i=1}^{k} \frac{q_{ji}^2}{q_j^2} \right) - \frac{nk^2}{(k-1)^2} \right)^{-u/2} \\
&\ge k \left( \frac{nk^2}{(k-1)^2} \left( 1 + (e^\epsilon - 1)^2 \frac{d^*(k-d^*)}{(d^*e^\epsilon + k - d^*)^2} \right) \sum_{j=1}^{L} q_j - \frac{nk^2}{(k-1)^2} \right)^{-u/2} \\
&= k \left( \frac{nk^2 (e^\epsilon - 1)^2 d^*(k-d^*)}{(k-1)^2 (d^*e^\epsilon + k - d^*)^2} \right)^{-u/2} \\
&= \frac{k}{n^{u/2}} M(k, \epsilon)^{u/2} \quad \text{for all } \mathbf{Q} \in \mathcal{D}_{\epsilon,E},
\end{align*}
$$

where the second inequality follows from Lemma 4.2 (the inequality is inverted relative to the lemma because of the negative power $-u/2$). Combining this with (25), we conclude that

$$
\sum_{i=1}^{k} r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}) \geq \frac{k}{n^{u/2}} C_u M(k, \epsilon)^{u/2} - o(n^{-u/2}) \quad \text{for all } \mathbf{Q} \in \mathcal{D}_{\epsilon,E}.
$$

Thus we have established (14), and this completes the proof of Theorem 3.2.

# 5. Proof of Theorem 3.3

We begin by showing that for the privatization scheme $\mathbf{Q}_{k,\epsilon,d}$ defined in (6) and the estimator (7), the $\ell_u^u$ estimation loss is maximized at the uniform distribution $\mathbf{p}_U$ for all $0 < u \le 2$ when $n$ is large. To shorten the notation, rewrite (7) as

$$
\hat{p}_i(y^n) = A \frac{t_i(y^n)}{n} - B, \quad i \in [k],
$$

where

$$
A := \frac{(k-1)e^\epsilon + \frac{(k-1)(k-d)}{d}}{(k-d)(e^\epsilon - 1)}, \quad B := \frac{(d-1)e^\epsilon + k-d}{(k-d)(e^\epsilon - 1)}.
$$

In [28] we have shown that the estimator $\hat{p}_i(y^n)$ is unbiased, i.e.,

$$
p_i = A \, \mathbb{E}_{Y^n \sim (\mathbf{p}\mathbf{Q}_{k,\epsilon,d})^n} \left[ \frac{t_i(Y^n)}{n} \right] - B, \quad i \in [k].
$$
samples/texts/2675357/page_16.md ADDED
By definition,

$$t_i(Y^n) = \sum_{j=1}^{n} 1[Y_i^{(j)} = 1]$$

is the sum of $n$ i.i.d. Bernoulli random variables with parameter

$$\mathbb{P}[Y_i^{(j)} = 1] = \mathbb{E} \frac{t_i(Y^n)}{n} = \frac{p_i}{A} + \frac{B}{A}.$$

Therefore the variance of $\frac{t_i(Y^n)}{n}$ is $\frac{1}{n}\left(\frac{p_i}{A} + \frac{B}{A}\right)\left(1 - \frac{p_i}{A} - \frac{B}{A}\right)$, and the variance of $\hat{p}_i(Y^n)$ is

$$\mathrm{Var}\, \hat{p}_i(Y^n) = A^2 \frac{1}{n} \left( \frac{p_i}{A} + \frac{B}{A} \right) \left( 1 - \frac{p_i}{A} - \frac{B}{A} \right) = \frac{1}{n} (p_i + B)(A - p_i - B).$$

Using the central limit theorem, we then obtain for the absolute moment of $\hat{p}_i(Y^n)$ around $p_i$ the following approximation:

$$\mathbb{E}_{Y^n \sim (\mathbf{p}\mathbf{Q}_{k,\epsilon,d})^n} |\hat{p}_i(Y^n) - p_i|^u = C_u \left( \frac{1}{n}(p_i + B)(A - p_i - B) \right)^{u/2} + o(n^{-u/2}),$$

where $C_u$ is the $u$-th absolute moment of the $\mathcal{N}(0,1)$ random variable; see Section 3. Therefore,

$$
\begin{align*}
\mathbb{E}_{Y^n \sim (\mathbf{p}\mathbf{Q}_{k,\epsilon,d})^n} \ell_u^u(\hat{\mathbf{p}}(Y^n), \mathbf{p}) &= \sum_{i=1}^k C_u \left( \frac{1}{n}(p_i + B)(A-p_i-B) \right)^{u/2} + o(n^{-u/2}) \\
&\le k C_u n^{-u/2} \left( \frac{1}{k} \sum_{i=1}^k (p_i+B)(A-p_i-B) \right)^{u/2} + o(n^{-u/2}) \\
&= k C_u n^{-u/2} \left( \frac{A}{k} - \frac{2B}{k} + AB - B^2 - \frac{1}{k} \sum_{i=1}^k p_i^2 \right)^{u/2} + o(n^{-u/2}) \\
&\le k C_u n^{-u/2} \left( \frac{A}{k} - \frac{2B}{k} + AB - B^2 - \frac{1}{k^2} \right)^{u/2} + o(n^{-u/2}),
\end{align*}
$$

where the first inequality follows from the fact that $x^{u/2}$ is a concave function of $x$ on $(0, +\infty)$ for all $0 < u \le 2$, and the last line uses the Cauchy-Schwarz inequality. Both inequalities hold with equality if and only if $\mathbf{p}$ is the uniform distribution. Thus when $n$ is large, for all $0 < u \le 2$ and all $1 \le d \le k-1$, we have

$$r_{k,n}^{\ell_u^u}(\mathbf{Q}_{k,\epsilon,d}, \hat{\mathbf{p}}) = \mathbb{E}_{Y^n \sim (\mathbf{p}_U \mathbf{Q}_{k,\epsilon,d})^n} \ell_u^u(\hat{\mathbf{p}}(Y^n), \mathbf{p}_U).$$

In particular, this also holds for $d=d^*$. Next we calculate the estimation loss at the uniform distribution. By symmetry, it is clear that

$$\mathbb{E}_{Y^n \sim (\mathbf{p}_U \mathbf{Q}_{k,\epsilon,d^*})^n} \left| \hat{p}_i(Y^n) - \frac{1}{k} \right|^2 = \frac{1}{k} \left( \mathbb{E}_{Y^n \sim (\mathbf{p}_U \mathbf{Q}_{k,\epsilon,d^*})^n} \ell_2^2(\hat{\mathbf{p}}(Y^n), \mathbf{p}_U) \right)$$
samples/texts/2675357/page_17.md ADDED
$$ = \frac{1}{k} r_{k,n}^{\ell_2^2}(\mathbf{Q}_{k,\epsilon,d^*}, \hat{\mathbf{p}}) = \frac{M(k, \epsilon)}{n}. $$

Therefore, when the input distribution is uniform, $\hat{p}_i(Y^n)$ can be approximated for large $n$ by a Gaussian random variable with mean $1/k$ and variance $\frac{M(k,\epsilon)}{n}$. Thus,

$$ \mathbb{E}_{Y^n \sim (\mathbf{p}_U \mathbf{Q}_{k,\epsilon,d^*})^n} \left| \hat{p}_i(Y^n) - \frac{1}{k} \right|^u = C_u \left( \frac{M(k, \epsilon)}{n} \right)^{u/2} + o(n^{-u/2}), $$

so for $0 < u \le 2$,

$$
\begin{aligned}
r_{k,n}^{\ell_u^u}(\mathbf{Q}_{k,\epsilon,d^*}, \hat{\mathbf{p}}) &= \mathop{\mathbb{E}}_{Y^n \sim (\mathbf{p}_U \mathbf{Q}_{k,\epsilon,d^*})^n} \ell_u^u(\hat{\mathbf{p}}(Y^n), \mathbf{p}_U) \\
&= \frac{k}{n^{u/2}} C_u M(k, \epsilon)^{u/2} + o(n^{-u/2}).
\end{aligned}
$$

This completes the proof of Theorem 3.3.

# References

[1] ACHARYA, J., KAMATH, G., SUN, Z. and ZHANG, H. (2018). INSPECTRE: privately estimating the unseen. In *Proceedings of the 35th International Conference on Machine Learning* **80** 30–39.

[2] ACHARYA, J., SUN, Z. and ZHANG, H. (2018). Differentially private testing of identity and closeness of discrete distributions. In *Advances in Neural Information Processing Systems* **31** 6878–6891. Curran Associates, Inc.

[3] ACHARYA, J., SUN, Z. and ZHANG, H. (2019). Hadamard response: estimating distributions privately, efficiently, and with little communication. In *Proceedings of Machine Learning Research* **89** 1120–1129.

[4] BASSILY, R. and SMITH, A. (2015). Local, private, efficient protocols for succinct histograms. In *Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing* 127–135. ACM. MR3388190

[5] DUCHI, J., WAINWRIGHT, M. J. and JORDAN, M. I. (2013). Local privacy and minimax bounds: Sharp rates for probability estimation. In *Advances in Neural Information Processing Systems* 1529–1537. MR3727612

[6] DUCHI, J. C., JORDAN, M. I. and WAINWRIGHT, M. J. (2013). Local privacy and statistical minimax rates. In *54th Annual IEEE Symposium on the Foundations of Computer Science (FOCS)* 429–438. MR3246246

[7] DUCHI, J. C., JORDAN, M. I. and WAINWRIGHT, M. J. (2018). Minimax optimal procedures for locally private estimation. *Journal of the American Statistical Association* **113** 182–201. MR3803452

[8] DWORK, C. (2008). Differential privacy: A survey of results. In *International Conference on Theory and Applications of Models of Computation* 1–19. Springer.

[9] DWORK, C., MCSHERRY, F., NISSIM, K. and SMITH, A. (2006). Calibrating noise to sensitivity in private data analysis. In *Theory of Cryptography Conference* 265–284. Springer. MR2241676
samples/texts/2675357/page_18.md ADDED
[10] ERLINGSSON, Ú., PIHUR, V. and KOROLOVA, A. (2014). RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In *Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security* 1054–1067. ACM.

[11] GABOARDI, M. and ROGERS, R. (2018). Local private hypothesis testing: chi-square tests. In *Proc. 35th ICML, Stockholm, Sweden, 2018* **80** 1626–1635.

[12] GHOSH, A., ROUGHGARDEN, T. and SUNDARARAJAN, M. (2012). Universally utility-maximizing privacy mechanisms. *SIAM Journal on Computing* **41** 1673–1693. MR3029267

[13] HÁJEK, J. (1972). Local asymptotic minimax and admissibility in estimation. In *Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability* **1** 175–194. MR0400513

[14] HSU, J., KHANNA, S. and ROTH, A. (2012). Distributed private heavy hitters. In *International Colloquium on Automata, Languages, and Programming* 461–472. Springer. MR2995330

[15] IBRAGIMOV, I. A. and HAS'MINSKII, R. Z. (1981). *Statistical Estimation*. Springer.

[16] KAIROUZ, P., BONAWITZ, K. and RAMAGE, D. (2016). Discrete distribution estimation under local privacy. In *Proc. 33rd Int. Conf. Machine Learning* **48** 2436–2444.

[17] KAIROUZ, P., OH, S. and VISWANATH, P. (2016). Extremal mechanisms for local differential privacy. *Journal of Machine Learning Research* **17** 1–51. MR3491111

[18] KAMATH, S., ORLITSKY, A., PICHAPATI, V. and SURESH, A. T. (2015). On learning distributions from their samples. *Journal of Machine Learning Research: Workshop and Conference Proceedings* **40** 1–35.

[19] LE CAM, L. (2012). *Asymptotic Methods in Statistical Decision Theory*. Springer Science & Business Media. MR0856411

[20] LEHMANN, E. L. and CASELLA, G. (2006). *Theory of Point Estimation*. Springer Science & Business Media. MR1639875

[21] MISHRA, N. and SANDLER, M. (2006). Privacy via pseudorandom sketches. In *Proceedings of the Twenty-Fifth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems* 143–152. ACM.

[22] SMITH, A. (2011). Privacy-preserving statistical estimation with optimal convergence rates. In *Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing* 813–822. ACM. MR2932032

[23] THAKURTA, A. G. and SMITH, A. (2013). (Nearly) optimal algorithms for private online learning in full-information and bandit settings. In *Advances in Neural Information Processing Systems* 2733–2741.

[24] VAN DER VAART, A. W. (1998). *Asymptotic Statistics*. Cambridge University Press. MR1652247

[25] WANG, S., HUANG, L., WANG, P., NIE, Y., XU, H., YANG, W., LI, X. and QIAO, C. (2016). Mutual information optimally local private discrete distribution estimation. arXiv:1607.08025.

[26] WARNER, S. L. (1965). Randomized response: A survey technique for elim-
samples/texts/2675357/page_19.md ADDED
inating evasive answer bias. *Journal of the American Statistical Association* **60** 63–69.

[27] YE, M. and BARG, A. (2017). Asymptotically optimal private estimation under mean square loss. arXiv:1708.00059.

[28] YE, M. and BARG, A. (2018). Optimal schemes for discrete distribution estimation under locally differential privacy. *IEEE Trans. Inform. Theory* **64** 5662–5676. MR3832328
samples/texts/2675357/page_2.md ADDED
@@ -0,0 +1,9 @@
+ of user privacy called *differential privacy*, introduced in [9, 8]. Generally speaking, differential privacy requires that the adversary not be able to reliably infer an individual's data from public statistics even with access to all the other users' data. The concept of differential privacy has been developed in two different contexts: the *global privacy* context (for instance, when institutions release statistics related to groups of people) [12], and the *local privacy* context when individuals disclose their personal data [6].
+
+ In this paper, we consider the minimax estimation problem of a discrete distribution with support size $k$ under locally differential privacy. This problem has been studied in the non-private setting [18, 20], where we can learn the distribution from the raw samples. In the private setting, we need to estimate the distribution of raw samples from the privatized samples which are generated independently from the raw samples according to a conditional distribution $Q$ (also called a *privatization scheme*). Given a privacy parameter $\epsilon > 0$, we say that $Q$ is $\epsilon$-locally differentially private if the probabilities of the same output conditional on different inputs differ by a factor of at most $e^\epsilon$. Clearly, smaller $\epsilon$ means that it is more difficult to infer the original data from the privatized samples, and thus leads to higher privacy. For a given $\epsilon$, our objective is to find the optimal $\epsilon$-private scheme that minimizes the expected estimation loss for the worst-case distribution. In this paper, we are mainly concerned with the scenario where we have a large number of samples, which captures the modern trend toward “big data” analytics.
+
+ ## 1.1. Existing results
+
+ The following two privatization schemes are the most well-known in the literature: the $k$-ary Randomized Aggregatable Privacy-Preserving Ordinal Response ($k$-RAPPOR) scheme [5, 10], and the $k$-ary Randomized Response ($k$-RR) scheme [26, 17]. The $k$-RAPPOR scheme is order optimal in the high privacy regime where $\epsilon$ is very close to 0, and the $k$-RR scheme is order optimal in the low privacy regime where $e^\epsilon \approx k$ [16]. Very recently, a family of privatization schemes and the corresponding estimators were proposed independently by Wang et al. [25] and the present authors [28]. In [28], we further showed that under both $\ell_2^2$ (mean square) and $\ell_1$ loss, these privatization schemes and the corresponding estimators are order-optimal in the medium to high privacy regimes when $e^\epsilon \ll k$. Subsequent to our work, [3] proposed another privatization scheme and proved that it is order optimal in all regimes for $\ell_1$ loss. At the same time, prior to this paper, no schemes were shown to be asymptotically optimal in the literature.
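As a concrete illustration (not taken from the paper), the $k$-RR mechanism admits a very short implementation; the alphabet `{0, ..., k-1}` and the function name are our own choices:

```python
import math
import random

def k_rr(x, k, eps, rng=random):
    """k-ary randomized response: report the true symbol x in {0, ..., k-1}
    with probability e^eps / (e^eps + k - 1), otherwise report a uniformly
    chosen *other* symbol.  The output probabilities for any two inputs
    differ by a factor of at most e^eps, so the mechanism is eps-locally
    differentially private."""
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p_true:
        return x
    y = rng.randrange(k - 1)  # uniform over the k - 1 other symbols
    return y if y < x else y + 1
```

For $\epsilon \to 0$ the report is almost uniform (high privacy), while for $e^\epsilon \approx k$ the true symbol is reported roughly half the time, which matches the regime where $k$-RR is order optimal.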
+
+ Duchi et al. [7] gave an order-optimal lower bound on the minimax private estimation loss for the high privacy regime where $\epsilon$ is very close to 0. In [28], we proved a stronger lower bound which is order-optimal in the whole region $e^\epsilon \ll k$. This lower bound implies that the schemes and the estimators proposed in [25, 28] are order optimal in this regime. Here order-optimal means that the ratio between the true value and the lower bound is upper bounded by a constant (larger than 1) when $n$ and $k/e^\epsilon$ both become large enough.
samples/texts/2675357/page_3.md ADDED
@@ -0,0 +1,15 @@
+ ## 1.2. Our contributions
+
+ In this paper, we study the private estimation problem under the $\ell_p^p$ loss for $1 \le p \le 2$, which in particular includes the widely used $\ell_1$ and $\ell_2^2$ loss. We prove an asymptotically tight lower bound on the $\ell_p^p$ loss of the minimax private estimation for all values of $k, \epsilon$ and $1 \le p \le 2$. This improves upon the lower bounds in [28] and [7] in the following three respects: First, although the lower bounds in [28] and [7] are order-optimal, they differ from the true value by a factor of several hundred. In practice, an improvement of several percentage points is already considered a substantial advance (see, for instance, [16]), so tighter bounds are of interest. Second, the bounds in [28] and [7] only hold for certain regions of $k$ and $\epsilon$ while the lower bound in this paper holds for all values of $k$ and $\epsilon$. Finally, previous results were limited to $\ell_1$ and $\ell_2^2$ loss functions while the results in this paper hold for all $\ell_p^p$ loss functions, where $1 \le p \le 2$.
+
+ Furthermore, as an immediate consequence of our lower bound, we show that the schemes and the estimators proposed in [25, 28] are universally optimal under the $\ell_p^p$ loss for all $1 \le p \le 2$ in the sense that the ratio between the lower bound and the worst-case estimation loss of these schemes and estimators goes to 1 when $n$ goes to infinity.
+
+ In this paper we both generalize the results and shorten the proofs of the preprint [27], which addressed only the case of mean square loss.
+
+ ## 1.3. Related work
+
+ While in this paper we consider only the sample complexity, a recent work by Acharya et al. [3] took communication complexity into consideration and proposed a new privatization scheme with reduced communication complexity while maintaining the optimal order of sample complexity for the $\ell_1$ loss function. Apart from the $\ell_p$ loss measures considered in this paper, significant attention in the literature was devoted to the $\ell_\infty$ estimation of a discrete distribution (also called the heavy hitters problem) under local differential privacy [21, 14, 4]. Although we only consider the case where the same privatization scheme is applied to each raw sample in this paper, one can also construct privatization schemes that depend on the values of previously observed privatized samples. Such interactive privatization schemes are important for online and sequential procedures in private learning [22, 23, 7]. A recent work [1] addresses the private estimation problem of distributional properties when the support size $k$ is not known to the estimator. Other estimation-related problems that were studied under local differential privacy constraints include the problem of testing identity and closeness of discrete distributions [2] and hypothesis testing [11].
+
+ ## 1.4. Organization of the paper
+
+ In Section 2, we formulate the problem and give a more detailed review of the existing results. Section 3 is devoted to an overview of the main results of this paper. The proofs of the main results are given in Sections 4–5.
samples/texts/2675357/page_4.md ADDED
@@ -0,0 +1,25 @@
+ ## 2. Problem formulation and existing results
+
+ **Notation:**
+
+ Let $\mathcal{X} = \{1, 2, \dots, k\}$ be the source alphabet and let $\boldsymbol{p} = (p_1, p_2, \dots, p_k)$ be a probability distribution on $\mathcal{X}$. Denote by $\Delta_k = \{\boldsymbol{p} \in \mathbb{R}^k : p_i \ge 0 \text{ for } i = 1, 2, \dots, k, \sum_{i=1}^k p_i = 1\}$ the $k$-dimensional probability simplex. Let $X$ be a random variable (RV) that takes values on $\mathcal{X}$ according to $\boldsymbol{p}$, so that $p_i = P(X = i)$. Denote by $X^n = (X^{(1)}, X^{(2)}, \dots, X^{(n)})$ the vector formed of $n$ independent copies of the RV $X$. Denote the uniform distribution as $\boldsymbol{p}_U = (1/k, 1/k, \dots, 1/k)$.
+
+ ### 2.1. Problem formulation
+
+ In the classical (non-private) distribution estimation problem, we are given direct access to i.i.d. samples $\{X^{(i)}\}_{i=1}^n$ drawn according to some unknown distribution $\boldsymbol{p} \in \Delta_k$. Our goal is to estimate $\boldsymbol{p}$ based on the samples [20]. We define an estimator $\hat{\boldsymbol{p}}$ as a function $\hat{\boldsymbol{p}} : \mathcal{X}^n \to \mathbb{R}^k$, and assess its quality in terms of the worst-case risk (expected loss)
+
+ $$ \sup_{\boldsymbol{p} \in \Delta_k} \mathbb{E}_{X^n \sim p^n} \ell(\hat{\boldsymbol{p}}(X^n), \boldsymbol{p}), $$
+
+ where $\ell$ is some loss function. The minimax risk is defined as the solution of the following saddlepoint problem:
+
+ $$ r_{k,n}^{\ell} := \inf_{\hat{\boldsymbol{p}}} \sup_{\boldsymbol{p} \in \Delta_k} \mathbb{E}_{X^n \sim p^n} \ell(\hat{\boldsymbol{p}}(X^n), \boldsymbol{p}). $$
+
+ In the private distribution estimation problem, we can no longer access the raw samples $\{X^{(i)}\}_{i=1}^n$. Instead, we estimate the distribution $\boldsymbol{p}$ from the privatized samples $\{Y^{(i)}\}_{i=1}^n$, obtained by applying a privatization mechanism $\boldsymbol{Q}$ independently to each raw sample $X^{(i)}$. A privatization mechanism (also called privatization scheme) $\boldsymbol{Q}: \mathcal{X} \to \mathcal{Y}$ is simply a conditional distribution $\boldsymbol{Q}_{Y|X}$. The privatized samples $Y^{(i)}$ take values in a set $\mathcal{Y}$ (the “output alphabet”) that does not have to be the same as $\mathcal{X}$.
+
+ The quantities $\{Y^{(i)}\}_{i=1}^n$ are i.i.d. samples drawn according to the marginal distribution $\boldsymbol{m}$ given by
+
+ $$ m(S) = \sum_{i=1}^{k} Q(S|i)p_i \quad (1) $$
+
+ for any $S \in \sigma(\mathcal{Y})$, where $\sigma(\mathcal{Y})$ denotes an appropriate $\sigma$-algebra on $\mathcal{Y}$. In accordance with this setting, the estimator $\hat{\boldsymbol{p}}$ is a measurable function $\hat{\boldsymbol{p}} : \mathcal{Y}^n \to \mathbb{R}^k$. We assess the quality of the privatization scheme $\boldsymbol{Q}$ and the corresponding estimator $\hat{\boldsymbol{p}}$ by the worst-case risk
+
+ $$ r_{k,n}^{\ell}(\boldsymbol{Q}, \hat{\boldsymbol{p}}) := \sup_{\boldsymbol{p} \in \Delta_k} \mathbb{E}_{Y^n \sim m^n} \ell(\hat{\boldsymbol{p}}(\boldsymbol{Y}^n), \boldsymbol{p}), $$
samples/texts/2675357/page_5.md ADDED
@@ -0,0 +1,23 @@
+ where $\mathbf{m}^n$ is the $n$-fold product distribution and $\mathbf{m}$ is given by (1). Define the minimax risk of the privatization scheme $Q$ as
+
+ $$r_{k,n}^{\ell}(Q) := \inf_{\hat{p}} r_{k,n}^{\ell}(Q, \hat{p}). \quad (2)$$
+
+ **Definition 2.1.** For a given $\epsilon > 0$, a privatization mechanism $Q : \mathcal{X} \to \mathcal{Y}$ is said to be $\epsilon$-locally differentially private if for all $x, x' \in \mathcal{X}$
+
+ $$\sup_{S \in \sigma(\mathcal{Y})} \log \frac{Q(Y \in S | X = x)}{Q(Y \in S | X = x')} \le \epsilon. \quad (3)$$
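For a mechanism with finite output alphabet, condition (3) reduces to a per-output check of the column ratios of the transition matrix, since the probability of any set $S$ is a sum over singletons. A small sketch (our own helper, not from the paper), with `Q[x][y]` denoting $Q(y|x)$:

```python
import math

def is_locally_private(Q, eps, tol=1e-12):
    """Check (3) for a finite mechanism given as a row-stochastic matrix
    Q[x][y] = Q(y|x).  Because set probabilities are sums of singleton
    probabilities, it suffices that
        max_x Q(y|x) <= e^eps * min_x Q(y|x)   for every output y."""
    e_eps = math.exp(eps)
    for y in range(len(Q[0])):
        col = [row[y] for row in Q]
        if max(col) > e_eps * min(col) + tol:
            return False
    return True
```

For instance, the 3-ary randomized response matrix with diagonal 1/2 and off-diagonal 1/4 passes exactly at $\epsilon = \log 2$ and fails for any smaller $\epsilon$.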
+
+ Denote by $\mathcal{D}_{\epsilon}$ the set of all $\epsilon$-locally differentially private mechanisms. Given a privacy level $\epsilon$ and a loss function $\ell$, we seek to find the optimal $Q \in \mathcal{D}_{\epsilon}$ with the smallest possible minimax risk $r_{k,n}^{\ell}(Q)$ among all the $\epsilon$-locally differentially private mechanisms. As already mentioned, in this paper we will consider¹ $\ell = \ell_u^u$ for $1 \le u \le 2$, where for $x = (x_1, x_2, \dots, x_k) \in \mathbb{R}^k$
+
+ $$\ell_u^u(x) := \sum_{i=1}^{k} |x_i|^u.$$
+
+ It is easy to see that for any valid privatization scheme $Q$, the order of its $\ell_u^u$ minimax estimation risk is $\Theta(n^{-u/2})$, and $\lim_{n \to \infty} r_{k,n}^{\ell_u^u}(Q)n^{u/2}$ is the coefficient of the dominant term, which measures the performance of $Q$ when $n$ is large.
+
+ **Main Problem:** Suppose that the cardinality $k$ of the source alphabet is known to the estimator. For a given privacy level $\epsilon$, we would like to find the optimal (smallest possible) value of $\lim_{n \to \infty} r_{k,n}^{\ell_u^u}(Q)n^{u/2}$ among all $Q \in \mathcal{D}_{\epsilon}$ and to construct a privatization mechanism and a corresponding estimator to achieve this optimal value.
+
+ It is this problem that we address—and resolve—in this paper. Specifically, we prove a lower bound on $\lim_{n \to \infty} r_{k,n}^{\ell_u^u}(Q)n^{u/2}$ for $Q \in \mathcal{D}_{\epsilon}$, which implies that the mechanism and the corresponding estimator proposed in [28] are universally optimal for all loss functions $\ell_u^u$, $1 \le u \le 2$.
+
+ ## 2.2. Previous results
+
+ In this section we briefly review known results that are relevant to our problem. In Sect. 1.1 we mentioned several papers that have considered it, viz., [26, 5, 10, 17, 16, 25, 7, 3]. In this section we focus on the results of [28] because they are stated in the form convenient for our presentation.
+
+ ¹The standard notation for the loss function should be $\ell_p^p$, as we used in the Introduction. However, in order to avoid confusion with the notation for probability distribution, we will use $\ell_u^u$ from now on.
samples/texts/2675357/page_6.md ADDED
@@ -0,0 +1,33 @@
+ Let $\mathcal{D}_{\epsilon,F}$ be the set of $\epsilon$-locally differentially private schemes with finite output alphabet. Let
+
+ $$ \mathcal{D}_{\epsilon,E} = \left\{ \mathbf{Q} \in \mathcal{D}_{\epsilon,F} : \frac{Q(y|x)}{\min_{x' \in \mathcal{X}} Q(y|x')} \in \{1, e^{\epsilon}\} \text{ for all } x \in \mathcal{X} \text{ and all } y \in \mathcal{Y} \right\}. \quad (4) $$
+
+ In [28, Theorem 13], we have shown that
+
+ $$ r_{k,n}^{\ell_u^u}(\mathbf{Q}) \geq \inf_{\mathbf{Q}' \in \mathcal{D}_{\epsilon,E}} r_{k,n}^{\ell_u^u}(\mathbf{Q}') \quad \text{for all } \mathbf{Q} \in \mathcal{D}_{\epsilon}. \quad (5) $$
+
+ As a result, below we limit ourselves to schemes $\mathbf{Q} \in \mathcal{D}_{\epsilon,E}$. For such schemes, since the output alphabet is finite, we can write the marginal distribution $\mathbf{m}$ in (1) as a vector $\mathbf{m} = (\sum_{j=1}^k p_j Q(y|j), y \in \mathcal{Y})$. We will also use the shorthand notation $\mathbf{m} = \mathbf{p}\mathbf{Q}$ to denote this vector.
+
+ In [28], we introduced a family of privatization schemes which are parameterized by the integer $d \in \{1, 2, \dots, k-1\}$. Given $k$ and $d$, let the output alphabet be $\mathcal{Y}_{k,d} = \{y \in \{0,1\}^k : \sum_{i=1}^k y_i = d\}$, so $|\mathcal{Y}_{k,d}| = \binom{k}{d}$.
+
+ **Definition 2.2** ([28]). Consider the following privatization scheme:
+
+ $$ Q_{k,\epsilon,d}(y|i) = \frac{e^{\epsilon}y_i + (1-y_i)}{\binom{k-1}{d-1}e^{\epsilon} + \binom{k-1}{d}} \quad (6) $$
+
+ for all $y \in \mathcal{Y}_{k,d}$ and all $i \in \mathcal{X}$. The corresponding empirical estimator of $\mathbf{p}$ under $Q_{k,\epsilon,d}$ is defined as follows: For $y^n = (y^{(1)}, y^{(2)}, \dots, y^{(n)}) \in \mathcal{Y}_{k,d}^n$,
+
+ $$ \hat{p}_i(y^n) = \left( \frac{(k-1)e^\epsilon + \frac{(k-1)(k-d)}{d}}{(k-d)(e^\epsilon - 1)} \right) \frac{t_i(y^n)}{n} - \frac{(d-1)e^\epsilon + k - d}{(k-d)(e^\epsilon - 1)}, \quad i \in [k] \quad (7) $$
+
+ where $t_i(y^n) = \sum_{j=1}^n y_i^{(j)}$ is the number of privatized samples whose $i$-th coordinate is 1.
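The scheme (6) and the estimator (7) are easy to simulate. In the sketch below (our own code, with symbols relabeled `0, ..., k-1`), a privatized sample is drawn by first deciding whether the true symbol enters the size-$d$ subset, with probability $\binom{k-1}{d-1}e^\epsilon / (\binom{k-1}{d-1}e^\epsilon + \binom{k-1}{d})$, and then filling the remaining slots uniformly:

```python
import math
import random

def subset_mechanism(i, k, eps, d, rng=random):
    """Draw one privatized sample from Q_{k,eps,d} of (6): a size-d subset
    of {0, ..., k-1}; subsets containing the true symbol i are e^eps times
    as likely as subsets that do not contain it."""
    w_in = math.comb(k - 1, d - 1) * math.exp(eps)
    p_in = w_in / (w_in + math.comb(k - 1, d))
    others = [j for j in range(k) if j != i]
    if rng.random() < p_in:
        return {i} | set(rng.sample(others, d - 1))
    return set(rng.sample(others, d))

def empirical_estimator(samples, k, eps, d):
    """Estimator (7): a linear correction of the per-symbol inclusion
    counts t_i (how often symbol i appeared in the reported subsets)."""
    n = len(samples)
    t = [0] * k
    for s in samples:
        for j in s:
            t[j] += 1
    e = math.exp(eps)
    a = ((k - 1) * e + (k - 1) * (k - d) / d) / ((k - d) * (e - 1))
    b = ((d - 1) * e + k - d) / ((k - d) * (e - 1))
    return [a * t[j] / n - b for j in range(k)]
```

The estimator is unbiased: the inclusion probability of symbol $i$ is linear in $p_i$, and (7) inverts that linear map; its output also sums to 1 by construction.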
+
+ Some papers [3] call $Q_{k,\epsilon,d}$ the *Subset Selection* mechanism. It is easy to verify that $Q_{k,\epsilon,d}$ is $\epsilon$-locally differentially private. The worst-case estimation loss under $Q_{k,\epsilon,d}$ and the empirical estimator is calculated in the following proposition.
+
+ **Proposition 2.3.** [28, Prop. 4-5] Let $\mathbf{Q} = Q_{k,\epsilon,d}$ and suppose that the empirical estimator $\hat{\mathbf{p}}$ is given by (7). Let $\mathbf{m} = \mathbf{p}Q_{k,\epsilon,d}$. The estimation loss $\mathbb{E}_{Y^n \sim m^n} \ell_2^2(\hat{\mathbf{p}}(Y^n), \mathbf{p})$ is maximized for the uniform distribution $\mathbf{p}_U$, and
+
+ $$ r_{k,n}^{\ell_2^2}(\boldsymbol{Q}_{k,\epsilon,d}, \hat{\boldsymbol{p}}) = \mathop{\mathbb{E}}_{Y^n \sim m_U^n} \ell_2^2(\hat{\boldsymbol{p}}(Y^n), \boldsymbol{p}_U) = \frac{(k-1)^2}{nk(e^\epsilon - 1)^2} \frac{(de^\epsilon + k - d)^2}{d(k-d)}, \quad (8) $$
+
+ where $m_U = p_U Q_{k,\epsilon,d}$.
+
+ It is clear that the smallest value of the risk $r$ is obtained by optimizing over $d$ in (8). Namely, given $k$ and $\epsilon$, let
+
+ $$ d^* = d^*(k, \epsilon) := \operatorname*{argmin}_{1 \le d \le k-1} \frac{(de^{\epsilon} + k - d)^2}{d(k-d)}, \quad (9) $$
samples/texts/2675357/page_7.md ADDED
@@ -0,0 +1,38 @@
+ where the ties are resolved arbitrarily. We find that $d^*$ takes one of the following two values:
+
+ $$d^* = \lfloor k/(e^\epsilon + 1) \rfloor \text{ or } \lceil k/(e^\epsilon + 1) \rceil.$$
+
+ Therefore, when $k/(e^\epsilon + 1) \le 1$, $d^* = 1$, and when $k/(e^\epsilon + 1) > 1$, the value of $d^*$ can be determined by simple comparison.
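Since the candidate set is finite, $d^*$ can also be found by brute force; a one-line minimization (our own helper) confirms the floor/ceiling characterization:

```python
import math

def d_star(k, eps):
    """Return a minimizer of (d e^eps + k - d)^2 / (d (k - d)) over
    d in {1, ..., k-1}, as in (9); it always equals the floor or the
    ceiling of k / (e^eps + 1), clipped to at least 1."""
    e = math.exp(eps)
    return min(range(1, k), key=lambda d: (d * e + k - d) ** 2 / (d * (k - d)))
```

For example, with $k = 10$ and $\epsilon = 1$ we have $k/(e^\epsilon+1) \approx 2.69$, and comparing the objective at $d = 2$ and $d = 3$ shows $d^* = 3$.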
+
+ As a consequence of Prop. 2.3 we find that
+
+ $$r_{k,n}^{\ell_2^2}(\mathbf{Q}_{k,\epsilon,d^*}, \hat{\mathbf{p}}) = \min_{1 \le d \le k-1} r_{k,n}^{\ell_2^2}(\mathbf{Q}_{k,\epsilon,d}, \hat{\mathbf{p}}).$$
+
+ While in [28] we proved the above results for the mean-square loss (and a similar claim for $\ell = \ell_1$), in this paper we show that they apply more universally. Namely, let
+
+ $$M(k, \epsilon) := \frac{(k-1)^2}{k^2(e^\epsilon - 1)^2} \frac{(d^*e^\epsilon + k - d^*)^2}{d^*(k-d^*)}, \quad (10)$$
+
+ and note that $r_{k,n}^{\ell_2^2}(\mathbf{Q}_{k,\epsilon,d^*}, \hat{\mathbf{p}}) = \frac{k}{n}M(k, \epsilon)$. In this paper we show that the quantity $M(k, \epsilon)$ bounds below the main term of the minimax risk for all loss functions $\ell_u^u, u \ge 1$.
+
+ ## 3. Main result of the paper
+
+ Our main result is that the scheme $\mathbf{Q}_{k,\epsilon,d^*}$ and the empirical estimator $\hat{\mathbf{p}}$ defined by (7) are universally optimal for all loss functions $\ell_u^u, 1 \le u \le 2$. Namely, the following is true.
+
+ **Theorem 3.1.** Let $k = |\mathcal{X}|$, let $\epsilon > 0, 1 \le u \le 2$. Then
+
+ $$\lim_{n \to \infty} \frac{r_{k,n}^{\ell_u^u}(\mathbf{Q})}{r_{k,n}^{\ell_u^u}(\mathbf{Q}_{k,\epsilon,d^*}, \hat{\mathbf{p}})} \ge 1 \quad \text{for all } \mathbf{Q} \in \mathcal{D}_\epsilon.$$
+
+ This theorem is a consequence of two results which we state next.
+ Let $X \sim N(0, 1)$ and define the constant
+
+ $$C_u := E|X|^u = 2^{u/2}\Gamma((u+1)/2)/\sqrt{\pi} \quad \text{for } u > 0.$$
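The closed form for $C_u$ follows from the Gamma integral for the absolute moments of a Gaussian; a quick numerical sanity check (our own snippet, not part of the paper):

```python
import math

def C(u):
    """u-th absolute moment of a standard normal:
    E|X|^u = 2^{u/2} * Gamma((u+1)/2) / sqrt(pi)."""
    return 2 ** (u / 2) * math.gamma((u + 1) / 2) / math.sqrt(math.pi)
```

For instance, $C_1 = \sqrt{2/\pi}$ (the mean absolute deviation) and $C_2 = 1$ (the variance), and $C_u$ increases on $[1, 2]$.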
+
+ **Theorem 3.2.** For any $\epsilon > 0$, any $u \ge 1$, and any mechanism $\mathbf{Q} \in \mathcal{D}_\epsilon$
+
+ $$\lim_{n \to \infty} r_{k,n}^{\ell_u^u}(\mathbf{Q}) n^{u/2} \geq k C_u M(k, \epsilon)^{u/2}. \quad (11)$$
+
+ Note that this lower bound holds for any loss function $\ell_u^u, u \ge 1$. The proof of this theorem is given in Section 4.
+
+ **Theorem 3.3.** Consider the privatization scheme $\mathbf{Q} = \mathbf{Q}_{k,\epsilon,d^*}$ and let $\hat{\mathbf{p}}$ be the empirical estimator given by (7). For every $k$ and $\epsilon$ and every $0 < u \le 2$,
+
+ $$r_{k,n}^{\ell_u^u}(\mathbf{Q}_{k,\epsilon,d^*}, \hat{\mathbf{p}}) = \frac{k}{n^{u/2}} C_u M(k, \epsilon)^{u/2} + o(n^{-u/2}).$$
samples/texts/2675357/page_8.md ADDED
@@ -0,0 +1,41 @@
+ The proof of this theorem is given in Section 5. Note that, unlike Theorem 3.2, the claim that we make here allows the values of $u \in (0, 1)$. The special cases of Theorem 3.3 for $u = 1$ and $u = 2$ were addressed in our previous paper [28], see in particular Theorem 10.
+
+ The crux of our argument is in the proof of Theorem 3.2, where we reduce the estimation problem in the $k$-dimensional space to a one-dimensional problem. Generally, it is well known that the local minimax risk can be calculated from the inverse of the Fisher information matrix. However, it is difficult to obtain the exact expression of the inverse of a large-size matrix, and without it, the path to the desired estimates is not so clear. To work around this complication, we view a ball in a high-dimensional space as a union of parallel line segments with a certain direction $\mathbf{v}_i$. We first consider the estimation problem on each line segment individually. Since this is a one-dimensional problem, its minimax rate can be easily calculated from the Fisher information of the corresponding parameter. For the estimation of each component $p_i$ of the probability distribution, we choose a suitable direction vector $\mathbf{v}_i$. In this way, we reduce the original $k$-dimensional estimation problem to $k$ one-dimensional estimation problems and then rely on the additivity of the loss function for the final result.
+
+ ## 4. Proof of Theorem 3.2
+
+ ### 4.1. Bayes estimation loss
+
+ In light of (5), to prove Theorem 3.2, it suffices to show that for every $u \ge 1$,
+
+ $$ \lim_{n \to \infty} r_{k,n}^{\ell_u^u}(\mathbf{Q}) n^{u/2} \ge k C_u M(k, \epsilon)^{u/2} \quad \text{for all } \mathbf{Q} \in \mathcal{D}_{\epsilon,E}. \quad (12) $$
+
+ Since the worst-case estimation loss is always lower bounded by the average estimation loss, the minimax risk $r_{k,n}^{\ell_u^u}(\mathbf{Q})$ can be bounded below by the Bayes estimation loss. More specifically, we assume that $\boldsymbol{p} = (p_1, p_2, \ldots, p_k)$ is drawn uniformly from
+
+ $$ \mathcal{P} := \left\{ \boldsymbol{p} \in \Delta_k : \| \boldsymbol{p} - \boldsymbol{p}_U \|_2 \le \frac{D}{\sqrt{n}} \right\}, \quad (13) $$
+
+ where $D \gg 1$ is a constant. Let $\mathbf{P} = (P_1, P_2, \dots, P_k)$ denote the random vector that corresponds to $\mathbf{p}$. For a given privatization scheme $\mathbf{Q}$ and the corresponding estimator $\hat{\mathbf{p}} := (\hat{p}_1, \hat{p}_2, \dots, \hat{p}_k)$, the $\ell_u^u$ Bayes estimation loss is defined as
+
+ $$
+ \begin{align*}
+ r_{\text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}) &:= \underset{\mathbf{P} \sim \text{Unif}(\mathcal{P})}{\mathbb{E}} \left[ \underset{Y^n \sim (\mathbf{P}\mathbf{Q})^n}{\mathbb{E}} \ell_u^u(\hat{\mathbf{p}}(Y^n), \mathbf{P}) \right] \\
+ &= \sum_{i=1}^k \left( \underset{\mathbf{P} \sim \text{Unif}(\mathcal{P})}{\mathbb{E}} \left[ \underset{Y^n \sim (\mathbf{P}\mathbf{Q})^n}{\mathbb{E}} |\hat{p}_i(Y^n) - P_i|^u \right] \right),
+ \end{align*}
+ $$
+
+ and the optimal Bayes estimation loss for $\mathbf{Q}$ is
+
+ $$ r_{\text{Bayes}}^{\ell_u^u}(\mathbf{Q}) := \inf_{\hat{\mathbf{p}}} r_{\text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}). $$
samples/texts/2675357/page_9.md ADDED
@@ -0,0 +1,35 @@
+ We further define the component-wise Bayes estimation loss for $\mathbf{Q}$ and $\hat{\mathbf{p}}$
+
+ $$r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{p}_i) := \underset{\mathbf{P} \sim \text{Unif}(\mathcal{P})}{\mathbb{E}} \left[ \underset{Y^n \sim (\mathbf{P}\mathbf{Q})^n}{\mathbb{E}} |\hat{p}_i(Y^n) - P_i|^u \right], \quad i \in [k],$$
+
+ and the optimal component-wise Bayes estimation loss for $\mathbf{Q}$
+
+ $$r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}) := \inf_{\hat{p}_i} r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{p}_i), \quad i \in [k].$$
+
+ Therefore,
+
+ $$r_{\text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}) = \sum_{i=1}^{k} r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{p}_i), \quad r_{\text{Bayes}}^{\ell_u^u}(\mathbf{Q}) = \sum_{i=1}^{k} r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}).$$
+
+ As mentioned above,
+
+ $$r_{k,n}^{\ell_u^u}(\mathbf{Q}) \geq r_{\text{Bayes}}^{\ell_u^u}(\mathbf{Q}) = \sum_{i=1}^{k} r_{i,\text{Bayes}}^{\ell_u^u}(\mathbf{Q}).$$
+
+ We will prove (12) by showing that
+
+ $$\sum_{i=1}^{k} r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}) \geq \frac{k}{n^{u/2}} C_u M(k, \epsilon)^{u/2} - o(n^{-u/2}) \quad \text{for all } \mathbf{Q} \in \mathcal{D}_{\epsilon, E}. \quad (14)$$
+
+ ### 4.2. Lower bound on one-dimensional Bayes estimation loss
+
+ Below we will prove a lower bound on $r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q})$. To this end, in this section we consider a one-dimensional Bayes estimation problem. Define the following vectors:
+
+ $$v_i := \left(-\frac{1}{k-1}, \dots, -\frac{1}{k-1}, 1, -\frac{1}{k-1}, \dots, -\frac{1}{k-1}\right), \quad i \in [k], \quad (15)$$
+
+ where the 1 is in the *i*th position and all the other coordinates are $-\frac{1}{k-1}$. Let $\mathbf{p}^* := (p_1^*, p_2^*, \dots, p_k^*) \in \Delta_k$ be a probability distribution and let $S_i(\mathbf{p}^*)$ be a line segment with midpoint $\mathbf{p}^*$ and direction vector $v_i$:
+
+ $$S_i(\mathbf{p}^*) := \left\{ \mathbf{p}^* + s v_i : |s| \leq \frac{D'}{\sqrt{n}} \right\}, \quad i \in [k], \quad (16)$$
+
+ where $D' \gg 1$ is a constant. Let $\mathbf{p} = (p_1, \dots, p_k)$ be a PMF in the segment $S_i(\mathbf{p}^*)$. Given the value $p_i$, we can find all the other components of $\mathbf{p}$ as follows:
+
+ $$p_v = p_v^* - \frac{1}{k-1}(p_i - p_i^*) \quad \text{for all } v \neq i. \quad (17)$$
+
+ Assume that $\mathbf{p} = (p_1, p_2, \dots, p_k)$ is drawn uniformly from $S_i(\mathbf{p}^*)$, and we consider the Bayes estimation of $p_i$ from the privatized samples $Y^n$ obtained from
samples/texts/2875771/page_1.md ADDED
@@ -0,0 +1,26 @@
+ # Simple Equilibria in General Contests*
+
+ Spencer Bastani†
+
+ Thomas Giebe‡
+
+ Oliver Gürtler§
+
+ First version: December 3, 2019
+ This version: May 17, 2021
+
+ ## Abstract
+
+ We show how symmetric equilibria emerge in general two-player contests in which skill and effort are combined to produce output according to a general production technology and players have skills drawn from different distributions. We also show how contests with heterogeneous production technologies, cost functions and prizes can be analyzed in a surprisingly simple manner using a transformed contest that has a symmetric equilibrium. Our paper provides intuition regarding how the contest components interact to determine the incentive to exert effort, sheds new light on classic comparative statics results, and discusses the implications for the optimal composition of teams.
+
+ **Keywords:** contest theory, symmetric equilibrium, heterogeneity, risk, stochastic dominance
+
+ **JEL classification:** C72, D74, D81, J23, M51
+
+ *An earlier working paper version of this paper was circulated under the title "A General Framework for Studying Contests". We thank Peter Cramton, Qiang Fu, Stephan Lauermann, Mark Le Quement, Johannes Münster, Christoph Schottmüller, Dirk Sliwka, Lennart Struth, Zhenda Yin, seminar participants at the University of Cologne, the University of East Anglia, the Berlin-Munich Behavioral Seminar, conference participants at the EALE SOLE AASLE World Conference 2020, the CMID20 Conference on Mechanism and Institution Design in Klagenfurt, and the 2020 Annual Meeting of the Verein für Socialpolitik for helpful comments. All authors gratefully acknowledge financial support from the Jan Wallander and Tom Hedelius Foundation (grant no. P18-0208). Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2126/1 390838866.
+
+ †Institute for Evaluation of Labour Market and Education Policy (IFAU) and Research Institute of Industrial Economics (IFN); Uppsala Center for Fiscal Studies; Uppsala Center for Labor Studies; CESifo. E-mail: spencer.bastani@ifau.uu.se.
+
+ ‡Department of Economics and Statistics, School of Business and Economics, Linnaeus University, Sweden. E-mail: thomas.giebe@lnu.se.
+
+ §Department of Economics, University of Cologne, Germany. E-mail: oliver.guentler@uni-koeln.de
samples/texts/2875771/page_10.md ADDED
@@ -0,0 +1,35 @@
+ # 4 Equilibrium Characterization
+
+ We focus on pure-strategy Nash equilibria in which both players choose the same level of effort. The following lemma provides a sufficient condition for such a symmetric equilibrium to exist.
+
+ **Lemma 1.** A sufficient condition for a symmetric equilibrium to exist is that $\frac{\partial P_i(e_i, e_k)}{\partial e_i}|_{e_i=e_k=e}$ is the same for $i, k \in \{1,2\}, i \neq k$, and all $e \in \text{int } E$.
+
+ *Proof.* See Appendix A.1. □
+
+ We will make use of Lemma 1 to prove the existence of a symmetric equilibrium by checking the sufficient condition. Since this condition depends on the winning probability, we need to specify this probability first. For each $e > 0$, we define the function $g_e: \mathbb{R} \to \mathbb{R}$ by $g_e(x) = g(x,e)$. The function $g_e(x)$ is strictly increasing in $x$ and thus invertible, and we denote the (strictly increasing) inverse by $g_e^{-1}$. This notation can be motivated by the fact that the event of player $i$ winning over player $k$ can be written as
+
+ $$
+ \begin{align*}
+ & g(\theta_k, e_k) < g(\theta_i, e_i) \\
+ \Leftrightarrow & g_{e_k}(\theta_k) < g_{e_i}(\theta_i) \\
+ \Leftrightarrow & \theta_k < g_{e_k}^{-1}(g_{e_i}(\theta_i)).
+ \end{align*}
+ $$
+
+ Considering all potential realizations of $\Theta_i$ and $\Theta_k$, the winning probability of player $i$ is
+
+ $$ P_i(e_i, e_k) = \int_{\mathbb{R}} F_k(g_{e_k}^{-1}(g_{e_i}(x))) f_i(x) dx. $$
+
+ By symmetry, the winning probability of player $k$ is
+
+ $$ P_k(e_i, e_k) = \int_{\mathbb{R}} F_i(g_{e_i}^{-1}(g_{e_k}(x))) f_k(x) dx. $$
+
+ The derivative of player $i$'s winning probability with respect to $e_i$ is given by:¹⁴
+
+ $$ \frac{\partial P_i(e_i, e_k)}{\partial e_i} = \int_{\mathbb{R}} f_k(g_{e_k}^{-1}(g_{e_i}(x))) \frac{d}{de_i} (g_{e_k}^{-1}(g_{e_i}(x))) f_i(x) dx. \quad (1) $$
+
+ The derivative of player $k$'s winning probability with respect to $e_k$ is given by:
+
+ $$ \frac{\partial P_k(e_i, e_k)}{\partial e_k} = \int_{\mathbb{R}} f_i(g_{e_i}^{-1}(g_{e_k}(x))) \frac{d}{de_k} (g_{e_i}^{-1}(g_{e_k}(x))) f_k(x) dx. \quad (2) $$
+
+ ¹⁴Notice that $F_k$ is differentiable almost everywhere, since it is the cdf of the absolutely continuous random variable $\Theta_k$ with $f_k$ as the corresponding pdf.
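The winning-probability formula above can be sanity-checked by simulation. A small Monte Carlo sketch (our own example, not from the paper) for the multiplicative technology $g(\theta, e) = \theta e$ with hypothetical skill distributions $\Theta_i \sim \text{Exp}(1)$ and $\Theta_k \sim \text{Exp}(2)$: here $g_{e_k}^{-1}(g_{e_i}(x)) = x e_i / e_k$, and the integral has the closed form $P_i(e_i, e_k) = 1 - 1/(1 + 2 e_i/e_k)$.

```python
import random

def win_prob_mc(e_i, e_k, n=200000, seed=0):
    """Monte Carlo estimate of P_i(e_i, e_k) for g(theta, e) = theta * e
    with Theta_i ~ Exp(rate 1) and Theta_k ~ Exp(rate 2): player i wins
    whenever Theta_k * e_k < Theta_i * e_i."""
    rng = random.Random(seed)
    wins = sum(rng.expovariate(2.0) * e_k < rng.expovariate(1.0) * e_i
               for _ in range(n))
    return wins / n
```

With equal efforts the estimate is close to $2/3$, the closed-form value at $e_i = e_k$, illustrating that a skill advantage alone already tilts the contest.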
samples/texts/2875771/page_13.md ADDED
@@ -0,0 +1,31 @@
+ $\theta_1 e_1 / e_2$. The probability of that event, and its first derivative with respect to $e_1$, are
+
+ $$P_1(e_1, e_2) = \int_{-\infty}^{\infty} F_2\left(x \frac{e_1}{e_2}\right) f_1(x) dx,$$
+
+ $$\frac{\partial P_1(e_1, e_2)}{\partial e_1} = \int_{-\infty}^{\infty} f_2\left(x \frac{e_1}{e_2}\right) \left(\frac{x}{e_2}\right) f_1(x) dx.$$
+
+ The first-order condition of player 1's maximization problem is
+
+ $$\frac{\partial P_1(e_1, e_2)}{\partial e_1} V = e_1.$$
+
+ In a symmetric equilibrium with $e_1 = e_2 = e$, this can now be written as
+
+ $$V \int_{-\infty}^{\infty} f_2(x) x f_1(x) dx = e^2.$$
+
+ For player 2 we obtain the same expression. Using our distributional assumptions, the left-hand side becomes
+
+ $$V \int_{-\infty}^{\infty} f_2(x) x f_1(x) dx = V \int_{1}^{2} \frac{x}{\pi(1+x^2)} dx = V \frac{1}{2\pi} \log \left(\frac{5}{2}\right).$$
+
+ We thus have a symmetric equilibrium, and the corresponding effort is $e^* = \sqrt{\frac{V \log(\frac{5}{2})}{2\pi}} \approx 0.38\sqrt{V}$.
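The closed-form value of the integral, and the resulting equilibrium effort, can be checked numerically; a short sketch (our own code), consistent with the densities implied by the integral above (one skill uniform on $[1,2]$, the other standard Cauchy):

```python
import math

def lhs_integral(steps=100000):
    """Midpoint rule for the left-hand-side integral of x / (pi (1 + x^2))
    over [1, 2]; the closed form is log(5/2) / (2 pi)."""
    h = 1.0 / steps
    total = 0.0
    for j in range(steps):
        x = 1.0 + (j + 0.5) * h
        total += x / (math.pi * (1.0 + x * x))
    return total * h
```

Taking the square root of $V$ times this value reproduces the stated $e^* \approx 0.38\sqrt{V}$.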
21
+
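The value of this integral, and the resulting effort level, can be confirmed numerically (a verification sketch, not part of the paper):

```python
import math

# Verify  ∫_1^2 x / (π(1+x²)) dx  =  log(5/2) / (2π)  by the midpoint rule.
n = 100000
h = 1.0 / n
integral = h * sum((1 + (j + 0.5) * h) / (math.pi * (1 + (1 + (j + 0.5) * h) ** 2))
                   for j in range(n))
closed_form = math.log(5 / 2) / (2 * math.pi)
assert abs(integral - closed_form) < 1e-9

# Implied symmetric-equilibrium effort for V = 1:
e_star = math.sqrt(closed_form)
assert abs(e_star - 0.382) < 1e-3   # e* ≈ 0.38 · sqrt(V)
```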
22
+ # 5 Generalizations
23
+
24
+ In our previous analysis, the symmetry of the equilibrium was derived under the assumption that the production functions, prizes, and cost functions were the same for the competing players. We show next that these assumptions can, under certain conditions, be relaxed. This allows us to deal with heterogeneity between players beyond heterogeneity in skill distributions in a surprisingly simple manner. At the end of the section, we also provide an illustrative example that combines heterogeneity in skill distributions, prizes, cost functions, and production functions.
25
+
26
+ We begin by allowing for heterogeneity in production technologies,
27
+ using the observation that, in some situations, different production functions can be
28
+ reinterpreted as different skill distributions. This allows us to show that a symmetric
29
+ equilibrium exists using the results of Theorem 1.
30
+
31
+ **Corollary 1.** *Suppose that the production functions are different for the two competing*
samples/texts/2875771/page_16.md ADDED
@@ -0,0 +1,27 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ $e_2^*$ being determined by
2
+
3
+ $$ \int_{\mathbb{R}} \tilde{f}_1(x) \tilde{f}_2(x) x dx V = e_2^* c'(e_2^*), $$
4
+
5
+ where $\tilde{f}_1$ and $\tilde{f}_2$ denote the pdfs of the random variables $\tilde{\Theta}_1 := s^{\frac{1}{\delta}} \Theta_1^{\frac{\alpha}{\beta}}$ and $\tilde{\Theta}_2 := \Theta_2^{\frac{\alpha}{\beta}}$, respectively.
6
+
7
+ (ii) If the production technology is given by $g(\theta,e) = \alpha\theta + \beta e$, with $\alpha, \beta > 0$, and $\tilde{c} := c \circ \ln$ is homogeneous of degree $\delta > 0$, an equilibrium exists with efforts given by $e_1^* = \ln(s^{\frac{1}{\delta}} \tilde{e}_2^*)$ and $e_2^* = \ln(\tilde{e}_2^*)$, where $\tilde{e}_2^*$ is determined by
8
+
9
+ $$ \int_{\mathbb{R}} \tilde{f}_1(x) \tilde{f}_2(x) x dx V = \tilde{e}_2^* \tilde{c}'(\tilde{e}_2^*), $$
10
+
11
+ and $\tilde{f}_1$ and $\tilde{f}_2$ denote the pdfs of the random variables $\tilde{\Theta}_1 := \exp(\frac{\alpha}{\beta}\Theta_1)s^{1/\delta}$ and $\tilde{\Theta}_2 := \exp(\frac{\alpha}{\beta}\Theta_2)$, respectively.
12
+
13
+ *Proof.* See Appendix A.5. □
14
+
15
+ We conclude this section by presenting an example in which players, in addition to having different skill distributions as in our baseline case, have different production functions and cost functions, and face different prizes. This example is equivalent to a Tullock lottery contest with heterogeneous prizes and quadratic effort costs.
16
+
17
+ **Example 1.** Suppose that:
18
+
19
+ $$ g_1(\theta_1, e_1) = \frac{\theta_1 e_1}{2}, \quad c_1(e_1) = e_1^2, \quad V_1 = \frac{V}{2}, \quad f_1(x) = 2 \frac{\exp(-2x^{-1})}{x^2} I_{\{x>0\}}, $$
20
+
21
+ $$ g_2(\theta_2, e_2) = \theta_2 e_2, \quad c_2(e_2) = \frac{e_2^2}{2}, \quad V_2 = V, \quad f_2(x) = \frac{\exp(-x^{-1})}{x^2} I_{\{x>0\}}, $$
22
+
23
+ implying that player 2 has a more efficient production technology, lower cost of exerting effort and faces a higher prize. Define $\tilde{\Theta}_1 := \frac{\Theta_1}{2}$, and denote by $\bar{f}_1(x) = \frac{\exp(-x^{-1})}{x^2} I_{\{x>0\}}$ and $\bar{F}_1(t) = \int_{-\infty}^{t} \frac{\exp(-x^{-1})}{x^2} I_{\{x>0\}} dx$ the corresponding pdf and cdf.
24
+
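As a quick check (not part of the paper), the change-of-variables formula confirms that $\bar{f}_1$ is indeed the pdf of $\tilde{\Theta}_1 = \frac{\Theta_1}{2}$: the pdf of $\Theta_1/2$ evaluated at $x$ is $2 f_1(2x)$.

```python
import math

# pdf of Theta_1 and the claimed pdf of Theta_1 / 2:
f1 = lambda x: 2 * math.exp(-2 / x) / x ** 2      # f_1(x)
f1_bar = lambda x: math.exp(-1 / x) / x ** 2      # f̄_1(x)

# Change of variables: pdf of Theta_1/2 at x equals 2 * f_1(2x).
for x in [0.1, 0.5, 1.0, 3.0, 10.0]:
    assert abs(2 * f1(2 * x) - f1_bar(x)) < 1e-12
```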
25
+ The objective function of player 1 can then be stated as:
26
+
27
+ $$ \int_{\mathbb{R}} F_2 \left( \frac{e_1 x}{e_2} \right) \bar{f}_1(x) dx \frac{V}{2} - e_1^2 = 2 \left( \int_{\mathbb{R}} F_2 \left( \frac{e_1 x}{e_2} \right) \bar{f}_1(x) dx \frac{V}{4} - \frac{e_1^2}{2} \right). $$
samples/texts/2875771/page_17.md ADDED
@@ -0,0 +1,37 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ The objective function of player 2 can be stated as:
2
+
3
+ $$
4
+ \int_{\mathbb{R}} \bar{F}_1 \left( \frac{e_2 x}{e_1} \right) f_2(x) dxV - \frac{e_2^2}{2}.
5
+ $$
6
+
7
+ Hence, we have transformed the original contest into a contest with different prizes $\tilde{V}_1 := \frac{V}{4}$ and $V_2 = V$, but identical production functions and cost functions. According to part (i) of Proposition 2 (noting that $s = \frac{1}{4}$ and $\delta = 2$ in the transformed contest), an equilibrium exists with efforts given by $e_1^* = \frac{e_2^*}{\sqrt{4}} = \frac{e_2^*}{2}$ where $e_2^*$ is determined by
8
+
9
+ $$
10
+ V \int_{\mathbb{R}} \tilde{f}_1(x) f_2(x) x dx = (e_2^*)^2 \iff e_2^* = \sqrt{V \int_{\mathbb{R}} \tilde{f}_1(x) f_2(x) x dx},
11
+ $$
12
+
13
+ with $\tilde{f}_1(x) = \frac{\exp(-\frac{1}{2}x^{-1})}{2x^2} I_{\{x>0\}}$ being the pdf corresponding to the random variable $\frac{\tilde{\Theta}_1}{2} = \frac{\Theta_1}{4}$. Using the specific density functions, we obtain
14
+
15
+ $$
16
+ e_2^* = \sqrt{V \int_0^\infty \frac{\exp\left(-\frac{1}{2}x^{-1}\right)}{2x^2} \frac{\exp(-x^{-1})}{x^2} x dx} = \sqrt{V \int_0^\infty \frac{\exp\left(-\frac{3}{2}x^{-1}\right)}{2x^3} dx} = \sqrt{\frac{2V}{9}}.
17
+ $$
18
+
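The substitution $u = x^{-1}$ turns the last integral into $\int_0^\infty \frac{u}{2} e^{-\frac{3}{2}u} du = \frac{1}{2}\left(\frac{2}{3}\right)^2 = \frac{2}{9}$; a numerical double-check (not part of the paper):

```python
import math

# After u = 1/x, ∫_0^∞ exp(-(3/2)x⁻¹)/(2x³) dx becomes
# ∫_0^∞ (u/2)·exp(-(3/2)u) du = 2/9; verify with the midpoint rule.
n, hi = 200000, 40.0
h = hi / n
integral = h * sum(((j + 0.5) * h / 2) * math.exp(-1.5 * (j + 0.5) * h)
                   for j in range(n))
assert abs(integral - 2 / 9) < 1e-8

e2_star = math.sqrt(2 * 1.0 / 9)   # e₂* = sqrt(2V/9) with V = 1
assert abs(e2_star - 0.4714) < 1e-3
```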
19
+ Notice that the same result as in Example 1 would be obtained by directly solving the
20
+ Tullock contest with different prizes and quadratic costs, in which players maximize the
21
+ objectives $\frac{e_1}{e_1+e_2} \frac{V}{4} - \frac{e_1^2}{2}$ and $\frac{e_2}{e_1+e_2} V - \frac{e_2^2}{2}$, respectively.
22
+
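This equivalence is easy to confirm: plugging $e_2^* = \sqrt{2V/9}$ and $e_1^* = e_2^*/2$ into the first-order conditions of the two Tullock objectives gives zero residuals (a verification sketch, not part of the paper):

```python
import math

V = 1.0
e2 = math.sqrt(2 * V / 9)   # candidate equilibrium effort of player 2
e1 = e2 / 2                 # candidate equilibrium effort of player 1
s2 = (e1 + e2) ** 2

# FOCs of the objectives e1/(e1+e2)·V/4 − e1²/2 and e2/(e1+e2)·V − e2²/2:
foc1 = e2 * V / (4 * s2) - e1
foc2 = e1 * V / s2 - e2
assert abs(foc1) < 1e-12 and abs(foc2) < 1e-12
```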
23
+ # 6 Comparative Statics Results
24
+
25
+ In this section, we investigate the consequences of player heterogeneity, in terms of the
26
+ statistical properties of the skill distributions of the competing players, on the incentive
27
+ to exert effort. To facilitate the derivation of these results, we define $r_{e,i}: \mathbb{R} \rightarrow \mathbb{R}$ given by
28
+ $r_{e,i}(x) = a_e(x)f_i(x)$. Equation (4) can thus be written as:
29
+
30
+ $$
31
+ V \int_{\mathbb{R}} r_{e*,i}(x) f_k(x) dx = c'(e^*). \qquad (5)
32
+ $$
33
+
34
+ The integral now has the same structure as a decision maker's expected utility in
35
+ decision theory (e.g., Levy 1992), where the function $r_{e,i}$ corresponds to the decision maker's
36
+ utility function. As we will see, this link proves useful in deriving several key results.
37
+ We also need one additional assumption:
samples/texts/2875771/page_19.md ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Due to Assumption 2, equilibrium effort increases if a change in the primitives of the model leads to an increase in $\int_{\mathbb{R}} r_{e,i}(x)f_k(x)dx$. As indicated before, this expression has the same structure as a decision maker's expected utility in decision theory, where the function $r_{e,i}$ is replaced by the decision maker's utility function. Since the structure of the problems is the same, we can make extensive use of results from decision theory in our analysis. We obtain the following proposition.
2
+
3
+ **Proposition 3.** Consider two contests with skill distributions $(\tilde{F}_k, F_i)$ and $(F_k, F_i)$, where $\text{supp}(\tilde{f}_k)$ and $\text{supp}(f_k)$ are both subsets of $\text{supp}(f_i)$. Let $\tilde{e}^*$ and $e^*$ denote, respectively, the (symmetric) equilibrium efforts associated with these contests. Then, $\tilde{e}^* > e^*$ if either one of the following statements holds:
4
+
5
+ (i) $r_{e,i}(x)$ is strictly increasing for all $x \in \text{supp}(f_i)$ and all $e \ge 0$, and $\tilde{F}_k$ dominates $F_k$ in the sense of first-order stochastic dominance.
6
+
7
+ (ii) $r_{e,i}(x)$ is strictly decreasing for all $x \in \text{supp}(f_i)$ and all $e \ge 0$, and $\tilde{F}_k$ is dominated by $F_k$ in the sense of first-order stochastic dominance.
8
+
9
+ *Proof.* See Appendix A.6. □
10
+
11
+ Note that Proposition 3 holds independently of whether $\mu_k \le \mu_i$ or $\mu_k \ge \mu_i$. Combining Definition 1 with Proposition 3, we have the following corollary.¹⁷
12
+
13
+ **Corollary 2.** *Effort can be higher when contestants are more heterogeneous in a first-order sense.*
14
+
15
+ We illustrate the intuition behind Proposition 3 and Corollary 2 through two examples. In each example, we start from a situation of equal expected skills, and then introduce a first-order stochastic dominance shift. In the first example, which has a somewhat simpler intuition than the second, $r_{e,i}(x)$ is strictly decreasing and effort gets higher as player $k$ becomes weaker, illustrating part (ii) of Proposition 3. In the second example, $r_{e,i}(x)$ is strictly increasing and effort gets higher as player $k$ becomes stronger, illustrating part (i) of Proposition 3.
16
+
17
+ **Example 2.** Suppose that $g(\theta, e) = \theta + e$, $\Theta_i \sim \text{Exp}(\frac{4}{3})$, $\Theta_k \sim U[\frac{1}{2}, 1]$, $\tilde{\Theta}_k \sim U[\frac{7}{16}, \frac{15}{16}]$, $c(e) = \frac{e^2}{2}$, $V = 1$. Then $e^* = \frac{2(\exp(\frac{2}{3})-1)}{\exp(\frac{4}{3})} \approx 0.499$ and $\tilde{e}^* = \frac{2(\exp(\frac{2}{3})-1)}{\exp(\frac{5}{4})} \approx 0.543$.
18
+
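The efforts in Example 2 follow from the symmetric first-order condition $e = V \int f_k(x) f_i(x) dx$ (the additive technology gives $a_e(x) = 1$, and $c'(e) = e$); a numerical check, not part of the paper:

```python
import math

lam = 4 / 3                                  # rate of Theta_i ~ Exp(4/3)
f_i = lambda x: lam * math.exp(-lam * x)     # pdf of player i's skill

def effort(a, b, V=1.0):
    """Symmetric-equilibrium effort when Theta_k ~ U[a, b]:
    with g(θ, e) = θ + e and c(e) = e²/2, the FOC is e = V ∫ f_k f_i dx."""
    n = 100000
    h = (b - a) / n
    dens = 1.0 / (b - a)                     # uniform density of Theta_k
    return V * h * sum(dens * f_i(a + (j + 0.5) * h) for j in range(n))

assert abs(effort(0.5, 1.0) - 0.499) < 1e-3            # e*
assert abs(effort(7 / 16, 15 / 16) - 0.543) < 1e-3     # ẽ* (Theta_k shifted left)
```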
19
+ ¹⁷There is one small caveat to Corollary 2 that we should mention. If equilibrium effort increases as contestants become more heterogeneous, then a symmetric equilibrium in which both players exert positive effort will fail to exist if the heterogeneity between players becomes too large. The reason is that the weaker player would eventually receive a negative payoff, meaning that this player would prefer to choose zero effort.
samples/texts/2875771/page_2.md ADDED
@@ -0,0 +1,9 @@
 
 
 
 
 
 
 
 
 
 
1
+ # 1 Introduction
2
+
3
+ In a contest, two or more players invest effort or other costly resources to win a prize. Many economic interactions can be modeled as a contest. Promotions, for example, represent an important incentive in many firms and organizations. Employees exert effort to perform better than their colleagues and, thus, to be considered for promotion to a more highly paid position. Litigation can also be understood as a contest, in which the different parties spend time and resources to prevail in court. Procurement is a third example, where different firms invest resources into developing a proposal or lobbying politicians, thereby increasing the odds of being selected, receiving some rent in return.
4
+
5
+ Players participating in contests are typically heterogeneous in some respect. For instance, employees differ with respect to their skills, the litigant parties differ with respect to the quality of the available evidence, and firms differ with respect to their capabilities of designing a proposal. When accounting for such heterogeneity in contest models, equilibria are often asymmetric, meaning that players choose different levels of effort. Due to this asymmetry, to keep the analyses tractable, researchers have often imposed rather strict assumptions regarding the production technology and the distributions of stochastic components of the contest.
6
+
7
+ In this paper, we consider a general contest model that allows players to be heterogeneous in terms of their skill distributions, production technologies, prizes, and cost functions. We show that, despite players being heterogeneous, there is often a simple relation between equilibrium efforts. In particular, we identify conditions under which there is a symmetric equilibrium in which players choose the same effort. We further show that some contests that do not fulfill these conditions can be transformed in such a way that the transformed contest has a symmetric equilibrium, enabling us to establish a simple relationship between the asymmetric efforts of the original contest and the symmetric efforts of the transformed contest.
8
+
9
+ The details of our model are as follows. Two players compete for a prize, deciding on their effort. The output of each player, and thereby the player's *production* or *contribution* to the contest, is determined according to a general function of individual effort and the realization of a random variable. The player with the highest output wins the prize. We refer to the random variable as the skill of the player (typically, and equivalently, referred to as noise in the contest theory literature) and the statistical distributions of possible skill realizations are allowed to be different for the competing players. The model is general in terms of the production function and the skill distributions and in-
samples/texts/2875771/page_20.md ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ In Example 2, the first thing to notice is that the additive production technology implies that $a_e(x) = 1$. This further implies that $r_{e,i}(x)$ is strictly decreasing for all relevant $x$, since $f_i(x)$ is the decreasing pdf of the exponential skill distribution. The fact that $a_e(x) = 1$ also implies that the incentive to supply effort, as given by (4), only depends on the collision density $f_k(x)f_i(x)$. Since $f_i(x)$ is decreasing, and $f_k(x)$ is uniform and shifted to the left, the collision density between $\tilde{f}_k$ and $f_i$ is everywhere larger than the collision density between $f_k$ and $f_i$, see Figure 1 for an illustration. Thus, both players have a higher incentive to exert effort. The simple intuition for the example is that the marginal incentive to supply effort for both players is positive only in situations where they have equal skill, and the considered shift in distributions makes such situations unambiguously “more likely” to happen.
2
+
3
+ Figure 1: Illustration of Example 2
4
+
5
+ **Example 3.** Suppose that $g(\theta,e) = \theta \cdot e$, $\Theta_i \sim U[0,1]$, $\Theta_k \sim U[\frac{1}{4}, \frac{3}{4}]$, $\tilde{\Theta}_k \sim U[\frac{5}{16}, \frac{13}{16}]$, $c(e) = \frac{e^2}{2}$, $V = 1$. Then $e^* = \frac{1}{\sqrt{2}} \approx 0.707$ and $\tilde{e}^* = \frac{3}{4} = 0.75$.
6
+
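The efforts in Example 3 follow from the first-order condition $e^2 = V \int x f_i(x) f_k(x) dx$ (the multiplicative technology gives $a_e(x) = x/e$, and $c'(e) = e$); since both densities are uniform, the integral reduces to the mean of $\Theta_k$. A quick check, not part of the paper:

```python
import math

def effort(a, b, V=1.0):
    """Symmetric-equilibrium effort when Theta_i ~ U[0,1], Theta_k ~ U[a, b]:
    with g(θ, e) = θ·e and c(e) = e²/2, the FOC reads e² = V ∫ x f_i f_k dx."""
    # f_i = 1 on [0,1]; f_k = 1/(b-a) on [a,b] ⊂ [0,1], so the integral is
    # ∫_a^b x/(b-a) dx = (a+b)/2, i.e. the mean of Theta_k.
    return math.sqrt(V * (a + b) / 2)

assert abs(effort(1 / 4, 3 / 4) - 1 / math.sqrt(2)) < 1e-12    # e* ≈ 0.707
assert abs(effort(5 / 16, 13 / 16) - 0.75) < 1e-12             # ẽ* = 0.75
```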
7
+ In Example 3, the multiplicative production technology implies that $a_e(x) = x/e$ which is a strictly increasing function of $x$. This further implies that $r_{e,i}(x)$ is strictly increasing on $[0,1]$ because $f_i$ is uniform. The shift in the skill distribution of player $k$ from $F_k$ to $\tilde{F}_k$ implies that the expected skill of player $k$ increases. However, the height of the density of player $k$'s skill distribution does not change ($f_k(x) = 2, x \in [\frac{1}{4}, \frac{3}{4}]$ and $\tilde{f}_k(x) = 2, x \in [\frac{5}{16}, \frac{13}{16}]$). Thus, since $f_i(x) = 1$, we have that $f_k(x)f_i(x) = \tilde{f}_k(x)f_i(x) = 2$ at all points where these collision densities are non-zero. However, due to the distributional shift, the subset of $\mathbb{R}$ where the two uniform distributions overlap shifts to the right. Therefore, the two