\section{Introduction} \label{SEC:INTRO} There are many problems where the purpose is to optimize several objectives $f_1(\mathbf{x}), \ldots, f_K(\mathbf{x})$ while fulfilling certain constraints $c_1(\mathbf{x}), \ldots, c_C(\mathbf{x})$, where $K$ and $C$ are the number of objectives and constraints, respectively. Also, normally, the input space $\mathcal{X}$ is bounded, \emph{i.e.}, we optimize in $\mathcal{X} \subset \mathbb{R}^d$, where $d$ is the dimensionality of $\mathcal{X}$. For example, one might want to maximize the speed of a robot while simultaneously minimizing its energy consumption \cite{ariizumi2014expensive}. Moreover, one would like to avoid breaking any of its joints. To achieve this, one could change the dimensions of the robot's gears and the materials of its manufacturing process. Another example would be to minimize the classification error of a deep neural network while at the same time minimizing the time needed to predict and not exceeding a certain amount of memory. In this second example, one could modify the learning rate of the network, the number of hidden layers and the number of neurons in each hidden layer. In problems where several objectives are optimized, most of the time there is no single optimal point, but a set of optimal points: the Pareto set $\mathcal{X}^\star$ \cite{collette2004multiobjective}. The objective values associated with the points in $\mathcal{X}^\star$ constitute the Pareto front $\mathcal{Y}^\star$. All the points in the Pareto set are optimal because they are not \emph{Pareto dominated} by any other point in $\mathcal{X}$. In a minimization context, a point $\mathbf{x}_1$ \emph{Pareto dominates} $\mathbf{x}_2$ if $f_k(\mathbf{x}_1) \leq f_k(\mathbf{x}_2)$, $\forall k \in \{1, \ldots, K\}$, with at least one strict inequality. This means that it is not possible to improve the value of one objective without deteriorating the values obtained in the others.
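To make the dominance rule concrete, it can be sketched in a few lines of Python (an illustrative sketch for a minimization context; the function name is ours, not part of any library):

```python
def pareto_dominates(f1, f2):
    """True if objective vector f1 Pareto dominates f2 in a minimization
    context: f1 is no worse in every objective and strictly better in at
    least one of them."""
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better
```

The Pareto set then consists of the points whose objective vectors are not dominated by that of any other point, and those objective vectors form the Pareto front.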
Moreover, the points of the Pareto set must be valid, \emph{i.e.}, they must satisfy all the constraints $c_j(\mathbf{x}) \geq 0$, $\forall j \in \{1, \ldots, C\}$. On the other hand, as the potential size of $\mathcal{X}^\star$ is generally infinite (and therefore also that of $\mathcal{Y}^\star$), it is necessary to approximate the Pareto set. The optimization problems described above have three main characteristics. First, there is no analytical form for the objectives or the constraints, so they can be considered black-boxes. Second, the evaluations may be contaminated by noise. Third, new evaluations are quite expensive in some way, \emph{e.g.}, economically or temporally. In the example of the robot, we do not know beforehand what its speed and power consumption will be given some gears. Furthermore, building and testing the robot are likely to introduce some noise into the results, as many external factors can influence these processes. Moreover, these processes could be expensive both economically and temporally, since we have to build the robot and prepare its tests. To solve this type of problem while minimizing the number of evaluations performed, we can use a set of techniques called Bayesian optimization (BO) \cite{brochu2009tutorial}. BO has two key pieces. First, a probabilistic model that estimates the potential values of the black-boxes in unexplored regions of $\mathcal{X}$. The probabilistic model usually employed is a Gaussian process (GP). GPs are completely defined by a mean function $m(\cdot)$ and a covariance function $v(\cdot,\cdot)$ \cite{rasmussen2006gaussian}. Second, an acquisition function that measures the expected utility of evaluating at each point of $\mathcal{X}$ using the information provided by the probabilistic model. At each iteration, the BO algorithm evaluates the selected point, updates the probabilistic model, and computes and maximizes the acquisition function to find the point to evaluate in the next iteration.
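The BO loop just described can be summarized schematically in Python (a minimal sketch; every argument name is a hypothetical placeholder, not the interface of any concrete BO package):

```python
def bayesian_optimization(evaluate, fit_models, acquisition, candidates, n_iter, x0):
    """Schematic BO loop: evaluate a point, refit the probabilistic
    models, then maximize the acquisition over candidate inputs to
    select the next point to evaluate."""
    data = [(x0, evaluate(x0))]
    for _ in range(n_iter):
        models = fit_models(data)                               # update the model
        x_next = max(candidates, key=lambda x: acquisition(x, models))
        data.append((x_next, evaluate(x_next)))                 # expensive evaluation
    return data
```

The acquisition is cheap to maximize, so the only expensive operations in the loop are the calls to `evaluate`, one per iteration.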
This process is repeated for a fixed number of iterations. Once the BO algorithm has finished, the means of the probabilistic models are optimized to estimate the solution of the problem. The acquisition function is cheap to evaluate, unlike the black-boxes. Therefore, BO methods use the probabilistic models to guide the search and save expensive evaluations \cite{shahriari2015taking}. We have extended the acquisition function \emph{max-value entropy search} (MES) \cite{wang2017max}, which is based on the reduction of the entropy of the solution of the optimization problem in function space, to work with several objectives and constraints simultaneously. In the literature, there have already been some attempts to perform such an extension \cite{belakaria2020max}. Nevertheless, the approximations of the acquisition function, which is intractable, are very crude. In particular, the method proposed in \cite{belakaria2020max} simply tries to maximize each objective and constraint independently. Our approximation of the exact acquisition is more accurate, as shown by our experiments, which leads to better optimization results. We call our method \emph{improved max-value entropy search for multi-objective optimization with constraints} (MESMOC+). Like MES, MESMOC+ chooses as the next point to evaluate the one at which the entropy of the Pareto front $\mathcal{Y}^\star$ is expected to be reduced the most. A reduction in the entropy of $\mathcal{Y}^\star$ means that more information about the solution of the problem is available \cite{villemonteix2009informational,hennig2012entropy}. Several experiments involving synthetic and real optimization problems show that MESMOC+ outperforms the method proposed in \cite{belakaria2020max}. Moreover, it obtains similar and sometimes even better results than those obtained by the best current acquisition functions for multi-objective optimization with several constraints from the literature \cite{garrido2019predictive}.
Nevertheless, its computational cost per iteration is significantly smaller. MESMOC+ is expressed as a sum of acquisitions, one per black-box. Therefore, it can be used in a decoupled evaluation setting \cite{hernandez2015predictive}. More precisely, the evaluation of the black-boxes often involves performing different experiments or simulations. In the example of the robot, we may perform a simulation to know if any joint breaks, whereas to measure the speed or energy consumption it may be necessary to manufacture and test the robot. Similarly, in the example of the neural network, knowing its classification error requires training and validation. By contrast, to determine its prediction time, or whether it needs more memory than available, it is only necessary to build it, using, \emph{e.g.}, random weights. In a decoupled setting, MESMOC+ chooses not only the next input location to evaluate, but also which black-box to evaluate. Our experiments compare the coupled and decoupled variants of MESMOC+, showing that a decoupled evaluation setting sometimes gives better results than a coupled one in which all black-boxes are evaluated at the same location. \vspace{-.25cm} \section{Improved MES for Several Objectives and Constraints} \label{SEC:MESMOC+} In this section, we give the details of the proposed acquisition function \emph{improved max-value entropy search for multi-objective optimization with constraints} (MESMOC+). In BO, the maximum of the acquisition function indicates the next point at which to evaluate the black-boxes. For this, the information provided by the probabilistic models is used, and each time a new point is evaluated, the probabilistic models are updated. Usually, Gaussian processes (GPs) are the probabilistic models used \cite{shahriari2015taking}. Here we assume that the black-boxes are sampled a priori from GPs and contaminated with i.i.d. Gaussian noise with zero mean \cite{rasmussen2006gaussian}.
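The modeling assumption just stated, black-boxes drawn a priori from a GP and observed with additive Gaussian noise, can be illustrated with a short Python sketch (the squared-exponential kernel and all hyper-parameter values here are illustrative choices of ours; the experiments in this paper use a Mat\'ern kernel instead):

```python
import numpy as np

def sample_black_box(xs, lengthscale=0.5, noise_std=0.1, seed=0):
    """Draw one noisy function sample at inputs xs from a zero-mean GP
    prior with a squared-exponential kernel, then add i.i.d. Gaussian
    noise with zero mean (illustrative hyper-parameters)."""
    rng = np.random.default_rng(seed)
    d = xs[:, None] - xs[None, :]
    K = np.exp(-0.5 * (d / lengthscale) ** 2) + 1e-10 * np.eye(len(xs))  # jitter
    f = rng.multivariate_normal(np.zeros(len(xs)), K)   # latent function values
    return f + noise_std * rng.standard_normal(len(xs))  # noisy observations
```

In the coupled setting described next, one such independent sample path per objective and per constraint plays the role of the $K+C$ black-boxes.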
For simplicity, the development of MESMOC+ is carried out considering a coupled evaluation setting, in which all black-boxes are evaluated at the same point. However, later on we explain how to use MESMOC+ in a decoupled setting. Let $\mathcal{D} = \{(\mathbf{x}_n, \mathbf{y}_n)\}_{n=1}^N$ be the dataset with the evaluations performed up to iteration $N$, where $\mathbf{x}_n$ is the input evaluated in the $n$-th iteration and $\mathbf{y}_n$ is a vector with the values obtained when evaluating the $K$+$C$ black-boxes at $\mathbf{x}_n$, \emph{i.e.}, $\mathbf{y}_n = (f_1(\mathbf{x}_n), \ldots, f_K(\mathbf{x}_n), c_1(\mathbf{x}_n), \ldots, c_C(\mathbf{x}_n))$. Since MESMOC+ seeks to evaluate the point $\mathbf{x}_{N+1}$ that most reduces the entropy of the solution of the problem in function space, the MESMOC+ acquisition function is: \begin{align} \alpha(\mathbf{x}) &= H \left( \mathcal{Y}^\star | \mathcal{D} \right) - \mathbb{E}_{\mathbf{y}} \left[ H \left( \mathcal{Y}^\star| \mathcal{D} \cup \{(\mathbf{x}, \mathbf{y})\} \right) \right]\,, \label{EQ:MESMOC+INI1} \end{align} where $H \left( \mathcal{Y}^\star| \mathcal{D} \right)$ is the entropy of the Pareto front $\mathcal{Y}^\star$ given by the probabilistic models, adjusted using the current dataset $\mathcal{D}$; the expectation is calculated over the potential values for $\mathbf{y}$ at $\mathbf{x}$, according to the GPs; and $H( \mathcal{Y}^\star| \mathcal{D} \cup \{(\mathbf{x}, \mathbf{y})\} )$ is the entropy of $\mathcal{Y}^\star$ after including the new data point $(\mathbf{x}, \mathbf{y})$ in the dataset. Critically, evaluating the entropy of $\mathcal{Y}^\star$ is very challenging. In order to avoid this problem, we can rewrite \eqref{EQ:MESMOC+INI1} in an equivalent form, as suggested in \cite{wang2017max}, by noting that \eqref{EQ:MESMOC+INI1} is exactly the mutual information between $\mathcal{Y}^\star$ and $\mathbf{y}$ \cite{hernandez2014predictive,hernandez2016predictive}.
Therefore, since $I(\mathcal{Y}^\star; \mathbf{y}) = I(\mathbf{y}; \mathcal{Y}^\star)$, we can swap the roles of $\mathcal{Y}^\star$ and $\mathbf{y}$ in (\ref{EQ:MESMOC+INI1}), and the MESMOC+ acquisition function becomes: \begin{align} \alpha(\mathbf{x}) &= H \left( \mathbf{y}| \mathcal{D}, \mathbf{x} \right) - \mathbb{E}_{\mathcal{Y}^\star} \left[ H \left( \mathbf{y}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star \right) \right]\,, \label{EQ:MESMOC+INI2} \end{align} where $H \left( \mathbf{y}| \mathcal{D}, \mathbf{x} \right)$ is the entropy of $p\left( \mathbf{y}| \mathcal{D}, \mathbf{x} \right)$, \emph{i.e.}, the entropy of the predictive distribution of the GPs at $\mathbf{x}$; now the expectation is with respect to potential values of $\mathcal{Y}^\star$; and $H \left( \mathbf{y}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star \right)$ is the entropy of the predictive distribution conditioned on the Pareto front $\mathcal{Y}^\star$ being the solution of the problem. The expression given in Eq. \eqref{EQ:MESMOC+INI2} is the acquisition function targeted by MESMOC+. Thus, in each iteration, the next query is chosen as the point that maximizes \eqref{EQ:MESMOC+INI2}, \emph{i.e.}, $\smash{\mathbf{x}_{N+1} = \arg \max_{\mathbf{x} \in \mathcal{X}} \alpha(\mathbf{x})}$. It is easier to work with this expression than with \eqref{EQ:MESMOC+INI1} because here we do not have to evaluate the entropy of $\mathcal{Y}^\star$, a set of potentially infinite size. The first term of the r.h.s.\ of \eqref{EQ:MESMOC+INI2} is simply the entropy of the current predictive distribution, and since we assume that there is no correlation between the black-boxes, its expression is the sum of the entropy of $K+C$ Gaussian distributions.
Namely \begin{align}\label{EQ:FISRTTERMMESMOC+} H \left( \mathbf{y}| \mathcal{D}, \mathbf{x} \right) &= \sum_{k=1}^K \frac{\log(2 \pi e v_k^f)}{2} + \sum_{j=1}^C \frac{\log(2 \pi e v_j^c)}{2}\,, \end{align} where $v_k^f = v_k^f(\mathbf{x})$ and $v_j^c = v_j^c(\mathbf{x})$ are the predictive variances of the $k$-th objective and the $j$-th constraint, respectively. Nevertheless, the evaluation of the second term in the r.h.s.\ of \eqref{EQ:MESMOC+INI2} is intractable. The expectation can be approximated by generating Monte Carlo samples of $\mathcal{Y}^\star$, calculating the entropy of $p( \mathbf{y}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star)$ for each sample and averaging the results. To generate samples of $\mathcal{Y}^\star$, we first use a random feature approximation of the GPs (see \cite{rahimi2007random} for further details) to generate samples of the objectives and constraints. These samples are then optimized using a grid of points, as in \cite{garrido2019predictive}, to obtain samples of $\mathcal{Y}^\star$. This step is cheap because, unlike the actual black-boxes, the GP samples can be evaluated at small cost. Other GP sampling methods may be used instead, \emph{e.g.}, \cite{wilson2020efficiently}. Finally, the evaluation of the entropy of $p(\mathbf{y}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star)$ has to be approximated. We explain the approximation employed in the next section. \subsection{Approximating the Conditional Predictive Distribution} \label{SB:CPDADF} As in the original formulation of \emph{max-value entropy search} (MES) \cite{wang2017max}, we consider that the evaluations are noiseless. Namely, we approximate $p(\mathbf{f}, \mathbf{c}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star)$ instead of $p(\mathbf{y}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star)$, where $\mathbf{f}=\{f_1(\mathbf{x}), \ldots, f_K(\mathbf{x})\}$ and $\mathbf{c}=\{c_1(\mathbf{x}), \ldots, c_C(\mathbf{x})\}$ are the predicted values for the black-boxes.
At the end of the next section, we modify the developed acquisition function to take additive noise into account. The expression of $p(\mathbf{f}, \mathbf{c}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star)$ is obtained using Bayes' rule. Namely, \begin{align} \label{EQ:CPDM1} p(\mathbf{f}, \mathbf{c}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star) & = Z^{-1} p(\mathbf{f}, \mathbf{c}| \mathcal{D}, \mathbf{x}) p(\mathcal{Y}^\star| \mathbf{f}, \mathbf{c})\,, \end{align} where $Z$ is a normalization constant, $p(\mathbf{f}, \mathbf{c}| \mathcal{D}, \mathbf{x})$ is the probability of the objectives and constraints given $\mathcal{D}$ and $\mathbf{x}$, and $p(\mathcal{Y}^\star| \mathbf{f}, \mathbf{c})$ is the probability that $\mathcal{Y}^\star$ is a valid Pareto front given $\mathbf{f}$ and $\mathbf{c}$. The factor $p(\mathcal{Y}^\star| \mathbf{f}, \mathbf{c})$ in \eqref{EQ:CPDM1} removes all configurations of the objective and constraint values, $(\mathbf{f}, \mathbf{c})$, that are incompatible with $\mathcal{Y}^\star$ being the Pareto front of the problem. Therefore, $p(\mathcal{Y}^\star| \mathbf{f}, \mathbf{c})$ must be $0$ when $\mathbf{c}$ is valid (\emph{i.e.}, $\mathbf{c}$ satisfies $c_j (\mathbf{x}) \geq 0, \forall j \in \{1, \ldots, C\}$), but $\mathbf{f}$ \emph{Pareto dominates} some point in the Pareto front, \emph{i.e.}, some $\mathbf{f}^\star \in \mathcal{Y}^\star$. Similarly, $p(\mathcal{Y}^\star| \mathbf{f}, \mathbf{c})$ will be $1$ if $\mathbf{f}$ does not dominate any point $\mathbf{f}^\star$ in the Pareto front $\mathcal{Y}^\star$, or if $\mathbf{c}$ is invalid (\emph{i.e.}, at least one constraint is negative at $\mathbf{x}$).
This can be expressed, informally, as follows: \begin{align}\label{EQ:PYSTAR} p(\mathcal{Y}^\star| \mathbf{f}, \mathbf{c}) & \propto \prod_{\mathbf{f}^\star \in \mathcal{Y}^\star} \big( 1 - \prod_{j=1}^C \Theta (c_j) \prod_{k=1}^K \Theta \left(f_k^\star - f_k \right) \big) \propto \prod_{\mathbf{f}^\star \in \mathcal{Y}^\star} \Omega (\mathbf{f}^\star, \mathbf{f}, \mathbf{c}) \,, \end{align} where $\Theta(\cdot)$ is the Heaviside step function, $f_k = f_k (\mathbf{x})$, $c_j = c_j (\mathbf{x})$, $f_k^\star$ is the $k$-th value of $\mathbf{f}^\star$ and $\smash{\Omega (\mathbf{f}^\star, \mathbf{f}, \mathbf{c}) = 1 - \prod_{j=1}^C \Theta (c_j (\mathbf{x})) \prod_{k=1}^K \Theta \left(f_k^\star - f_k (\mathbf{x}) \right)}$. Note that the value of \eqref{EQ:PYSTAR} will only be $1$ if $\Omega (\mathbf{f}^\star, \mathbf{f}, \mathbf{c})$ is $1$ for all the $\mathbf{f}^\star$ in $\mathcal{Y}^\star$. For $\Omega (\mathbf{f}^\star, \mathbf{f}, \mathbf{c})$ to be $1$, $\smash{ \prod_{j=1}^C \Theta (c_j (\mathbf{x}))}$ or $\smash{\prod_{k=1}^K \Theta \left(f_k^\star - f_k (\mathbf{x}) \right)}$ has to be $0$, which happens if at least one value of $\mathbf{c}$ is strictly negative, or if at least one value of $\mathbf{f}^\star$ is strictly smaller than the corresponding value of $\mathbf{f}$, \emph{i.e.}, if $\mathbf{f}$ does not dominate $\mathbf{f}^\star$. These are precisely the conditions described above Eq. (\ref{EQ:PYSTAR}). Computing the normalization constant and the entropy of \eqref{EQ:CPDM1} is intractable. Thus, we must approximate this distribution. Critically, the approximation should be cheap. For this, we use Assumed Density Filtering (ADF) \cite{boyen1998tractable,minka2001expectation}. ADF simply approximates each non-Gaussian factor in \eqref{EQ:CPDM1} using a Gaussian distribution. Since the predictive distribution of a GP is Gaussian, the only non-Gaussian factors are the $\smash{\Omega (\mathbf{f}^\star, \mathbf{f}, \mathbf{c})}$ factors in \eqref{EQ:PYSTAR}.
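The compatibility factor in Eq. \eqref{EQ:PYSTAR} can be checked numerically with a short Python sketch (the function names are ours; clarity is favored over efficiency):

```python
def theta(x):
    """Heaviside step function, with theta(0) = 1."""
    return 1.0 if x >= 0 else 0.0

def omega(f_star, f, c):
    """Omega(f*, f, c) = 1 - prod_j theta(c_j) * prod_k theta(f*_k - f_k).
    It is 0 only when c is feasible and f (weakly) dominates f*."""
    feasible = 1.0
    for c_j in c:
        feasible *= theta(c_j)
    f_dominates = 1.0
    for fs_k, f_k in zip(f_star, f):
        f_dominates *= theta(fs_k - f_k)
    return 1.0 - feasible * f_dominates

def p_front_given_fc(front, f, c):
    """Unnormalized p(Y*|f,c): 1 iff omega equals 1 for every f* in Y*."""
    p = 1.0
    for f_star in front:
        p *= omega(f_star, f, c)
    return p
```

For instance, with the front $\{(1,0),(0,1)\}$ in a minimization context, a feasible $(0.5,-0.1)$ dominates $(1,0)$ and is therefore incompatible, while the same vector with an infeasible constraint remains compatible.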
We assume independence among the objectives and constraints, which results in a factorizing Gaussian approximation of each factor. The specific updates for the parameters of each of these Gaussians are described in the supplementary material. Because the Gaussian distribution is closed under the product operation, the approximation of \eqref{EQ:CPDM1} is a factorizing Gaussian distribution. Let $\mathbf{\tilde{m}}^{f}$ and $\mathbf{\tilde{m}}^{c}$ be the means, and $\mathbf{\tilde{v}}^{f}$ and $\mathbf{\tilde{v}}^{c}$ the variances, of that approximation. \subsection{The MESMOC+ Acquisition Function} \label{SB:MESMOC+FINALEXPRESION} After the execution of ADF has finished, the variances of the predictive distribution for the objectives and the constraints at the candidate point $\mathbf{x}$, conditioned on the Pareto front $\mathcal{Y}^\star$, are available. Therefore, to obtain the approximate expression of \eqref{EQ:MESMOC+INI2} one simply has to combine \eqref{EQ:FISRTTERMMESMOC+} with the result of the calculation of the entropy of $p(\mathbf{f}, \mathbf{c}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star)$. Because this distribution is approximated with a Gaussian distribution using ADF, the approximate entropy has a form similar to that of \eqref{EQ:FISRTTERMMESMOC+}. The consequence is that the acquisition function can be approximated simply as the difference between the entropy of two factorizing multi-variate Gaussians, one for the objectives and one for the constraints.
Namely, \begin{align} \label{EQ:MESMOC+NONOISY} \alpha(\mathbf{x}) \approx &\sum_{k=1}^K\log(v_k^f) + \sum_{j=1}^C\log(v_j^c) - \frac{1}{M} \sum_{m=1}^M \big[ \sum_{k=1}^K\log(\tilde{v}_k^f) + \sum_{j=1}^C\log(\tilde{v}_j^c) \big]\,, \end{align} where $M$ is the number of Monte Carlo samples of $\mathcal{Y}^\star$; $v_k^f = v_k^f(\mathbf{x})$, $v_j^c = v_j^c(\mathbf{x})$, $\tilde{v}_k^f = \tilde{v}_k^f(\mathbf{x}|\mathcal{Y}^\star_{(m)})$ and $\tilde{v}_j^c = \tilde{v}_j^c(\mathbf{x}|\mathcal{Y}^\star_{(m)})$ are the approximate variances of the conditional distribution; and $\{\mathcal{Y}^\star_{(m)}\}^M_{m=1}$ is the set of Monte Carlo samples of $\mathcal{Y}^\star$. In order to take into account the noise of each black-box, one simply needs to add its variance to the variance of the corresponding objectives and constraints. Unfortunately, the behavior of MESMOC+ when using \eqref{EQ:MESMOC+NONOISY} to approximate the acquisition function is not the expected one. More precisely, \eqref{EQ:MESMOC+NONOISY} is highly influenced by small decreases in the variance of the conditional predictive distribution. This is particularly the case for points that have a very small associated initial variance, \emph{e.g.}, $10^{-5}$. The logarithm tends to amplify these small differences (\emph{e.g.}, a variance reduction from $10^{-5}$ to $10^{-6}$ results in a log difference of $\log 10 \approx 2.30$), and the consequence is a highly exploitative behavior of the BO method, which tends to perform evaluations that are very close to points that have already been evaluated. To avoid this, we modified MESMOC+'s acquisition function to take into account the absolute reduction in the variance instead.
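The amplification effect of the logarithm can be verified numerically (a quick sanity check using natural logarithms):

```python
import math

# Variance reduction at an already well-explored point:
log_gap = math.log(1e-5) - math.log(1e-6)  # = log(10), large on the log scale
abs_gap = 1e-5 - 1e-6                      # = 9e-6, negligible in absolute terms

# A tenfold reduction at an unexplored point (1.0 -> 0.1) contributes the
# same amount on the log scale, despite being a far larger absolute
# reduction in uncertainty:
same_log_gap = math.log(1.0) - math.log(0.1)
```

Measuring the absolute variance reduction instead removes this bias towards already well-explored points.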
The final expression of the \textrm{MESMOC+} acquisition is: \begin{align}\label{EQ:MESMOC+FINAL} \alpha(\mathbf{x}) \approx &\sum_{k=1}^K \left( v_k^f + (\sigma_k^f)^2 \right) + \sum_{j=1}^C \left( v_j^c + (\sigma_j^c)^2 \right) \notag \\ &- \frac{1}{M} \sum_{m=1}^M \big[ \sum_{k=1}^K \left( \tilde{v}_k^f + (\sigma_k^f)^2 \right) + \sum_{j=1}^C \left(\tilde{v}_j^c + (\sigma_j^c)^2 \right) \big]\,, \end{align} where $(\sigma_k^f)^2$ and $(\sigma_j^c)^2$ are the noise variances of each objective and constraint. Note that \eqref{EQ:MESMOC+FINAL} is a sum of one acquisition per black-box. Namely, $\alpha(\mathbf{x})= \sum_{k=1}^K \alpha_k^f(\mathbf{x}) + \sum_{j=1}^C \alpha_j^c(\mathbf{x})$. Therefore, \eqref{EQ:MESMOC+FINAL} can be readily used in a decoupled evaluation setting. In this case, when all black-boxes are competing to be evaluated, each individual acquisition function is maximized separately. The black-box with the maximum associated acquisition value is chosen for evaluation. The cost of evaluating \eqref{EQ:MESMOC+FINAL} is in $\mathcal{O}(\sum_{m=1}^M (K+C)|\mathcal{Y}_{(m)}^\star|)$, where $M$ is the number of Monte Carlo samples, and $K$ and $C$ are the number of objectives and constraints, respectively. The part of the cost corresponding to $(K+C)|\mathcal{Y}_{(m)}^\star|$ comes from running the ADF algorithm to approximate the variances of the predictive distribution conditioned on $\mathcal{Y}^\star$. This approximation is run for each candidate point $\mathbf{x}$ at which the acquisition needs to be evaluated. For each sample of the objectives and constraints, $\mathcal{Y}^\star$ is approximated using $50$ points. The acquisition function is optimized using a quasi-Newton method with the gradient approximated by finite differences. A grid is used to find a good starting value. \section{Related Work} \label{SEC:RELATEDWORK} There are other acquisition functions that can deal with multiple objectives and constraints.
They are described in this section and compared to MESMOC+. Bayesian multi-objective optimization (BMOO) extends the Pareto dominance rule to introduce a preference for performing evaluations at points that are more likely to be feasible \cite{feliot2017bayesian}. This extended rule comes from the fact that in constrained problems there may be no feasible point observed. The extended rule simply applies a transformation to the two points that are compared to see if one dominates the other. This transformation function is: \begin{align}\label{EQ:BMOOPARETORULE} \Psi(\mathbf{y}^f, \mathbf{y}^c) = \left\{ \begin{matrix} (\mathbf{y}^f, \mathbf{0}) \qquad &\text{if} \ \mathbf{y}^c \geq \mathbf{0} \\ (+\infty, \min(\mathbf{y}^c, \mathbf{0})) \ &\text{otherwise} \end{matrix} \right.\,, \end{align} where $\mathbf{y}^f$ and $\mathbf{y}^c$ are the vectors of observations of the objectives and constraints, respectively. BMOO uses the expected improvement acquisition function, where improvement is measured in terms of hyper-volume. The hyper-volume is simply the volume of points in functional space above the best observed points under the extended Pareto dominance rule. It is maximized by the actual solution of the problem. The acquisition function of BMOO measures the expected hyper-volume improvement in the extended space: \begin{equation}\label{EQ:BMOO} \alpha(\mathbf{x}) = \mathbb{E}_{\mathbf{y}^f, \mathbf{y}^c} \left [ \int_{\mathcal{G}_N} \mathbb{I}(\Psi(\mathbf{y}^f, \mathbf{y}^c) \prec \Psi(\mathbf{y}) ) d\mathbf{y} \right]\,, \end{equation} where $\mathbf{a} \prec \mathbf{b}$ means that $\mathbf{a}$ is Pareto dominated by $\mathbf{b}$, $\mathbb{I}(\cdot)$ is the indicator function, $\mathcal{G}_N$ is the set of points not dominated until iteration $N$, and the expectation is w.r.t.\ the predictive distribution of the GPs.
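The transformation $\Psi$ of Eq. \eqref{EQ:BMOOPARETORULE} can be read as the following Python sketch (our own reading of the rule, not BMOO's actual implementation):

```python
import math

def psi(y_f, y_c):
    """BMOO's extended mapping: a feasible point keeps its objectives and
    gets zero constraint coordinates; an infeasible point gets +inf
    objectives and only the negative parts of its constraints."""
    if all(c >= 0 for c in y_c):
        return tuple(y_f) + tuple(0.0 for _ in y_c)
    return tuple(math.inf for _ in y_f) + tuple(min(c, 0.0) for c in y_c)
```

Under this mapping, in a minimization context every feasible point extended-dominates every infeasible one, and infeasible points are compared to each other only in terms of their constraint violations.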
Since \eqref{EQ:BMOO} cannot be calculated analytically, in \cite{feliot2017bayesian} the authors decided to swap the expectation and the integral, and to approximate the integral using Monte Carlo samples from a uniform distribution in $\mathcal{G}_N$. However, generating these samples is very expensive. A Metropolis-Hastings algorithm is suggested for this. BMOO was initially described for noiseless scenarios, but the method can also be applied when the black-boxes are contaminated with noise \cite{garrido2019predictive}. Finally, BMOO is often outperformed by another acquisition function known as PESMOC, and it does not allow for decoupled evaluations \cite{garrido2019predictive}. PESMOC is another acquisition function that focuses on the reduction of the entropy of the solution of the optimization problem \cite{garrido2019predictive}. However, it targets the entropy of the Pareto set $\mathcal{X}^\star$ instead of the entropy of the Pareto front $\mathcal{Y}^\star$. The acquisition function is hence the expected reduction in the entropy of $\mathcal{X}^\star$. As in MESMOC+, the entropy of $\mathcal{X}^\star$ is intractable and requires complicated approximations. PESMOC rewrites the expression of the acquisition function by noting that the expected reduction in the entropy of $\mathcal{X}^\star$ is the mutual information between $\mathcal{X}^\star$ and $\mathbf{y}$. Because the mutual information is symmetric, it is equivalent to the mutual information between $\mathbf{y}$ and $\mathcal{X}^\star$. After this rewriting, the acquisition function of PESMOC is: \begin{equation}\label{EQ:PESMOC} \alpha(\mathbf{x}) = H \left( \mathbf{y}| \mathcal{D}, \mathbf{x} \right) - \mathbb{E}_{\mathcal{X}^\star} \left[ H \left( \mathbf{y}| \mathcal{D}, \mathbf{x}, \mathcal{X}^\star \right) \right]\,, \end{equation} where the first term of the r.h.s.\ is the same as in MESMOC+, namely, the entropy of the predictive distribution. The expectation is w.r.t.
the Pareto set $\mathcal{X}^\star$ instead of $\mathcal{Y}^\star$. Finally, the second term of the r.h.s.\ is the entropy of the predictive distribution conditioned on $\mathcal{X}^\star$ being the solution to the problem. As in MESMOC+, the second term of the r.h.s.\ of \eqref{EQ:PESMOC} is intractable and must be approximated. The expectation is also approximated by a Monte Carlo average, as in MESMOC+. The method for generating the samples of $\mathcal{X}^\star$ is equivalent to the one used in MESMOC+. The entropy of $p(\mathbf{y}|\mathcal{D}, \mathbf{x}, \mathcal{X}^\star)$ needs to be approximated. Expectation propagation is used for that purpose \cite{minka2001expectation}. However, and importantly, this step is more complicated than in MESMOC+, where the entropy of $p(\mathbf{y}|\mathcal{D}, \mathbf{x}, \mathcal{Y}^\star)$ has to be approximated instead. In particular, there are more non-Gaussian factors that need to be approximated in PESMOC, and the approximation is more complicated since some of the factors depend on two variables, which involves working with bi-variate Gaussians. By contrast, all the factors in MESMOC+ are univariate, which means that only one-dimensional Gaussians have to be used in practice. This results in the MESMOC+ acquisition being significantly less expensive to compute and easier to implement. Our experiments also show that MESMOC+ gives similar results to those of PESMOC. Max-value entropy search (MES) has also been used to address optimization problems that involve a single objective and no constraints \cite{wang2017max}, a single objective and several constraints \cite{perrone2019constrained}, and several objectives and no constraints \cite{belakaria2019max,suzuki2020multi}. Notwithstanding, none of these methods can address several objectives and constraints at the same time. Moreover, the extension to several constraints or several objectives is not trivial at all.
MESMOC is an acquisition function developed in an independent work \cite{belakaria2020max}, which also minimizes the entropy of $\mathcal{Y}^\star$, as MESMOC+ does. The expression considered by MESMOC for the acquisition function is also \eqref{EQ:MESMOC+INI2}. However, the proposed approximation for the entropy of $p(\mathbf{y}|\mathcal{D}, \mathbf{x}, \mathcal{Y}^\star)$ is different. Instead of minimizing the objectives, maximization is considered in \cite{belakaria2020max}. Ignore the constraints for now. Let the sampled Pareto front be $\mathcal{Y}^\star=\{\mathbf{z}_1,\ldots,\mathbf{z}_m\}$, with $m$ the size of $\mathcal{Y}^\star$. In \cite{belakaria2020max} it is argued that a sufficient condition for some point $\mathbf{y}$ to be compatible with $\mathcal{Y}^\star$ as the solution of the problem is that $y^j \leq \max \{z_1^j,\ldots,z_m^j\}$, $\forall j \in \{1,\ldots,K\}$. That is, the value of $\mathbf{y}$ for the $j$-th objective cannot be better than the maximum value for that objective, according to $\mathcal{Y}^\star$. However, this condition is not sufficient, because $\mathbf{y}$ can be optimal (\emph{i.e.}, incompatible with $\mathcal{Y}^\star$) even if none of its values are greater than the maximum value for the corresponding objective. E.g., let $K=2$ and $\mathcal{Y}^\star = \{(1,0), (0,1)\}$. Consider now the point $(0.7, 0.7)$. Both of its components are smaller than $1=\max\{z_1^j,\ldots,z_m^j\}$, $\forall j \in \{1,\ldots,K\}$, but this point is not dominated by any of the points in $\mathcal{Y}^\star$ and is hence optimal. Then, the constraints are incorporated in \cite{belakaria2020max}, in an ad-hoc way, simply by enforcing that $c_j(\mathbf{x}) \leq \text{max}\{\tilde{z}_1^j,\ldots,\tilde{z}_m^j\}$ for $j=1,\ldots,C$, where $\{\tilde{\mathbf{z}}_i\}_{i=1}^m$ are the constraint values associated to the points in $\mathcal{Y}^\star$.
In other words, the constraint values are required to be smaller than the maximum constraint values associated with the Pareto front $\mathcal{Y}^\star$, as was done with the objectives. Notwithstanding, in the code provided with \cite{belakaria2020max}, the implementation considers not only $\mathcal{Y}^\star$, but all the evaluations performed so far. The consequence is that the acquisition proposed in \cite{belakaria2020max} is simply the sum of the MES acquisition function for each objective and constraint \cite{wang2017max}. This makes sense, and is equivalent to maximizing all the objectives and constraints independently. Maximizing the objectives is expected to give good solutions. Maximizing the constraints is expected to provide feasible solutions. The optimization of the resulting acquisition function is, however, restricted in \cite{belakaria2020max} to those regions of the input space in which the GP means for the constraints are strictly positive. This becomes problematic in problems in which finding feasible points is difficult. In particular, if all the observations are infeasible, the GP means for the constraints will be negative in the whole input space (even though the associated GP variance can be high). In that case, the next point to evaluate is simply chosen at random. Our experiments show that the accuracy of MESMOC for approximating the acquisition in Eq. (\ref{EQ:MESMOC+INI2}) is worse than that of our method, MESMOC+. Furthermore, in several optimization problems MESMOC+ outperforms MESMOC. We believe this is related to the accuracy of the approximation and the difficulty MESMOC has in finding feasible solutions. \section{Experiments} \label{SEC:EXPERIMENTS} We compare MESMOC+ and its decoupled variant MESMOC+$_\text{dec}$ with the acquisition functions described in Section \ref{SEC:RELATEDWORK} (\emph{i.e.}, BMOO, PESMOC, and MESMOC) and with a random search (RANDOM).
BMOO and PESMOC are provided in the Bayesian optimization software Spearmint (\url{https://github.com/EduardoGarrido90/Spearmint}). We have also implemented MESMOC+ and MESMOC in that software, closely following the code provided in \cite{belakaria2020max}. See the supplementary material. We use a Mat\'ern52 kernel with ARD for all GPs, and to learn their hyper-parameters we use slice sampling with 10 samples, as done in \cite{snoek2012practical}. This is also the number of samples considered in MESMOC+, MESMOC and PESMOC for $\mathcal{Y}^\star$, $\mathcal{Y}^\star$ and $\mathcal{X}^\star$, respectively. To maximize the acquisition function we use L-BFGS with a grid of 1,000 points to choose the starting position. The gradients of the acquisition function are approximated by finite differences. All the experiments are repeated 100 times and we report average results. The recommendation of each method is obtained by optimizing the means of the GPs at each iteration. We follow the approach suggested in \cite{garrido2019predictive} to avoid recommending infeasible solutions. \subsection{Quality of the Approximation of the Acquisition Function} \label{SB:QUALITYAPPROXIMATION} We compare, in a simple problem, the acquisition functions of MESMOC+ and MESMOC with the exact acquisition function described in Eq. (\ref{EQ:MESMOC+INI2}). The problem considered has only two objectives and one constraint. In this setting, quadrature methods can be used to evaluate the entropy of $p(\mathbf{y}| \mathcal{D}, \mathbf{x}, \mathcal{Y}^\star)$, at a much higher computational cost. They are expected to provide an approximation that is almost equal to the exact acquisition. The left column of Fig. \ref{FIG:CMPACQ} shows the current observations and predictive distributions for the objectives and constraints. The right column shows the estimated acquisition function of MESMOC and MESMOC+.
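As a sanity check of the quadrature approach used for the exact acquisition, the differential entropy of a one-dimensional Gaussian computed by numerical quadrature matches the closed form $\tfrac{1}{2}\log(2\pi e \sigma^2)$. A minimal illustration (not the actual conditional distribution used in the paper):

```python
import numpy as np
from scipy.integrate import quad

# Differential entropy of N(0, sigma^2) via numerical quadrature,
# compared with the closed form 0.5 * log(2*pi*e*sigma^2).
sigma = 0.7

def neg_p_log_p(y):
    p = np.exp(-0.5 * (y / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -p * np.log(p)

# Integrate over +-10 standard deviations, which captures essentially all mass
h_quad, _ = quad(neg_p_log_p, -10 * sigma, 10 * sigma)
h_exact = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
assert abs(h_quad - h_exact) < 1e-6
```

For the truncated, non-Gaussian conditional distributions appearing in the exact acquisition, no closed form exists, which is why quadrature is only practical with very few objectives and constraints.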
In the case of MESMOC+, we show results for the proposed method and for the variant in which the log of the variance is considered (MESMOC+$_\text{log}$). See Eq. \eqref{EQ:MESMOC+NONOISY}. Lastly, we also show the results of the quadrature method (Exact). We note that the approximation of MESMOC+$_\text{log}$ seems to be the most accurate, closely followed by MESMOC+. By contrast, the approximation of MESMOC does not look similar to the exact acquisition. MESMOC avoids evaluations in the region where the GP mean of the constraint is negative. MESMOC's acquisition there corresponds to a constant value smaller than zero. A comparison of the quality of the approximation in a decoupled scenario can be found in the supplementary material. There, MESMOC+ also provides a more accurate approximation than MESMOC. \begin{figure*}[tbh]\label{FIG:CMPACQ} \centering \resizebox{\textwidth}{!}{\begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{./plot_gps_coupled.pdf} \includegraphics[width=0.5\textwidth]{./plot_acq_coupled_Y.pdf} \end{tabular}} \caption{(left) GP predictive distributions for the objectives and constraints. (right) The corresponding estimated acquisition function of each method, MESMOC+ and MESMOC, and the exact acquisition (Exact). Best seen in color. } \end{figure*} \subsection{Synthetic Experiments} \label{SB:SYTHETICEXP} Here, the objectives and the constraints are sampled from a GP. We consider two scenarios: one with noiseless observations and another where the observations are contaminated with Gaussian noise with variance $0.1$. The first experiment has 4 dimensions, 2 objectives and 2 constraints, and the second has 6 dimensions, 4 objectives and 2 constraints. The performance of each method is measured as the relative difference (in log-scale) between the hyper-volume of the recommendation made and the maximum hyper-volume, with respect to the number of evaluations made. The results obtained by each method are shown in Fig. \ref{FIG:EXPSYNTHE}.
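The performance metric just described can be sketched for two objectives as follows. This is a minimal sketch under assumptions: `hypervolume_2d`, the reference point, and the exact form of the log relative difference are illustrative, not the experiments' actual implementation.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hyper-volume dominated by a 2-objective Pareto front (minimization)
    with respect to a reference point that every front point dominates."""
    pts = sorted(front)                      # ascending in the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                       # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# Assumed metric form: log of the relative hyper-volume difference between
# the recommended front and the optimal one
ref = (2.0, 2.0)
hv_max = hypervolume_2d([(1.0, 0.0), (0.0, 1.0)], ref)   # optimal front
hv_rec = hypervolume_2d([(1.0, 0.5), (0.5, 1.0)], ref)   # a recommendation
metric = np.log((hv_max - hv_rec) / hv_max)
```

Lower (more negative) values of the metric indicate recommendations whose hyper-volume is closer to the maximum.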
We see that in the 4D experiment, in both scenarios, the best methods are MESMOC+, PESMOC and PESMOC$_\text{dec}$. MESMOC+$_\text{dec}$ also achieves good results when there is no noise. In these experiments MESMOC+ is superior to MESMOC, which performs poorly in the noisy settings. MESMOC$_\text{dec}$ also performs poorly in general. This is probably a consequence of the poor approximation of the acquisition function in MESMOC and MESMOC$_\text{dec}$. In the 6D experiments we observe similar results, but here MESMOC+$_\text{dec}$ obtains significantly worse results than MESMOC+. This could be related to the removal of the logarithm in the acquisition function of MESMOC+$_\text{dec}$. In any case, in these experiments we observe that in general MESMOC+ and PESMOC give similar results, while MESMOC seems to perform worse. \begin{figure*}[tbh]\label{FIG:EXPSYNTHE} \centering \resizebox{\textwidth}{!}{\begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{./plot_4d.pdf} \includegraphics[width=0.5\textwidth]{./plot_6d.pdf} \\ \includegraphics[width=0.5\textwidth]{./plot_4d_noisy.pdf} \includegraphics[width=0.5\textwidth]{./plot_6d_noisy.pdf} \end{tabular}} \caption{Avg. log hyper-volume relative difference between the recommendation of each method at each iteration and the maximum hyper-volume in a 4-dimensional problem (left-column) and in a 6-dimensional problem (right-column). We consider noiseless (top) and noisy observations (bottom). Best seen in color. } \end{figure*} Table \ref{TB:TIMES} shows the average execution time in seconds per iteration of MESMOC+, MESMOC and PESMOC and their decoupled variants in the 4D experiment. We observe that the times of MESMOC+ and MESMOC+$_\text{dec}$ are significantly lower than those of PESMOC and PESMOC$_\text{dec}$, respectively. This is because MESMOC+'s approximation is cheaper to compute.
In particular, it reduces the entropy of the solution of the problem in the function space and uses ADF, instead of EP as in PESMOC, to approximate the conditional predictive distribution. On the other hand, the total execution time of MESMOC is only slightly lower than that of MESMOC+, and although the runtime of MESMOC$_\text{dec}$ is half that of MESMOC+$_\text{dec}$, its performance is much worse. \begin{table}[ht] \caption{Avg. execution time per iteration (in sec.) in the 4D experiment. \strut} \label{TB:TIMES} \begin{center} \resizebox{\textwidth}{!}{\begin{tabular}{r@{$\pm$}l@{\hspace{3mm}}r@{$\pm$}l@{\hspace{3mm}}r@{$\pm$}l@{\hspace{3mm}}r@{$\pm$}l@{\hspace{3mm}}r@{$\pm$}l@{\hspace{3mm}}r@{$\pm$}l} \hline \multicolumn{2}{c}{\scriptsize \textbf{MESMOC+}} & \multicolumn{2}{c}{\scriptsize \textbf{MESMOC+$_{\textrm{dec}}$}} & \multicolumn{2}{c}{\scriptsize \textbf{MESMOC}} & \multicolumn{2}{c}{\scriptsize \textbf{MESMOC$_{\textrm{dec}}$}} & \multicolumn{2}{c}{\scriptsize \textbf{PESMOC}} & \multicolumn{2}{c}{\scriptsize \textbf{PESMOC$_{\textrm{dec}}$}} \\ \hline $12.48$ & $1.15$ & $25.73$ & $3.94$ & $10.34$ & $0.84$ & $12.09$ & $0.83$ & $29.71$ & $3.70$ & $89.33$ & $5.36$ \\ \hline \end{tabular}} \end{center} \end{table} \subsection{Finding an Optimal Ensemble} \label{SB:ENSEMBLEEXP} We tune the hyper-parameters of an ensemble of trees to classify the German dataset, from the UCI repository \cite{dua2017}. This dataset has 1,000 instances, 20 attributes and 2 classes. The hyper-parameters of the ensemble are: the number of trees, the number of attributes considered to split a node, the minimum number of samples required to split a node, the probability of switching the class of each instance \cite{martinez2005switching}, and the fraction of samples on which each tree is trained. We choose two objectives: to minimize the classification error, as estimated by 10-fold cross-validation, and to minimize the number of nodes of the trees in the ensemble.
We also choose a constraint: the ensemble has to speed-up its average classification time by at least 25\% when using dynamic pruning \cite{hernandez2008statistical}. Both the objectives and the constraint can be evaluated separately, since the total number of nodes is estimated by building the ensemble only once, without leaving any data aside for validation. By contrast, the CV approach used to estimate the ensemble error requires building several ensembles on subsets of the data. Similarly, evaluating the constraint involves building a lookup table. This table is expensive to build and is different for each ensemble size. The left column of Fig. \ref{FIG:EXPREAL} shows the average Pareto fronts obtained by each method after 100 and 200 evaluations. Since we are minimizing, the greater the area above the average Pareto front of a method, the larger the average hyper-volume of its solution, and hence the better the method. Fig. \ref{FIG:EXPREAL} shows that MESMOC+$_\text{dec}$ and PESMOC$_\text{dec}$ obtain the best results. However, MESMOC+$_\text{dec}$ obtains smaller ensembles with more error, whereas PESMOC$_\text{dec}$ obtains larger ensembles with less error. After 200 evaluations, the average Pareto front obtained by MESMOC+ has a similar quality to that of PESMOC, in terms of the hyper-volume. MESMOC+ finds better ensembles with an error above 25\%. By contrast, PESMOC finds better ensembles with a small error. We can also see that MESMOC and MESMOC$_\text{dec}$ perform quite poorly. We believe this is a consequence of the difficulty of finding feasible solutions, which means that these methods will significantly constrain the optimization of the acquisition function, as described in Section \ref{SEC:RELATEDWORK}. Table \ref{TB:ENSEMBLEHV} shows the average hyper-volume of the Pareto front found by each method.
We observe that the largest hyper-volume is obtained by MESMOC+$_\text{dec}$ and PESMOC$_\text{dec}$ after 100 and 200 evaluations, respectively, and their results are very similar. \begin{figure*}[tbh]\label{FIG:EXPREAL} \centering \resizebox{\textwidth}{!}{\begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{./plot_ensemble_100.pdf} \includegraphics[width=0.5\textwidth]{./plot_nnets_50.pdf} \\ \includegraphics[width=0.5\textwidth]{./plot_ensemble_200.pdf} \includegraphics[width=0.5\textwidth]{./plot_nnets_100.pdf} \end{tabular}} \caption{Avg. Pareto front of each method when finding an optimal ensemble (left-column) and when finding an optimal neural network (right-column). Best seen in color. } \end{figure*} \begin{table*}[t!] \caption{Average hyper-volume of each method after 100 and 200 evaluations in the problem of finding an optimal ensemble, and after 50 and 100 evaluations in the problem of finding an optimal neural network. The best result is bolded and the second best is underlined.
\strut} \label{TB:ENSEMBLEHV} \begin{center} \begin{tabular}{lr@{$\pm$}lr@{$\pm$}lr@{$\pm$}lr@{$\pm$}lr@{$\pm$}l} \hline \textbf{} & \multicolumn{4}{c}{\textbf{Ensemble}} & \multicolumn{4}{c}{\textbf{Neural Network}} \\ \cmidrule(l){2-5} \cmidrule(l){6-9} \textbf{Method} & \multicolumn{2}{c}{\textbf{100 Evals.}} & \multicolumn{2}{c}{\textbf{200 Evals.}} & \multicolumn{2}{c}{\textbf{50 Evals.}} & \multicolumn{2}{c}{\textbf{100 Evals.}} \\ \hline \text{MESMOC+} & $0.293$ & $0.001$ & $0.322$ & $0.001$ & $47.84$ & $0.119$ & $53.90$ & $0.043$ \\ \text{MESMOC+$_{\textrm{dec}}$} & $\boldsymbol{0.317}$ & $\boldsymbol{0.002}$ & $\underline{0.339}$ & $\underline{0.001}$ & $\underline{48.70}$ & $\underline{0.072}$ & $\underline{54.32}$ & $\underline{0.051}$ \\ \text{MESMOC} & $0.220$ & $0.002$ & $0.243$ & $0.002$ & $45.24$ & $0.361$ & $46.27$ & $0.461$ \\ \text{MESMOC$_{\textrm{dec}}$} & $0.215$ & $0.004$ & $0.234$ & $0.005$ & $45.70$ & $0.634$ & $49.65$ & $0.166$ \\ \text{PESMOC} & $0.310$ & $0.001$ & $0.327$ & $0.001$ & $48.58$ & $0.074$ & $53.97$ & $0.057$ \\ \text{PESMOC$_{\textrm{dec}}$} & $\underline{0.312}$ & $\underline{0.001}$ & $\boldsymbol{0.340}$ & $\boldsymbol{0.001}$ & $\boldsymbol{48.94}$ & $\boldsymbol{0.055}$ & $\boldsymbol{54.44}$ & $\boldsymbol{0.041}$ \\ \text{BMOO} & $0.294$ & $0.001$ & $0.310$ & $0.001$ & $47.46$ & $0.261$ & $53.67$ & $0.085$ \\ \text{RANDOM} & $0.264$ & $0.001$ & $0.280$ & $0.001$ & $46.23$ & $0.132$ & $51.99$ & $0.098$ \\ \hline \end{tabular} \end{center} \end{table*} Fig. \ref{FIG:REALEVALS} shows the number of evaluations of each black-box performed by MESMOC+$_\text{dec}$. It evaluates each black-box approximately the same number of times. Therefore, the advantages of the decoupled setting come in this case from the fact that it can choose different input locations at which to evaluate each black-box at each iteration.
By contrast, the coupled version of MESMOC+ always evaluates all the black-boxes, at each iteration, on the same candidate point, which is the one maximizing the acquisition. \begin{figure*}[tbh]\label{FIG:REALEVALS} \centering \resizebox{\textwidth}{!}{\begin{tabular}{cc} \includegraphics[width=0.5\textwidth]{./plot_num_evals_mesmocplus_dec_ensemble_100.pdf} \includegraphics[width=0.5\textwidth]{./plot_num_evals_mesmocplus_dec_nnets_100.pdf} \end{tabular}} \caption{Number of evaluations performed by MESMOC+$_\text{dec}$ for each black-box in the problem of finding an optimal ensemble (left) and the problem of finding an optimal neural network (right). Best seen in color. } \end{figure*} \subsection{Finding an Optimal Neural Network} \label{SB:NETSEXP} We aim to tune the hyper-parameters of a deep neural network. In this experiment, we consider the MNIST \cite{lecun2010mnist} dataset, which contains 60,000 images of $28 \times 28$ pixels of hand-written digits. We build the network using Keras. To train the networks we use ADAM with the default parameters \cite{kingma2014adam}. We have divided the dataset into 50,000 instances for training and 10,000 for validation. The hyper-parameters to adjust are: the number of hidden layers, the number of neurons in each layer, the learning rate, the dropout probability \cite{srivastava2014dropout}, the level of $\ell_1$ and $\ell_2$ regularization, and two parameters related to the codification of the neural network in a chip: the memory partition and the loop unrolling factor. See \cite{garrido2019predictive} for more details. The goal is to minimize the validation error and the prediction time of the network. The chosen constraint invalidates all networks that, when codified into a chip, result in an area greater than one square millimeter. The calculation of the area needed by each network is made using the hardware simulator Aladdin \cite{shao2014aladdin}.
Again, these objectives and constraint can be evaluated independently. The prediction time is measured as the ratio with respect to the prediction time of the fastest network (\emph{i.e.}, the smallest one). The right column of Fig. \ref{FIG:EXPREAL} shows the average Pareto front of each method after 50 and 100 evaluations. PESMOC$_\text{dec}$ is the method that obtains the Pareto front with the highest hyper-volume, followed by MESMOC+$_\text{dec}$ and PESMOC. We can also see that after 100 evaluations there is not much difference between MESMOC+$_\text{dec}$ and PESMOC$_\text{dec}$, or between MESMOC+ and PESMOC. However, the decoupled variants obtain significantly better results than the coupled ones while performing the same number of evaluations. We can also see that the performance of MESMOC and MESMOC$_\text{dec}$ is worse than or similar to that of RANDOM. Table \ref{TB:ENSEMBLEHV} displays the average hyper-volume of the Pareto front of each method. The highest hyper-volume is obtained by PESMOC$_\text{dec}$, closely followed by MESMOC+$_\text{dec}$. Finally, the number of evaluations performed by MESMOC+$_\text{dec}$ is displayed in Fig. \ref{FIG:REALEVALS}. We observe that most of the evaluations have been carried out on the black-box corresponding to the prediction error. This black-box is hence expected to be more difficult to optimize, which is why the proposed approach, MESMOC+$_\text{dec}$, focuses more on it. \section{Conclusions} \label{SEC:CONCLUSIONS} We have developed MESMOC+, a method for multi-objective Bayesian optimization with constraints. MESMOC+ selects the next point to evaluate as the one that is expected to reduce the most the entropy of the solution of the optimization problem in the function space, namely, the Pareto frontier. Since MESMOC+'s acquisition is expressed as a sum of acquisition functions, one per black-box, its computational cost is linear with respect to the number of black-boxes.
Moreover, it can be used in a decoupled evaluation setting. In our experiments we have observed that MESMOC+ is competitive with state-of-the-art methods for Bayesian optimization, while its cost per iteration is significantly smaller. The approximation of the acquisition function performed by MESMOC+ is also more accurate than that of existing methods. Furthermore, a decoupled evaluation setting shows that MESMOC+ can not only choose where to evaluate next, but also which black-box to evaluate. Finally, we have observed that sometimes the decoupled variant of MESMOC+ achieves significantly better results than standard MESMOC+. \subsubsection*{Acknowledgements} The authors gratefully acknowledge the use of the facilities of Centro de Computaci\'on Cient\'ifica (CCC) at Universidad Aut\'onoma de Madrid. The authors also acknowledge financial support from the Spanish Plan Nacional I+D+i, grants TIN2016-76406-P and PID2019-106827GB-I00 / AEI / 10.13039/501100011033. Daniel Fern\'andez-S\'anchez also acknowledges financial support from the Universidad Aut\'onoma de Madrid through Convocatoria de Ayudas para el fomento de la Investigaci\'on en Estudios de M\'aster-UAM 2019. \bibliographystyle{abbrv}
\subsection{Definitions} First, let us fix some notation: $\mathbf{W}$ denotes a workload, $\mathcal{W}$ denotes a set of workloads, and $\mathcal{M}$ denotes a mechanism; each mechanism takes as input a non-empty set of workloads and a privacy budget $\epsilon$. \begin{definition}[Non-interference] Let $Err_i$ be the expected error of analyst $i$. A mechanism $\mathcal{M}$ satisfies Non-Interference if for every analyst $i$ with preference workload $\mathbf{W}_i$ there does not exist an alternative workload $\mathbf{W}_i'$ such that $Err_i(\mathcal{M}(\epsilon, \mathcal{W})) > Err_i(\mathcal{M}(\epsilon, \mathcal{W}'))$ and, for at least one other analyst $j \neq i$, $Err_j(\mathcal{M}(\epsilon, \mathcal{W})) < Err_j(\mathcal{M}(\epsilon, \mathcal{W}'))$. \end{definition} \begin{definition}[Sharing incentive] Let $Err_i$ be the expected error of analyst $i$ and let $\mathcal{W}$ be the set of all analysts' workloads. A mechanism $\mathcal{M}$ satisfies the Sharing Incentive if for every analyst $i$ the following holds: $$Err_i(\mathcal{M}(\epsilon,\mathcal{W})) \leq Err_i(\mathcal{M}(\tfrac{\epsilon}{k},\{\mathbf{W}_i\})).$$ \end{definition} \subsection{Rolling example} \subsection{Proofs} \subsubsection{Independent workload strategy: Sharing Incentive} Using the formula for the expected error of the matrix mechanism, the error for each analyst under the independent workload strategy is $$\frac{2}{(\epsilon/k)^2} \|\mathbf{W}_i \|^2_1 \|\mathbf{W}_i\mathbf{W}_i^+ \|^2_F = \frac{2n}{(\epsilon / k)^2} \|\mathbf{W}_i \|^2_1. $$ We will use this in later proofs. Trivially, the independent workload strategy satisfies the Sharing Incentive, since its error is exactly the error the analyst would obtain using the mechanism on their own.
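The simplification above can be checked numerically: for a full-column-rank workload, $\mathbf{W}_i\mathbf{W}_i^+$ is an orthogonal projection, so $\|\mathbf{W}_i\mathbf{W}_i^+\|_F^2 = n$. A minimal sketch, taking $\|\cdot\|_1$ as the maximum column $L_1$ norm (the mechanism's sensitivity); the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k, eps = 5, 8, 4, 1.0
W = rng.normal(size=(m, n))          # full column rank with probability one

# W W^+ is the orthogonal projection onto the column space of W,
# so ||W W^+||_F^2 = trace(W W^+) = rank(W) = n
P = W @ np.linalg.pinv(W)
assert abs(np.linalg.norm(P, 'fro') ** 2 - n) < 1e-8

# ||W||_1 taken as the maximum column L1 norm (the L1 sensitivity)
W1 = np.abs(W).sum(axis=0).max()

err_full = 2.0 / (eps / k) ** 2 * W1 ** 2 * np.linalg.norm(P, 'fro') ** 2
err_simplified = 2.0 * n / (eps / k) ** 2 * W1 ** 2
assert abs(err_full - err_simplified) < 1e-6 * err_simplified
```

The two expressions agree because the Frobenius norm squared of a projection equals its trace, which equals its rank.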
\subsubsection{Unified workload strategy: Sharing Incentive} When an analyst uses the unified workload strategy on their own, they get exactly the same expected error as in the independent case, since the unified workload strategy is identical to the independent workload strategy when there is only one analyst. The same holds for the weighted unified workload strategy, as shown below. \\ Let $\mathbf{W}_i'$ be the weighted query matrix of analyst $i$, weighted by a scale of $\frac{1}{x}$. Then, when separating and computing on their own, the expected error is $$\frac{2}{(\epsilon/k)^2} \|\mathbf{W}_i' \|^2_1 \|\mathbf{W}_i\mathbf{W}_i'^+ \|^2_F = \frac{2}{(\epsilon/k)^2} \left\|\tfrac{1}{x} \mathbf{W}_i \right\|^2_1 \|x \mathbf{W}_i\mathbf{W}_i^+ \|^2_F = \frac{2}{(\epsilon/k)^2} \|\mathbf{W}_i \|^2_1 \frac{nx^2}{x^2} = \frac{2n}{(\epsilon/k)^2} \|\mathbf{W}_i \|^2_1. $$ For both of these mechanisms, consider the best-case scenario where all analysts have the same workload. The unified workload is then exactly the workload proposed by each analyst, and each analyst obtains the expected error $$\frac{2n}{\epsilon^2} \|\mathbf{W}_i \|^2_1, $$ which is simply the independent case but with the entire privacy budget. Note that we have also just proved that scaling the strategy matrix by an arbitrary constant does not change the error. This makes sense intuitively: if every analyst wants the same workload, we might as well compute answers for that one workload and give them to each analyst. We can use this as a lower bound on error for now, saying that at best, in the unified case without scaling, each analyst can get error that is $k^2$ times smaller. Now let us consider the ``Fair Workload strategy'' mechanism. In this mechanism, instead of scaling according to the Frobenius norm of a matrix, we let an optimizer choose the scaling in order to maximise the egalitarian social welfare.
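A hypothetical sketch of this ``Fair Workload'' idea: choose simplex weights that minimize the worst analyst's error, using the scale invariance just proved to normalize the weights. The error formula and the `worst_error` helper are illustrative assumptions, not a fixed design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, eps, k = 4, 1.0, 3
workloads = [rng.normal(size=(5, n)) for _ in range(k)]   # one workload per analyst

def err(A, W, eps):
    """Matrix-mechanism expected error (2/eps^2)||A||_1^2 ||W A^+||_F^2,
    with ||A||_1 taken as the maximum column L1 norm (the L1 sensitivity)."""
    sens = np.abs(A).sum(axis=0).max()
    return 2.0 / eps ** 2 * sens ** 2 * np.linalg.norm(W @ np.linalg.pinv(A), 'fro') ** 2

# Scale invariance: multiplying the strategy by any beta > 0 leaves every
# analyst's error unchanged, so normalizing the weights loses nothing.
A0 = np.vstack(workloads)
e0 = err(A0, workloads[0], eps)
assert abs(e0 - err(2.7 * A0, workloads[0], eps)) < 1e-6 * e0

def worst_error(alpha):
    # Stack the weighted workloads into one unified strategy matrix
    A = np.vstack([a * W for a, W in zip(alpha, workloads)])
    return max(err(A, W, eps) for W in workloads)

# Egalitarian objective: minimize the maximum analyst error over simplex
# weights; a softmax reparameterization keeps them positive, summing to one.
res = minimize(lambda t: worst_error(np.exp(t - t.max()) / np.exp(t - t.max()).sum()),
               np.zeros(k), method="Nelder-Mead")
assert res.fun <= worst_error(np.ones(k) / k) + 1e-9
```

The optimizer can only improve on the uniform weighting, which serves as the natural starting point.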
Now we note that if we consider the independent version of this mechanism (in order to find the error needed to prove the Sharing Incentive), it would simply optimize the scaling factor of the matrix for one individual to minimize their error. However, since we just proved that any arbitrary scaling doesn't change the error, it will effectively do nothing. Therefore all 4 versions of this mechanism have the same baseline error for the independent case.\\ The fact that scaling the entire matrix alone doesn't change the error is very helpful. Say we have some optimal weights for the unified workload strategy, $\alpha_1 \mathbf{W}_1 \cup \alpha_2 \mathbf{W}_2 \cup \dots \cup \alpha_k \mathbf{W}_k$; multiplying them all by some factor $\beta$ doesn't change any of the errors. Therefore we might as well scale them down so that $\sum_i \alpha_i = 1$. This reduces our problem to finding the optimal set of weights summing to one. \\ We also know the optimal weights for each individual analyst: their own workload with a weight of 1 and all the others with a weight of zero. \\ Thoughts on how to solve from here: the goal is to maximise min-max fairness. If everyone's individual workload is the best for them, and moving towards it is always better, the answer is simple: use a sort of water-filling redistribution algorithm until we hit some level of convergence. But that isn't always the case. We could have a case, for example, where analyst 1 chooses some workload and analyst 2 chooses some other workload which happens to yield a better strategy for analyst 1. An example is as follows: analyst 1 chooses the all-range workload, and analyst 2 chooses the workload that represents the H2 algorithm. H2, being specifically designed for that case, scores better than the workload strategy. \subsection{Defining non-triviality} The goal of a non-triviality condition is to rule out options that do not leverage the joint nature of the problem.
In this case, a definition of non-triviality should rule out mechanisms that answer each of the query sets independently of one another. Since the definition we use for the Sharing Incentive compares any mechanism to itself, it alone is not sufficient for the type of non-triviality we are looking for: it allows any mechanism that doesn't change at all when given the group setting instead of the singular one. \\ We considered the idea of Pareto-optimality as a definition of non-triviality. We still get the same issue: if our definition of Pareto optimality considers only the given inputs and the respective output space of the same mechanism, then it still allows for independent mechanisms. For example, consider the independent HDMM. If we assume that the optimizer for HDMM is perfect and picks the optimal strategy matrix for the given workload, then the independent HDMM, with respect to its possible outputs, always optimizes the sum of the utilities across all agents. This set of query answers that optimizes the sum of utilities must always be Pareto efficient, since if there were a set of query answers that didn't reduce any utility but increased the utility for at least one person, it would increase the sum of utilities, which contradicts our perfect-optimizer assumption. \\ Pareto-optimality seems to be a good starting point, but it can't compare the mechanism to itself. It needs to compare the mechanism in question to some baseline that we consider to be ``inefficient but reasonable''. We could say that it needs to be strictly better than some baseline mechanism. That is, relative to some naive solution, any efficient solution should achieve better expected error for each analyst. \\ Some slightly good news: if we assume that there is a unique point that optimizes min-max fairness and that our optimizer finds it, then the strategy at that point must be Pareto optimal. Why? Consider what would be required for the min-max fair point not to be Pareto optimal.
There must be a Pareto-dominating point where no analyst gets more error and at least one analyst gets strictly less error. If that one analyst was the analyst with maximum error already, then the point we found didn't optimize min-max fairness, since we can improve further by going to the Pareto-dominating point. If the analyst who gets less error is not the analyst with maximum error, then the min-max optimizing point is not unique, since the Pareto-dominating point has the same or less error for the analyst with the maximum error. If it has the same error, then it is an alternative point with the same min-max fairness value. If it has less error, then either this analyst remains the analyst with the worst error, in which case the new point dominates the old point in terms of min-max fairness as well, or some other analyst becomes the analyst with maximum error, in which case this is still a point that dominates the other in terms of min-max fairness. \subsection{Property definitions and proofs 3/16/2020} We would like a set of properties which, when satisfied by any mechanism, results in a mechanism that is ``fair'' within this context. We currently have 3 properties which seem to satisfy this criterion. First I'll enumerate those properties, considered in the space of matrix mechanisms, and show that they do not trivially imply each other; that is, show that no combination of the properties ensures the others. From there we can consider some techniques and how they satisfy such properties \par \subsubsection{Redefining Non-Interference} Let us redefine Non-Interference. Now, instead of caring whether an analyst can reduce others' utility by changing their workload, we only care whether they reduce another's utility below what it would be if they weren't there. More formally, as follows. \begin{definition}[Non-interference] Let $Err_i$ be the expected error of analyst $i$.
A mechanism $\mathcal{M}$ satisfies Non-Interference if for every analyst $i$ with preference workload $\mathbf{W}_i$ there does not exist an alternative workload $\mathbf{W}_i'$ such that $Err_i(\mathcal{M}(\epsilon, \mathcal{W})) > Err_i(\mathcal{M}(\epsilon, \mathcal{W}'))$ and, for at least one other analyst $j \neq i$, $Err_j(\mathcal{M}(\epsilon, \mathcal{W} \setminus \mathbf{W}_i)) < Err_j(\mathcal{M}(\epsilon, \mathcal{W}'))$. \end{definition} Let's begin with the Sharing Incentive. The Sharing Incentive, as the name suggests, incentivizes any individual to join a group of analysts rather than answer their queries alone. If we assume the privacy budget $\epsilon$ is a public good to be distributed, we can give each of the $k$ analysts their fair share of the privacy budget, $\epsilon/k$. We then give each analyst two options: they can either use their fair share of the budget on their own, using a prescribed DP mechanism to answer their queries, or they can add their budget and queries to a collective pool in order to have all the queries of the collective answered under the collective privacy budget. The goal of the Sharing Incentive is to guarantee that joining the pool will result in at least as much utility for the agent as answering their queries on their own. Formally, we define that as follows. We define a mechanism $\mathcal{M}$ to be a function that takes as parameters a privacy budget $\epsilon$ and a non-empty set of workloads, and returns a strategy matrix $\mathbf{A}$ to answer the queries. Likewise, we define the expected error of a mechanism to be its expected mean-squared error. \begin{definition}[Sharing incentive] Let $Err_i$ be the expected error of analyst $i$ and let $\mathcal{W}$ be the set of all analysts' workloads. Let $\mathbf{W}_i$ be the workload of analyst $i$. A mechanism $\mathcal{M}$ satisfies the Sharing Incentive if for every analyst $i$ the following holds:
$$Err_i(\mathcal{M}(\epsilon,\mathcal{W})) \leq Err_i(\mathcal{M}(\tfrac{\epsilon}{k},\{\mathbf{W}_i\}))$$ \end{definition} We note that while the Sharing Incentive is a good incentive for joining the collective group structure, it does not guarantee that the mechanism itself is efficient. For example, consider the mechanism that simply returns an all-zero matrix. Since this mechanism doesn't actually answer any of the queries, we say it has infinite error. Likewise, since any analyst experiences the same error under any conditions or parameters, it trivially satisfies the Sharing Incentive. To that end we consider a non-triviality constraint. Since all mechanisms considered are in the space of matrix mechanisms, we consider Pareto-efficiency with respect to the matrix mechanisms (or possibly p-identity). We define Pareto-efficiency as follows. \begin{definition}[Pareto-Efficiency (matrix mechanism space)] A mechanism $\mathcal{M}(\epsilon, \mathcal{W})$ is Pareto-efficient if it produces an output $\mathbf{A}$ such that there is no alternative strategy $\mathbf{A}' \in \mathbb{R}^{m \times n}$ such that every analyst gets at least as much utility under $\mathbf{A}'$ compared to $\mathbf{A}$ and at least one analyst gets more utility under $\mathbf{A}'$ compared to $\mathbf{A}$. \end{definition} We are now going to show that the two previously defined criteria do not imply each other in the space of matrix mechanisms. First, we show that there exists a mechanism which satisfies Pareto optimality but not the Sharing Incentive. \begin{theorem} Pareto optimality does not guarantee the Sharing Incentive. \end{theorem} Consider the following mechanism, which takes in a set of workloads and a privacy budget. Step 1: take the first workload and answer its queries using the entire privacy budget under the optimal full-rank strategy. Step 2: publish the results.
This mechanism is Pareto-efficient because it maximizes utility for the first analyst, and since the optimal strategy is unique \cite{Matrix}, any alternative strategy would reduce the utility of the first analyst. This mechanism may not satisfy the Sharing Incentive in cases where the second analyst has a workload that is significantly different from the first analyst's: the mechanism will then optimize for the first analyst at the expense of the second. We can similarly construct a mechanism which satisfies the Sharing Incentive but is not Pareto-efficient. \begin{theorem} The Sharing Incentive does not guarantee Pareto optimality. \end{theorem} Here we construct a mechanism which satisfies the Sharing Incentive but is not Pareto-efficient. The mechanism splits the budget evenly among the analysts and answers their workloads independently. This is exactly the independent case, so it must satisfy the Sharing Incentive; however, if every analyst has the same workload this is grossly inefficient, since there exists a strategy which answers that single shared workload once using the entire budget. \subsection{Achieving Pareto optimality} Here we state and lightly prove two well-known results about social welfare functions and Pareto optimality. \begin{theorem} If a mechanism $\mathcal{M}$ maximises the utilitarian social welfare amongst all analysts it is Pareto-efficient. \end{theorem} Let us prove this by contradiction. Assume a strategy $A$ output by mechanism $\mathcal{M}$ maximizes the utilitarian social welfare, that is, $A$ maximises the total utility summed over all agents. Now assume there is an alternative strategy $A'$ that Pareto dominates $A$: all agents receive at least as much utility under $A'$ as under $A$, and at least one agent receives strictly more utility.
Since no agent loses utility under $A'$ compared to $A$, the total welfare under $A'$ must be strictly greater than that under $A$, which is a contradiction since $A$ maximises the total welfare. \par Likewise we can prove the same for the egalitarian social welfare if the maximiser is unique. \begin{theorem} If a mechanism $\mathcal{M}$ outputs a strategy $A$ that maximises the egalitarian social welfare amongst all analysts and that strategy is the unique maximiser, then it is Pareto-efficient. \end{theorem} Again we prove this by contradiction. Assume that a mechanism $\mathcal{M}$ outputs a strategy $A$ that uniquely maximises the egalitarian social welfare, that is, it maximises the minimum utility amongst all analysts. Let $x$ be the agent with the minimum utility under strategy $A$. Assume that $A$ is Pareto dominated by an alternative strategy $A'$, so at least one agent receives more utility under $A'$ than under $A$. If that agent is $x$ then we have a contradiction, since the egalitarian social welfare under $A'$ would be greater than that under $A$. If the agent who receives more utility under $A'$ is not $x$, then since no agent can lose utility this solution must have the same egalitarian social welfare as $A$. But then $A$ is not the unique maximiser of the egalitarian social welfare, again a contradiction. \subsubsection{Pareto Optimality in the matrix mechanism space} It is clear that, when considering the full space of matrix mechanisms, any mechanism we derive based on HDMM will be insufficient to achieve Pareto optimality. To show this we only need the case of a single analyst who asks a degenerate set of queries whose optimal strategy does not lie in the p-identity space. Since the optimal full-rank solution is unique and does not lie in the p-identity space, there must be a solution which Pareto dominates any HDMM-based solution.
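The error functional underlying all of these utility comparisons can be computed directly. The sketch below uses the standard matrix-mechanism expected error under Laplace noise (the same form that appears in \cref{sec:proofs}) and checks the Sharing Incentive numerically for the toy case of $k$ analysts who all hold the identity workload; the numbers are an illustration, not part of any proof.

```python
import numpy as np

def expected_error(W, A, eps):
    # Err(W, A) = (2 ||A||_1^2 / eps^2) * ||W A^+||_F^2, where ||A||_1 is
    # the largest L1 column norm of the strategy A (its sensitivity).
    sens = np.abs(A).sum(axis=0).max()
    return 2 * sens**2 / eps**2 * np.linalg.norm(W @ np.linalg.pinv(A), 'fro')**2

n, k, eps = 8, 4, 1.0
W = np.eye(n)  # k analysts who all hold the identity workload
err_pooled = expected_error(W, np.eye(n), eps)      # pooled budget eps
err_alone = expected_error(W, np.eye(n), eps / k)   # fair share eps / k
# Pooling is a k^2 improvement here, so the Sharing Incentive holds.
assert err_pooled <= err_alone
assert np.isclose(err_alone, k**2 * err_pooled)
```

Because the budget enters the error quadratically, pooling identical workloads improves every analyst's error by a factor of $k^2$ in this example.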
\subsubsection{Pareto optimality in the p-identity space} If we instead consider Pareto optimality within the p-identity space, then we get conditions under which Pareto optimality may be achieved. As stated previously, any mechanism which maximises the utilitarian social welfare is Pareto optimal; the same goes for the egalitarian social welfare provided the maximiser is unique.\\ \textbf{SIDENOTE}: \\ Just as we created Fair HDMM by changing the optimization objective to the egalitarian social welfare (after some scaling), we can create a utilitarian version of HDMM in the same way. Since a sum of convex functions is convex, we can simply change the optimization task to maximize the sum of all agents' utilities; the resulting gradient is the sum of the individual gradients. Since optimizing the utilitarian social welfare enforces Pareto efficiency, this mechanism is Pareto-efficient. \subsection{Non-Interference} Non-Interference, as stated above, is a property of the privacy mechanism which states that no individual acting in their own interest can increase their utility at the cost of others. Yikai has shown that there are cases in which lying about your workload can reduce the utility of all agents. \\ First we show that satisfying Non-Interference implies neither Pareto efficiency nor the Sharing Incentive. We do this by giving examples of poorly designed mechanisms that fail one of the two by design but satisfy Non-Interference. Let us start with the previously discussed mechanism which answers no queries: it clearly satisfies Non-Interference and the Sharing Incentive but is not Pareto-efficient. This gives rise to the following two theorems. \begin{theorem} Non-Interference does not imply Pareto efficiency. \end{theorem} \begin{theorem} Non-Interference and the Sharing Incentive together do not imply Pareto efficiency. \end{theorem} The second mechanism is a slight modification of the standard HDMM.
In this mechanism we take as input the workloads of all analysts but, instead of using them, send a random workload to HDMM. Since the actual inputs do not affect the output of the mechanism it satisfies Non-Interference, but since the expected utility is essentially random it does not satisfy the Sharing Incentive. Therefore the following theorem holds. \begin{theorem} Non-Interference does not imply the Sharing Incentive. \end{theorem} What we have effectively learned is that we can achieve the Sharing Incentive by ignoring the group structure, and we can achieve Non-Interference by ignoring the input. A mechanism that broadly does both achieves both properties but cannot be Pareto-efficient. Now suppose we had some magic mechanism that knows the analysts' true intentions, and therefore does not use the input and can simply choose any Pareto-efficient point: can this mechanism satisfy the Sharing Incentive? This boils down to asking whether, from the set of Pareto-efficient solutions, there is always a solution which satisfies the Sharing Incentive. In the single-analyst case this mechanism simply optimizes for that individual with their $1/k$ share of the budget. \section{First idea} Let the current error of analyst $i$ be $U_i$ and let the expected error in the independent case be $\alpha_i$. The first optimization idea is to maximize the worst-case slack, $\max \min_i (\alpha_i - U_i)$. We then transform this into a minimization problem, $\min \max_i \max(U_i - \alpha_i, 0)$. \subsection{Alternative HDMM (Name in progress)} Consider an alternative HDMM that optimizes the following social welfare function: \[ \min_{\{ \mathbf{A} = \mathbf{A}(\Theta) \,|\, \Theta \in \mathbb{R}^{p \times n}_+\}} \max_{\mathbf{W} \in \mathcal{W}} \|\mathbf{WA^+}\|_F^2- \|\mathbf{WA_W^+}\|_F^2, \] where $\mathbf{A_W}$ is the strategy matrix chosen for $\mathbf{W}$ by the independent HDMM. The objective of this function is to specifically achieve the Sharing Incentive.
We note that the objective value is non-positive exactly when every individual receives at least the utility mandated by the Sharing Incentive. Therefore, so long as there exists a strategy matrix satisfying the Sharing Incentive for the given workloads, this mechanism will satisfy the Sharing Incentive. \subsubsection{Proof of Pareto efficiency} Assume there is a unique strategy matrix which maximises this utility function; call it $\mathbf{A}$. Assume there is an alternative strategy matrix $\mathbf{A'}$ which Pareto dominates $\mathbf{A}$. Since it is Pareto dominant, no individual loses utility while at least one individual gains utility. If that individual is not the worst off under the social welfare function, then $\mathbf{A}$ is not a unique optimizer. If the individual who gains utility is the worst off, then $\mathbf{A}$ does not optimize the function. Either way we have a contradiction. Therefore, if the optimizer of this function is perfect, it chooses a Pareto-optimal strategy matrix. \subsection{Experiments} The following examples show cases where the Weighted Unified HDMM and Fair HDMM do not satisfy the Sharing Incentive. The first analyst has the all-range workload and the second analyst has the total workload, both with $n=10$. \includegraphics[scale = 0.5]{fig/si_wu.pdf} \includegraphics[scale = 0.5]{fig/si_fh.pdf} There are also cases in which the Weighted Unified HDMM and Fair HDMM do not satisfy Non-Interference. One case is when Analyst 1 queries all $n-1$ norm and Analyst 2 queries IdentityTotal. When Analyst 2 changes to Identity, Non-Interference is violated. \includegraphics[scale = 0.5]{fig/ni_wu1.pdf} \includegraphics[scale = 0.5]{fig/ni_fh1.pdf} The other case is when Analyst 1 has the prefix workload and Analyst 2 queries IdentityTotal. When Analyst 2 changes to Identity, Non-Interference is violated.
\includegraphics[scale = 0.5]{fig/ni_wu2.pdf} \includegraphics[scale = 0.5]{fig/ni_fh2.pdf} \subsection{Egalitarian HDMM} In order to satisfy the Sharing Incentive, we tried an alternative Fair HDMM. We optimize \[ \min_{\{ \mathbf{A} = \mathbf{A}(\Theta) \,|\, \Theta \in \mathbb{R}^{p \times n}_+\}} \max_{\mathbf{W} \in \mathcal{W}} \|\mathbf{WA^+}\|_F^2- k^2\|\mathbf{WA_W^+}\|_F^2 \] where $\bm{A_W}$ is the strategy matrix for $\bm{W}$ optimized as in the independent HDMM. Thus, as long as the optimal value is negative, we satisfy the Sharing Incentive. Moreover, if there exists a set of workloads for which the Sharing Incentive is not satisfied then, assuming the optimization is ideal, there is no common strategy in the search space of HDMM which satisfies the Sharing Incentive. Whether such a set of workloads exists is still left to investigate. In our experiments, we find that all the cases that fail the Sharing Incentive under Weighted Unified HDMM and Fair HDMM do satisfy it under the new optimization goal. We use the same setting of all-range and total workloads for the alternative Fair HDMM and the result is below. \includegraphics[scale = 0.5]{fig/si_fa.pdf} This shows that the optimization can resolve this issue in many cases and is probably the optimal way to ensure the Sharing Incentive is satisfied. However, we also note that, when using this method, the errors of some analysts can be significantly larger than those under the original Fair HDMM and Weighted Unified HDMM.
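Assuming the optimization is ideal, the sign of this objective certifies the Sharing Incentive. A small numpy check on toy workloads (each analyst's own normalized workload stands in for its independent HDMM strategy, which is a simplification, and the stacked workloads serve as the shared strategy):

```python
import numpy as np

def recon_err(W, A):
    # ||W A^+||_F^2 after rescaling A to sensitivity 1
    A = A / np.abs(A).sum(axis=0).max()
    return np.linalg.norm(W @ np.linalg.pinv(A), 'fro')**2

n = 4
W1, W2 = np.eye(n), np.ones((1, n))   # identity and total workloads
workloads = [W1, W2]
k = len(workloads)
A = np.vstack(workloads)              # candidate shared strategy
objective = max(recon_err(W, A) - k**2 * recon_err(W, W) for W in workloads)
# a negative value means every analyst beats their independent error
assert objective < 0
```

Here both analysts end up strictly better off than in the independent case, so the negative objective value certifies the Sharing Incentive for this instance.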
We can also optimize \[ \min_{\{ \mathbf{A} = \mathbf{A}(\Theta) \,|\, \Theta \in \mathbb{R}^{p \times n}_+\}} \max_{\mathbf{W} \in \mathcal{W}} \frac{\|\mathbf{WA^+}\|_F^2}{k^2\|\mathbf{WA_W^+}\|_F^2}, \] which is equivalent to optimizing \[ \min_{\{ \mathbf{A} = \mathbf{A}(\Theta) \,|\, \Theta \in \mathbb{R}^{p \times n}_+\}} \max_{\mathbf{W} \in \mathcal{W}} \frac{\|\mathbf{WA^+}\|_F^2}{\|\mathbf{WA_W^+}\|_F^2}. \] This also ensures the Sharing Incentive but with a different, and probably better, distribution of error. \includegraphics[scale = 0.5]{fig/si_fa2.pdf} It may be worth trying to define the weight as $\|\mathbf{WA_W^+}\|_F^2$ in the Weighted Unified HDMM. In addition, the examples for which Non-Interference is not satisfied under Weighted Unified HDMM and Fair HDMM also violate Non-Interference here. One improvement is that this prevents the cases where the change of one analyst's workload causes a violation of the Sharing Incentive. \subsection{Utilitarian HDMM} We have also considered a utilitarian HDMM, optimizing \[\min_{\{ \mathbf{A} = \mathbf{A}(\Theta) \,|\, \Theta \in \mathbb{R}^{p \times n}_+\}}\sum_{\mathbf{W}\in \mathcal{W}}\|\mathbf{WA^+}\|_F^2.\] This turns out to be equivalent to the unified workload mechanism. For any $\bm{W}$ of $m$ rows, we have \[\|\mathbf{WA^+}\|_F^2 = \sum_{i=1}^m \|\bm{W}^{(i)}\bm{A^+}\|^2, \] where $\bm{W}^{(i)}$ is the $i$th row of $\bm{W}$. Since $\bm{W}_U$ is the vertical stack of all $\bm{W} \in \mathcal{W}$, letting $k$ be the number of workloads and $M = \sum_i m_i$ be the total number of rows, we have \begin{align*} \|\bm{W}_U\bm{A^+}\|_F^2 &= \sum_{i=1}^M \|\bm{W}_U^{(i)}\bm{A^+}\|^2\\ &=\sum_{i=1}^k\sum_{j=1}^{m_i}\|\bm{W}_i^{(j)}\bm{A^+}\|^2\\ &=\sum_{i=1}^k\|\bm{W}_i\bm{A^+}\|_F^2. \end{align*} Thus, the optimization goals coincide. \subsection{Weighted Utilitarian optimization} The optimization objective is a modifiable parameter in our algorithm as long as it is convex.
We do not yet know whether there is a single best loss function. One objective, which can be seen as a tradeoff between the egalitarian and utilitarian ones, is to optimize the sum of ratios: \[\min_{\{ \mathbf{A} = \mathbf{A}(\Theta) \,|\, \Theta \in \mathbb{R}^{p \times n}_+\}}\sum_{\mathbf{W}\in \mathcal{W}}\frac{\|\mathbf{WA^+}\|_F^2}{\|\mathbf{WA_W^+}\|_F^2}.\] \subsection{Ideas for Proof} Here is an idea for proving the group sharing incentive. It may be helpful but I have not carefully thought it through. The following may be a sufficient condition for the group sharing incentive. Consider 3 analysts with 3 arbitrary workloads $\bm{W}_1$, $\bm{W}_2$, and $\bm{W}_3$, and a proposed mechanism $M$. Suppose both Analysts 1 and 2 get higher error in $M(\bm{W}_1,\bm{W}_2,\bm{W}_3)$ with budget $\epsilon$ than in $M(\bm{W}_1,\bm{W}_2)$ with budget $\frac{2}{3}\epsilon$. Let the absolute difference of normalized error for analyst $i$ be $d_i$, and without loss of generality suppose $d_1 > d_2$, meaning Analyst 1 is worse off by more. Then, by changing the workload of Analyst 2 to $\bm{W}_1$, both Analysts 1 and 2 still get higher error in $M(\bm{W}_1,\bm{W}_1,\bm{W}_3)$ with budget $\epsilon$ than in $M(\bm{W}_1,\bm{W}_1)$ with budget $\frac{2}{3}\epsilon$. In the case of 10.4, we can define the normalized error as \[\frac{\|\mathbf{WA^+}\|_F^2}{\|\mathbf{WA_W^+}\|_F^2}.\] If this statement is true for any 3 arbitrary workloads (with the same number of columns), it may not be hard to prove the following: for analyst groups $G_1$ and $G_2$, if the inclusion of $G_2$ causes every analyst in $G_1$ to have larger error, we can pick the workload $\bm{W}^*$ of the analyst with the maximum difference of normalized error (the most worse off); then, if we change the workload of every analyst in $G_1$ to $\bm{W}^*$ and leave the workloads in $G_2$ unchanged, the inclusion of $G_2$ will still make every analyst in $G_1$ worse off.
If the above is true, proving the cases where a group of analysts share the same workload will be sufficient. This may even reduce to the simple sharing incentive if the following holds. As before, say the inclusion of $G_2$ makes every analyst in $G_1$ worse off. Then there exists a workload $\bm{W}^*$ of some analyst in $G_2$ such that, after changing every workload in $G_2$ to $\bm{W}^*$, the inclusion of $G_2$ still makes every analyst in $G_1$ worse off. These conditions may or may not imply one another; proving any one of them would nevertheless help the overall proof. \subsection{Experiments} We conduct our experiments in 5 different modes: 2 baselines, Independent HDMM and Unified HDMM, and 3 variants of Fair HDMM: Fairdiff, Fairmax, and Fairsum. Fairdiff is the additive egalitarian HDMM described above. It optimizes \[ \min_{\{ \mathbf{A} = \mathbf{A}(\Theta) \,|\, \Theta \in \mathbb{R}^{p \times n}_+\}} \max_{\mathbf{W} \in \mathcal{W}} \|\mathbf{WA^+}\|_F^2- k^2\|\mathbf{WA_W^+}\|_F^2, \] where $k$ is the number of analysts. It is called Fairdiff as it optimizes the maximum difference between the current error and the independent error. Fairmax changes the difference to a ratio. It optimizes \[ \min_{\{ \mathbf{A} = \mathbf{A}(\Theta) \,|\, \Theta \in \mathbb{R}^{p \times n}_+\}} \max_{\mathbf{W} \in \mathcal{W}} \frac{\|\mathbf{WA^+}\|_F^2}{\|\mathbf{WA_W^+}\|_F^2}. \] Both are egalitarian methods using min-max optimization and should behave similarly. Fairsum is somewhat different: it is the Weighted Utilitarian HDMM described above, using the inverse of the independent error as weights. It optimizes the sum of ratios, \[\min_{\{ \mathbf{A} = \mathbf{A}(\Theta) \,|\, \Theta \in \mathbb{R}^{p \times n}_+\}}\sum_{\mathbf{W}\in \mathcal{W}}\frac{\|\mathbf{WA^+}\|_F^2}{\|\mathbf{WA_W^+}\|_F^2}.\]
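On a fixed candidate strategy the three objectives can be evaluated directly. The sketch below uses the identity/total pair as a toy instance; the actual mechanisms optimize these quantities over $\Theta$, and the independent strategies are again approximated by the normalized workloads themselves (an assumption).

```python
import numpy as np

def recon_err(W, A):
    # ||W A^+||_F^2 with A rescaled to sensitivity 1
    A = A / np.abs(A).sum(axis=0).max()
    return np.linalg.norm(W @ np.linalg.pinv(A), 'fro')**2

n = 4
workloads = [np.eye(n), np.ones((1, n))]      # identity and total
k = len(workloads)
A = np.vstack(workloads)                      # candidate shared strategy
cur = np.array([recon_err(W, A) for W in workloads])
ind = np.array([recon_err(W, W) for W in workloads])
fairdiff = np.max(cur - k**2 * ind)   # egalitarian, additive
fairmax = np.max(cur / ind)           # egalitarian, ratio
fairsum = np.sum(cur / ind)           # weighted utilitarian
# ratio below k^2 (equivalently, fairdiff < 0) certifies sharing incentive
assert fairdiff < 0 and fairmax < k**2
```

Comparing the three values on the same strategy makes the relationship explicit: Fairmax and Fairsum aggregate the same per-analyst ratios by max and by sum, while Fairdiff works on budget-scaled differences.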
\end{document} \section{Proofs of Properties}\label{sec:proofs} \subsection{Utilitarian Social Welfare} Here we prove that the Utilitarian HDMM, which vertically stacks the workload matrices, is equivalent to directly optimizing the utilitarian social welfare objective. \begin{theorem} \label{thrm:Utilitarian} Optimizing on the stacked workload is equivalent to optimizing the utilitarian social welfare objective. \end{theorem} For any $\bm{W}$ of $m$ rows, the objective function is \[\|\mathbf{WA^+}\|_F^2 = \sum_{i=1}^m \|\bm{W}^{(i)}\bm{A^+}\|^2, \] where $\bm{W}^{(i)}$ is the $i$th row of $\bm{W}$. Let $\bm{W}_U$ be the vertical stack of all $\bm{W} \in \mathcal{W}$, let $k$ be the number of workloads, and let $M = \sum_i m_i$ be the total number of rows. We have \begin{align*} \|\bm{W}_U\bm{A^+}\|_F^2 &= \sum_{i=1}^M \|\bm{W}_U^{(i)}\bm{A^+}\|^2\\ &=\sum_{i=1}^k\sum_{j=1}^{m_i}\|\bm{W}_i^{(j)}\bm{A^+}\|^2\\ &=\sum_{i=1}^k\|\bm{W}_i\bm{A^+}\|_F^2, \end{align*} which is the utilitarian social welfare objective. \subsection{Proof of \cref{thrm:waterfilling}} We will prove that the HDMM waterfilling mechanism satisfies the Sharing Incentive and Non-Interference by instead proving a stronger guarantee: increasing the number of analysts cannot decrease the utility experienced by any single analyst. This property ensures that the mechanism satisfies both. \par Let $\bm{A}$ be the strategy matrix produced by HDMM waterfilling for $k$ analysts and $\bm{A'}$ the strategy matrix for $(k+1)$ analysts (one more than the previous case). Each analyst has budget $\epsilon$. Each analyst's strategy matrix $\bm{A}_i$ is a submatrix of $\bm{A}$. \par First let us prove a few properties of HDMM waterfilling. \begin{lemma} The introduction of an additional analyst to the HDMM waterfilling mechanism can only increase the sensitivity of the final strategy matrix by 1.
\end{lemma} We first note that the strategy matrix produced by HDMM waterfilling in the single-analyst case is exactly the strategy used by the standard HDMM mechanism. Since this strategy is in the p-identity space, all of its columns have L1 norm 1. When we add an additional analyst, each of the new analyst's queries may either be added to an existing bucket or placed in a new bucket. Consider a query $v$. If $v$ belongs to an existing bucket with representative query $e$ and weight $w$, then $v$ is added to the bucket and the new weight is $\norm{v} + w$, so the bucket is represented in the final matrix as $(\norm{v} + w)e$. Since $v$ belongs to the bucket it is by definition equal to $\norm{v}e$, so the final represented query equals the existing query plus the new query: $(\norm{v} + w)e = \norm{v}e + we$. The column norm of the matrix is therefore raised by at most $\norm{v}$. Likewise, if the query $v$ forms its own bucket it is merely appended to the final strategy matrix as-is, again increasing the column norm by at most $\norm{v}$. Since all of the new queries are added to the final strategy matrix either by appending or by addition, the overall increase to its column norm equals the column norm of the strategy matrix chosen by the additional analyst, which as stated above is 1.
We can represent the unified strategy matrix under HDMM waterfilling as a matrix of representative queries $\bm{A}$ multiplied by a scaling matrix of weights $\bm{D}$. Let $\bm{A}$ and $\bm{D}$ be the unified strategy and weight matrices under HDMM waterfilling with all $k$ analysts, and let $\bm{A'}$ and $\bm{D'}$ be the corresponding matrices with $k-1$ analysts. As stated before we have $\|\bm{DA}\|_1 =\sum_i\|\bm{A}_i\|_1 = k$ and similarly $\|\bm{D'A'}\|_1 = k-1$. Let $\bm{W}$ be the workload of any analyst present in both cases. The error experienced by that analyst satisfies \begin{align*} Err(\mathbf{W},\mathbf{A}) &=\frac{2}{k^2\epsilon^2}\|\bm{DA}\|_1^2 \|\bm{W(DA)^+}\|^2_F\\ &= \frac{2}{k^2\epsilon^2}k^2\|\bm{WA^+D^+}\|^2_F\\ &=\frac{2}{\epsilon^2}\|\bm{WA^+D^+}\|^2_F\\ &\leq \frac{2}{\epsilon^2}\|\bm{W}\bm{A'^+D'^+}\|^2_F\\ &= \frac{2}{(k-1)^2\epsilon^2}\|\bm{D'A'}\|_1^2\|\bm{W}\bm{A'^+D'^+}\|^2_F\\ &=Err(\mathbf{W},\mathbf{A'}). \end{align*} The inequality holds because $\bm{A'}$ is a submatrix of $\bm{A}$: every row of $\bm{D'A'}$ is either a row of $\bm{DA}$ or a portion of one (a multiple with positive ratio less than 1), so reconstructing $\bm{W}$ from $\bm{DA}$ can only be more accurate, i.e. $\|\bm{W}\bm{A^+D^+}\|^2_F \leq \|\bm{W}\bm{A'^+D'^+}\|^2_F$ \dap{TODO: confirm this?} \subsection{Proof of \cref{thrm:waterfilling} generic version} We will prove that waterfilling mechanisms satisfy the Sharing Incentive and Non-Interference by instead proving a stronger guarantee.
We will prove that increasing the number of analysts cannot decrease the utility experienced by any single analyst. This property assures that the mechanism satisfies both properties. \par Let $\bm{A}$ be the strategy queries produced by a waterfilling mechanism for $k$ analysts and $\bm{A'}$ the strategy queries for $(k+1)$ analysts (one more than the previous case). The $i$th analyst has budget $s_i\epsilon$. Each analyst's strategy queries $\bm{A}_i$ are a subset of $\bm{A}$. \par First let us prove a few properties of waterfilling mechanisms. \begin{lemma} The introduction of an additional analyst with weight $s'$ to the collective of a waterfilling mechanism increases the sensitivity of the query set by exactly $s'$. \end{lemma} \begin{proof} When a new analyst is added to the collective, each of their queries either creates a new bucket or joins an existing bucket. If the query creates a new bucket, it becomes part of the final strategy query set, adding its weight to the overall sensitivity. If the query joins an existing bucket, the weight of that bucket increases by the weight of the query, again increasing the overall sensitivity by that weight. Since the weights of an analyst's queries must sum to their overall weight, the increase in sensitivity equals the overall weight of the analyst. \end{proof} We note that since each analyst is entitled to $s_i\epsilon$ of the budget and the sensitivity of the strategy query set equals the sum of the analysts' weights, the scale of the noise used to answer the query set is the same regardless of the number of analysts in the collective: $$ \frac{\norm{\bm{A}}_1}{\sum_{i=1}^k s_i \epsilon} = \frac{\sum_{i=1}^k s_i}{\sum_{i=1}^k s_i \epsilon} = \frac{1}{\epsilon}.$$ We now consider two different perspectives when an additional analyst, Bob, is added to the collective.
We first consider an analyst already in the collective, Alice, and how her utility is affected by an additional analyst joining. We then consider the joining analyst and how their utility changes as they join the collective. \par Consider Alice, an analyst already in the collective. If the new analyst shares no queries with Alice then her queries remain unchanged and the noise added stays the same, so her query answers are utility neutral. If Bob shares queries with Alice then the weights of Alice's queries increase. This reduces the amount by which Alice's queries need to be re-scaled to reconstruct her original strategy queries, thereby reducing her error. Since adding Bob to the collective is either utility neutral or positive for Alice, the mechanism satisfies Non-Interference. \par Likewise consider Bob, an analyst about to join the collective. From Bob's perspective, adding Bob to the existing collective is equivalent to adding all the members of the existing collective to Bob's collective (of just himself). Just as for Alice, adding analysts to Bob's collective is either utility neutral or positive. Since Bob's collective of just himself is exactly the independent case, and joining the collective is either utility neutral or positive, the mechanism satisfies the Sharing Incentive. \section{The Waterfilling Mechanism} \label{sec:waterfilling} The Waterfilling Mechanism is an example of a select-first mechanism which satisfies all three of the desiderata. We first present a simplified example of the Waterfilling Mechanism, seen in \cref{fig:waterfill_animation}, and then discuss the full Waterfilling Mechanism.
\par \begin{figure*}[t] \centering \includegraphics[width=0.7\textwidth]{fig/waterfill_animation.png} \caption{Simplified Waterfilling Mechanism} \label{fig:waterfill_animation} \end{figure*} In this example there are three analysts, Alice, Bob, and Carol, each given the same share of the budget, $\frac{1}{3}$. Alice asks only the blue query and assigns all of her share to that query. Bob asks the red, blue, and green queries and assigns equal amounts of his share of the privacy budget to each query. Carol asks the blue and green queries and, like Bob, assigns her share of the budget equally across her queries. The Waterfilling Mechanism then buckets similar queries (in this example by bucketing the red, blue, and green queries) together with their associated shares of privacy budget. Once all the queries are assigned to buckets, the mechanism answers a single query for each bucket using the entire privacy budget in that bucket. The mechanism then uses those answered queries to reconstruct the analysts' original queries. In \cref{fig:waterfill_animation}, we can see that since the red query was asked by only one analyst it receives the same amount of privacy budget as if it were asked independently. Meanwhile, since every analyst asked the blue query, it is answered once using the pooled privacy budget contributions of all three analysts, resulting in a more accurate estimate than if each analyst had independently answered the blue query, even if they subsequently shared their results with one another. \par The example shown in \cref{fig:waterfill_animation} is a simplified version of the Waterfilling Mechanism. The Waterfilling Mechanism as defined in Algorithm \ref{alg:waterfilling} has three key differences. The first is the selection step. In the simplified Waterfilling Mechanism, analysts' queries are bucketed directly; in practice a selection step is performed first.
This selection step takes in each analyst's workload and outputs a strategy workload that may be more efficient to answer directly. The second key difference is sensitivity scaling. The simplified example assumes that the sensitivity of each query is 1 and that the three queries overlap, making Alice's sensitivity 1, Bob's 3, and Carol's 2. To avoid such scaling issues, the Waterfilling Mechanism scales each analyst's strategy workload to have sensitivity 1 prior to the bucketing step. The third key difference is in the bucketing step. In the simplified example we only bucketed identical queries. Since the selection step introduces some numerical instability, we allow queries which are approximately equal to be placed in the same bucket. We introduce an additional parameter $\tau$ which determines how much two queries may deviate and still be assigned to the same bucket: in Algorithm \ref{alg:waterfilling} we allow two queries with cosine similarity greater than $1-\tau$ to share a bucket. Once the buckets are filled, the query answered is the unit vector representing the average query in the bucket. All of the proofs below assume that $\tau =0$ and may not hold for larger values of $\tau$. We set $\tau = 10^{-3}$ in experiments and empirically evaluate the performance of the Waterfilling Mechanism as $\tau$ changes in \cref{sec:Experiments_Tolerance_results}.
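The collection step can be sketched as follows: a simplified numpy rendering with the selection step omitted, in which the three-analyst example mirrors \cref{fig:waterfill_animation} with the red, blue, and green queries represented as standard basis vectors (an illustrative assumption).

```python
import numpy as np

def collect(strategies, shares, tau):
    # Bucket query rows greedily: a row joins the first bucket whose running
    # sum has cosine similarity >= 1 - tau with it, else starts a new bucket.
    buckets = []
    for s, A in zip(shares, strategies):
        for v in s * A:                       # rows scaled by budget share
            for B in buckets:
                u = np.sum(B, axis=0)
                # v.u >= (1 - tau) |v||u|  <=>  cosine similarity >= 1 - tau
                if v @ u >= (1 - tau) * np.linalg.norm(v) * np.linalg.norm(u):
                    B.append(v)
                    break
            else:
                buckets.append([v])
    # one row per bucket: the sum of the bucket's member rows
    return np.array([np.sum(B, axis=0) for B in buckets])

# Alice: blue; Bob: red, blue, green; Carol: blue, green (unit rows, sens 1)
e = np.eye(3)                                  # columns: red, blue, green
A = collect([e[[1]], e, e[[1, 2]]], shares=[1/3, 1/3, 1/3], tau=1e-3)
sens = np.abs(A).sum(axis=0).max()             # equals the sum of shares
```

On this input the sketch produces three buckets (blue with pooled weight 1, red with $\frac{1}{3}$, green with $\frac{2}{3}$), and the resulting sensitivity equals the sum of the analysts' shares, matching the lemma proved in \cref{sec:proofs}.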
\begin{algorithm}[ht] \SetAlgoLined \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{${\mathcal{W}},S, {\bm{x}},\epsilon,{\mathcal{M}}$, \Comment{defined in Algorithm~\ref{alg:ind}}\\ tolerance parameter $\tau$} \renewcommand{\nl}{\let\nl\oldnl} \textbf{Selection Step} \\ ${\mathcal{A}} \leftarrow \{{\mathcal{M}}(W_i) \;|\; W_i \in {\mathcal{W}}\}$\\ \renewcommand{\nl}{\let\nl\oldnl} \textbf{Collection step}\\ buckets ${\mathcal{B}} \leftarrow \{\}$\\ \For{${\bm{A}}_i \in {\mathcal{A}}$}{ \For{${\bm{v}} \in \Rows(s_i{\bm{A}}_i/\|{\bm{A}}_i\|_1)$ } { \eIf{\kwexists $B \in {\mathcal{B}}$ \kwst $\similarity({\bm{v}},\sum_{{\bm{u}}\in B} {\bm{u}}) \geq 1-\tau$ }{\renewcommand{\nl}{\let\nl\oldnl} \Comment{$\similarity$ is the cosine similarity}\\ $B \leftarrow B \cup \{{\bm{v}}\} $\\ } { \kwnew $B \leftarrow \{{\bm{v}}\}$\\ ${\mathcal{B}} \leftarrow {\mathcal{B}} \cup \{B\}$\\ }} } ${\bm{A}} \leftarrow \Mat\left(\{\sum_{{\bm{u}} \in B}{\bm{u}} \;|\; B \in {\mathcal{B}} \}\right)$ \\\renewcommand{\nl}{\let\nl\oldnl}\Comment{$\Mat$ converts a set of vectors into a matrix, each row of ${\bm{A}}$ is the sum of vectors in a bucket} \renewcommand{\nl}{\let\nl\oldnl} \textbf{Measure step} \\ ${\bm{y}} \leftarrow {\bm{A}}{\bm{x}} + \Lap(\frac{1}{\epsilon}\norm{{\bm{A}}}_1)$\\ \renewcommand{\nl}{\let\nl\oldnl} \textbf{Reconstruct step} \\ $\Bar{{\bm{x}}} \leftarrow {\bm{A}}^+ {\bm{y}}$\\ ans $\leftarrow \{W_i(\Bar{{\bm{x}}})\;|\; W_i \in {\mathcal{W}}\}$\\ \Return ans \caption{$\tau$ - Waterfilling Mechanism } \label{alg:waterfilling} \end{algorithm} Here we prove a stronger property than either the Sharing Incentive or Non-Interference. We show that adding an additional analyst to an arbitrary collective cannot increase the error experienced by any analyst, a property we call Analyst Monotonicity. \begin{theorem} \label{thrm:waterfilling} Let ${\mathcal{W}}$ be the set of all workloads of the analysts in an arbitrary collective.
For all analysts $i \neq j$, for all workloads $W_i \in {\mathcal{W}}, W_j \not \in {\mathcal{W}}$, the 0-Waterfilling mechanism satisfies both of the following. \\ \begin{equation} \Err_i\left({\mathcal{M}}, {\mathcal{W}} \cup W_j, \left[s_j +\sum_{l: W_l \in {\mathcal{W}}}s_l\right] \epsilon\right) \leq \Err_i\left({\mathcal{M}}, {\mathcal{W}} , \left[\sum_{l: W_l \in {\mathcal{W}}}s_l\right]\epsilon\right)\label{eqn:prf1} \end{equation} \begin{equation} \label{eqn:prf2} \Err_j\left({\mathcal{M}}, {\mathcal{W}} \cup W_j, \left[s_j +\sum_{l: W_l \in {\mathcal{W}}}s_l \right] \epsilon\right) \leq \Err_j\left({\mathcal{M}},W_j, s_j\epsilon\right) \end{equation} \end{theorem} We first show that, regardless of the number of analysts in the collective, the scale of the noise added to the queries remains the same. We then show that the error introduced by reconstructing the original query answers (the Frobenius norm term of \cref{eqn:error}) can only decrease as more analysts are added to the collective, so the error of each analyst either decreases or remains the same. \begin{lemma} Consider adding an analyst with strategy matrix ${\bm{A}}_i$ and weight $s_i$ to the collective. If the L1 norm of every column of ${\bm{A}}_i$ is 1, the sensitivity of the resultant strategy queries increases by $s_i$; formally, \begin{equation*} \|{\bm{A}}'\|_1 = \|{\bm{A}}\|_1+s_i, \end{equation*} where ${\bm{A}}$ and ${\bm{A}}'$ are the resultant strategy matrices before and after adding this analyst, respectively. \label{Lemma:water_sensitivity} \end{lemma} \begin{proof} For any matrix ${\bm{M}}$, we define $\cnorm({\bm{M}})$ as the vector whose $i$th entry is the L1 norm of the $i$th column of ${\bm{M}}$; formally, \begin{equation*} \cnorm({\bm{M}}) = \sum_{{\bm{v}}\in{\bm{M}}} |{\bm{v}}|, \end{equation*} where the ${\bm{v}}$'s are the row vectors of ${\bm{M}}$ and $|{\bm{v}}|$ is the entry-wise absolute value of ${\bm{v}}$.
In Alg.~\ref{alg:waterfilling}, each row of ${\bm{A}}$ corresponds to a bucket $B \in {\mathcal{B}}$. Thus, particularly for ${\bm{A}}$, \begin{equation} \cnorm({\bm{A}}) = \sum_{{\bm{v}}\in{\bm{A}}} |{\bm{v}}| = \sum_{B\in {\mathcal{B}}}\left|\sum_{{\bm{u}}\in B}{\bm{u}}\right|. \label{eqn:water_strategy_bucket} \end{equation} Consider adding a query ${\bm{v}}'$ to buckets ${\mathcal{B}}$ and let the new buckets be ${\mathcal{B}}'$. Let ${\bm{e}}' = {\bm{v}}'/\|{\bm{v}}'\|$ and, for each bucket $B \in {\mathcal{B}}$, let ${\bm{e}}_B = \sum_{{\bm{u}} \in B}{\bm{u}}/\|\sum_{{\bm{u}} \in B}{\bm{u}}\|$ be the unit vector of the bucket sum. If ${\bm{e}}'\cdot {\bm{e}}_B < 1$ for all buckets $B \in {\mathcal{B}}$, ${\bm{v}}'$ will be put in a new bucket $B'$ and thus $|\sum_{{\bm{u}}\in B'}{\bm{u}}| = |{\bm{v}}'|$. Also, ${\mathcal{B}}' = {\mathcal{B}} \cup \{B'\}$. Otherwise, there exists a bucket $B^* \in {\mathcal{B}}$ with ${\bm{e}}' \cdot {\bm{e}}_{B^*} = 1$. In this case, ${\bm{v}}'$ will be put in the bucket $B^*$ and ${\mathcal{B}}' = {\mathcal{B}}$ with updated bucket $B^{*'}$. Since ${\bm{e}}'$ and ${\bm{e}}_{B^*}$ are both unit vectors, ${\bm{e}}' \cdot {\bm{e}}_{B^*} = 1$ means ${\bm{v}}'/\|{\bm{v}}'\|={\bm{e}}' = {\bm{e}}_{B^*} = \sum_{{\bm{u}} \in B^*}{\bm{u}}/\|\sum_{{\bm{u}} \in B^*}{\bm{u}}\|$. Thus, \begin{equation*} \left|\sum_{{\bm{u}}\in B^{*'}}{\bm{u}}\right| = \left|\sum_{{\bm{u}}\in B^*}{\bm{u}} + {\bm{v}}'\right| = \left|\sum_{{\bm{u}}\in B^*}{\bm{u}}\right| + |{\bm{v}}'|. \end{equation*} In both cases, we have \begin{equation} \sum_{B\in {\mathcal{B}}'}\left|\sum_{{\bm{u}}\in B}{\bm{u}}\right| = \sum_{B\in {\mathcal{B}}}\left|\sum_{{\bm{u}}\in B}{\bm{u}}\right| + |{\bm{v}}'|. \label{eqn:water_add_query} \end{equation} In this process, we add the rows of $s_i{\bm{A}}_i$ to the buckets ${\mathcal{B}}$ one at a time, resulting in ${\mathcal{B}}'$.
From \cref{eqn:water_strategy_bucket} and \cref{eqn:water_add_query} we get \begin{equation*} \begin{split} \cnorm({\bm{A}}') &= \sum_{B\in {\mathcal{B}}'}\left|\sum_{{\bm{u}}\in B}{\bm{u}}\right| = \sum_{B\in {\mathcal{B}}}\left|\sum_{{\bm{u}}\in B}{\bm{u}}\right| + \sum_{{\bm{v}}\in s_i{\bm{A}}_i}|{\bm{v}}|\\ &= \cnorm({\bm{A}})+\cnorm(s_i{\bm{A}}_i). \end{split} \end{equation*} Given that the L1 norm of every column of ${\bm{A}}_i \in {\mathcal{A}}$ is 1, we have $\cnorm(s_i{\bm{A}}_i) = s_i{\bm{1}}$, where ${\bm{1}}$ is an all-ones vector. Since the L1 norm of a matrix is the maximum of all L1 column norms, we have \begin{equation*} \begin{split} \|{\bm{A}}'\|_1 &= \max(\cnorm({\bm{A}}')) = \max(\cnorm({\bm{A}})+s_i{\bm{1}})\\ &= \max(\cnorm({\bm{A}}))+s_i = \|{\bm{A}}\|_1+s_i. \end{split} \end{equation*} \end{proof} Since we can consider the strategy matrix with no analysts as the zero matrix, and adding an additional analyst adds their weight to the sensitivity, the L1 norm of the strategy matrix for $k$ analysts is \begin{equation*} \|{\bm{A}}\|_1 = \sum_{i=1}^k s_i. \end{equation*} Since the $i$th analyst is entitled to $s_i\epsilon$ of the budget and the sensitivity of the strategy query set is equal to the sum of the analysts' weights, the scale of the noise term in \cref{eqn:error} is the same regardless of the number of analysts. Let $z\leq k$ be any arbitrary number of analysts. The scale of the noise term in \cref{eqn:error} is as follows. \begin{equation} \label{eqn:noise} \frac{2\norm{\bm{A}}_1^2}{\epsilon^2} = \frac{2\left(\sum_{i=1}^z s_i\right)^2}{\left(\sum_{i=1}^z s_i \epsilon\right)^2} = \frac{2}{\epsilon^2} \end{equation} Since the amount of noise added to each query in the final strategy is the same, the amount of error experienced by each analyst depends only on the Frobenius norm term of \cref{eqn:error}.
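To make the collection step and Lemma~\ref{Lemma:water_sensitivity} concrete, the following NumPy sketch groups weight-scaled strategy rows into buckets and checks that the L1 sensitivity of the resulting strategy equals the sum of the analysts' weights. The helper names (\texttt{waterfill\_buckets}, \texttt{l1\_norm}) are our own, and the example strategies are illustrative choices, not those a selection step such as HDMM would produce.

```python
import numpy as np

def waterfill_buckets(weighted_strategies, tol=0.0):
    """Collect rows of weight-scaled strategy matrices into buckets.

    A row joins an existing bucket when its cosine similarity with the
    bucket's running sum is at least 1 - tol; otherwise it opens a new
    bucket (the collection step of the waterfilling mechanism, sketched).
    """
    buckets = []  # each bucket holds the running sum of its member rows
    for A in weighted_strategies:
        for v in A:
            for i, b in enumerate(buckets):
                cos = float(v @ b) / (np.linalg.norm(v) * np.linalg.norm(b))
                if cos >= 1.0 - tol:
                    buckets[i] = b + v
                    break
            else:
                buckets.append(v.astype(float))
    return np.vstack(buckets)

def l1_norm(M):
    """Matrix L1 norm: the maximum absolute column sum."""
    return np.abs(M).sum(axis=0).max()

# Two analysts with equal weight 0.5; both strategies have column L1 norm 1.
A1 = np.eye(3)                     # identity strategy
A2 = np.ones((1, 3))               # total query strategy
A = waterfill_buckets([0.5 * A1, 0.5 * A2])

# No row directions coincide, so 4 buckets form, and the sensitivity is
# the sum of the weights (0.5 + 0.5 = 1), as Lemma water_sensitivity states.
print(A.shape, l1_norm(A))
```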
We first note that adding a new analyst to the collective results in a change to the overall strategy matrix that can be expressed by multiplying it by some diagonal matrix with all entries greater than or equal to 1 (adding weight to existing buckets), by appending additional rows (creating new buckets), or both. We show below that either of these operations results in a Frobenius norm term that is no greater than the term with the original strategy matrix. \begin{lemma} \label{lemma:diagonal} For any workload matrix ${\bm{W}}$ and any strategy ${\bm{A}}$, \begin{equation*} \left\| {\bm{W}}({\bm{D}}{\bm{A}})^+\right\|_F \leq \left\| {\bm{W}}{\bm{A}}^+\right\|_F \end{equation*} where ${\bm{D}}$ is a diagonal matrix with all diagonal entries greater than or equal to 1 and ${\bm{A}}$ is a full rank matrix. \end{lemma} \begin{proof} We first note that since ${\bm{D}}$ is a diagonal matrix with all diagonal entries greater than or equal to 1, ${\bm{D}}^{-1}$ is a diagonal matrix with all diagonal entries less than or equal to 1. Since multiplying by this matrix cannot increase the magnitude of any entry of a matrix, the following holds. \begin{equation*} \left\|{\bm{W}}{\bm{A}}^+{\bm{D}}^{-1} \right\|_F \leq \left\|{\bm{W}}{\bm{A}}^+ \right\|_F \end{equation*} We then note that ${\bm{W}}{\bm{A}}^+{\bm{D}}^{-1}$ is a solution to the linear system of equations ${\bm{B}}({\bm{D}}{\bm{A}}) = {\bm{W}}$. Since ${\bm{W}}({\bm{D}}{\bm{A}})^+$ is the minimum Frobenius norm solution to this system \cite{pseudo-inverse}, the following holds. \begin{equation*} \left\|{\bm{W}}({\bm{D}}{\bm{A}})^+ \right\|_F \leq \left\|{\bm{W}}{\bm{A}}^+{\bm{D}}^{-1} \right\|_F \leq \left\| {\bm{W}}{\bm{A}}^+\right\|_F \end{equation*} \end{proof} \begin{lemma} \label{lemma:rows} Let $\tilde{{\bm{A}}}$ be the original strategy matrix ${\bm{A}}$ with additional queries (rows) added to it.
We can write this as a block matrix $ \tilde{{\bm{A}}} = \begin{bmatrix} {\bm{A}} \\ {\bm{C}} \end{bmatrix} $, where ${\bm{C}}$ contains the additional queries. For any workload ${\bm{W}}$ and any strategy ${\bm{A}}$, \begin{equation*} \left\| {\bm{W}}\tilde{{\bm{A}}}^+ \right\|_F \leq \left\| {\bm{W}}{\bm{A}}^+ \right\|_F \end{equation*} \end{lemma} \begin{proof} Let $\hat{{\bm{A}}}$ be the original matrix ${\bm{A}}$ padded with additional rows of zeros in order to be the same size as $\tilde{{\bm{A}}}$, written in block matrix form as $ \hat{{\bm{A}}} = \begin{bmatrix} {\bm{A}} \\ {\bm{0}} \end{bmatrix}$. We note that by the formula for the block matrix pseudo-inverse, the pseudo-inverse of $\hat{{\bm{A}}}$ is $\hat{{\bm{A}}}^+ = \begin{bmatrix} {\bm{A}}^+ & {\bm{0}} \end{bmatrix}$. We then note that ${\bm{W}}\hat{{\bm{A}}}^+$ is a solution to the linear system of equations ${\bm{B}}\tilde{{\bm{A}}} = {\bm{W}}$, as follows. \begin{equation*} \label{eq1} \begin{split} {\bm{W}}\hat{{\bm{A}}}^+\tilde{{\bm{A}}} & = {\bm{W}} \begin{bmatrix} {\bm{A}}^+ & {\bm{0}} \end{bmatrix}\begin{bmatrix} {\bm{A}} \\ {\bm{C}} \end{bmatrix} = {\bm{W}}{\bm{A}}^+{\bm{A}} = {\bm{W}} \end{split} \end{equation*} Therefore, since ${\bm{W}}\hat{{\bm{A}}}^+$ is a solution to the linear system of equations and ${\bm{W}}\tilde{{\bm{A}}}^+$ is its minimum Frobenius norm solution \cite{pseudo-inverse}, we get the following. \begin{equation*} \left\| {\bm{W}}\tilde{{\bm{A}}}^+ \right\|_F \leq \left\| {\bm{W}}\hat{{\bm{A}}}^+ \right\|_F = \left\| {\bm{W}}{\bm{A}}^+ \right\|_F \end{equation*} \end{proof} \begin{proof}[Proof of \cref{thrm:waterfilling}] Let ${\bm{A}}$ be the strategy matrix produced by the Waterfilling Mechanism without analyst $j$. Let $\tilde{{\bm{A}}}$ be ${\bm{A}}$ with additional rows appended to it and let ${\bm{D}}$ be a diagonal matrix with all entries 1 or greater.
\begin{eqnarray*} \lefteqn{\Err_i \left({\mathcal{M}}, {\mathcal{W}} \cup W_j, \left[s_j+ \sum_{l: W_l \in {\mathcal{W}}}s_l \right] \epsilon \right)} \\ & = & \frac{2}{\epsilon^2}\norm{{\bm{W}}_i({\bm{D}}\tilde{{\bm{A}}})^+}_F^2 \ \ \ \text{(from \cref{eqn:noise})} \\ & \leq &\frac{2}{\epsilon^2}\norm{{\bm{W}}_i\tilde{{\bm{A}}}^+}_F^2 \ \ \ \text{(from \cref{lemma:diagonal})} \\ & \leq& \frac{2}{\epsilon^2}\norm{{\bm{W}}_i{\bm{A}}^+}_F^2 \ \ \ \text{(from \cref{lemma:rows})} \\ & = &\Err_i\left({\mathcal{M}}, {\mathcal{W}}, \left[\sum_{l: W_l \in {\mathcal{W}}}s_l \right] \epsilon\right) \end{eqnarray*} If we instead assume ${\bm{A}}$ is the strategy matrix produced by the Waterfilling Mechanism with only analyst $j$, then the same argument establishes \cref{eqn:prf2}. \end{proof} Since adding an additional analyst to the collective can only decrease the expected error experienced by any analyst, we have the following corollaries of Theorem~\ref{thrm:waterfilling}. \begin{corollary} \label{cor:sharing_incentive} The Waterfilling Mechanism satisfies the Sharing Incentive. \end{corollary} \begin{corollary}\label{cor:non_interference} The Waterfilling Mechanism satisfies Non-Interference. \end{corollary} Unlike Independent Mechanisms, Waterfilling Mechanisms satisfy all the desiderata while being efficient with respect to error. \begin{theorem} The Waterfilling Mechanism can achieve as much as $k$ times better error than the Independent Mechanism and always achieves no more error than the Independent Mechanism. \end{theorem} \begin{proof} Consider the pathological example of $k$ analysts, each of whom asks the same single linear counting query to be answered with the Laplace Mechanism. In this case the overall expected error of the Waterfilling mechanism is that of answering the single query once with the entire privacy budget using the Laplace mechanism. This results in an expected error of $\frac{2}{\epsilon^2}$.
If each analyst were to independently answer their queries using $\frac{\epsilon}{k}$ of the budget each and then post-process the $k$ results by taking the sample median, it would result in a mean squared error of $\frac{2k}{\epsilon^2}$. By \cref{cor:sharing_incentive} the Waterfilling Mechanism always achieves at most as much error as the Independent Mechanism, satisfying the second statement. \end{proof} \subsection{Experimental Setup} \label{sec:Experiments_setup} For the following experiments we use HDMM \cite{HDMM} as the selection step, but any selection step can be used in practice. In addition, we can consider the Identity Mechanism a variant of the matrix mechanism with a fixed identity strategy matrix ${\bm{I}}$, $\MM({\bm{I}})$. \begin{comment} \begin{table*}[ht] \caption{Common workloads used for domain size $n$} \begin{tabularx}{\textwidth}{|l|l|X|} \hline \textbf{Workload} & \textbf{Description} & \textbf{Matrix} \\ \hline Identity & Histogram on all $n$ categories & $n\times n$ identity matrix ${\bm{I}}_n$ \\ \hline Total & The sum of all $n$ categories & $1\times n$ all-ones row matrix ${\bm{T}}_n$ \\ \hline Singleton & One single category & $1\times n$ row matrix ${\bm{S}}_{n,i}$ with 1 in the $i$th entry and 0 in other entries \\ \hline Prefix Sum & The prefix sum of all $n$ categories & $n\times n$ lower triangular matrix of ones ${\bm{P}}_n$ \\ \hline H2 workload & Hierarchical matrix ${\mathcal{H}}^2$ & $(2n-1)\times n$ matrix: vertical stack of ${\bm{I}}_{n/t}\otimes {\bm{T}}_{t}$ for $t = 1,2,4, \ldots, n$ \\ \hline \end{tabularx}% \label{tab:common_workloads} \end{table*} \end{comment} For all experiments we used $\epsilon = 1$ for our total privacy budget. In addition, the Waterfilling Mechanism has a tolerance parameter $\tau$. We experimented with several values of $\tau$. Results shown in \cref{sec:Experiments_Tolerance_results} found that $\tau = 0.001$ achieves good overall accuracy.
As such we set it to be $0.001$ in all our experiments. For the figures, each mechanism is abbreviated as follows: Ind (Independent HDMM), Iden (Identity mechanism), Util (Utilitarian HDMM), WUtil (Weighted Utilitarian HDMM), and Water (HDMM Waterfilling Mechanism). For each experiment we run the optimization 10 times and pick the strategy with the minimum loss. \subsection{Empirical Measures} We design several empirical measures based on our desiderata to provide an overall understanding of the mechanisms. All measures are with respect to a single mechanism and a single set of workloads. \noindent\textbf{Total Error} is the sum of the expected errors of all analysts. This is a common measure found in the literature to show the efficiency of the algorithm. \par \noindent\textbf{Maximum Ratio Error} of a mechanism ${\mathcal{M}}$ for a given analyst is the expected error of ${\mathcal{M}}$ divided by the expected error of the independent version. For non-independent adaptive algorithms, it is a measure of the Sharing Incentive as it measures to what extent one analyst is better or worse off compared to asking their queries on their own. We present the maximum of the ratio errors among all analysts, \begin{equation*} \max_i \left( \frac{\Err_i({\mathcal{M}},{\mathcal{W}}, \epsilon)}{\Err_i({\mathcal{M}},W_i, s_i\epsilon)} \right). \end{equation*} If the value is larger than 1, the mechanism violates the Sharing Incentive as the error in the joint case is greater than the error experienced in the independent case. \par \noindent\textbf{Empirical Interference} is a quantifiable measure to show the extent to which a mechanism violates Non-Interference, or its distance from violating it. For each analyst $i$, we define the interference with respect to another analyst $j$ as the ratio of the expected error for analyst $j$ when all analysts are included to the case when excluding analyst $i$.
If this ratio is larger than 1, analyst $j$ can be worse off when analyst $i$ joins the workload set. We define the interference of analyst $i$ on analyst $j$ to be \begin{equation*} I_{i}(j) = \frac{\Err_j({\mathcal{M}},{\mathcal{W}}, \epsilon)}{\Err_j({\mathcal{M}},{\mathcal{W}}^c_i, (1-s_i)\epsilon)}. \end{equation*} This represents the relative change in error experienced by analyst $j$ when analyst $i$ joins the collective. We then define the interference of mechanism ${\mathcal{M}}$ on the set ${\mathcal{W}}$ as the maximum interference among all analysts, \begin{equation*} I_{{\mathcal{M}}}({\mathcal{W}}) = \max_{1 \leq i,j \leq k, i\neq j} I_{i}(j). \end{equation*} Intuitively, it represents the maximum ratio increase of the expected error of any analyst when another analyst joins the workload set. If $I_{{\mathcal{M}}}({\mathcal{W}}) \leq 1$, mechanism ${\mathcal{M}}$ satisfies Non-Interference on ${\mathcal{W}}$. Since ${\mathcal{M}}$ is usually a non-deterministic mechanism, rerunning the mechanism with ${\mathcal{W}}^c_i$ may give different strategy matrices to other analysts. Thus, we fix strategy matrices for Select First Mechanisms to ensure a more reasonable comparison. Since the strategies used by Collect First Mechanisms are dependent on each analyst's input, it is not possible to fix the strategy matrix. \subsection{Workloads and Datasets} \label{sec:Experiments_WorkloadsAndDatasets} Here we describe the methods used to generate workloads for each analyst as well as the datasets used. When considering only linear queries, all of our mechanisms are data independent and as such do not require a dataset to be evaluated. We only use a dataset when we extend our evaluation to non-linear queries and data-dependent queries. \par \noindent\textbf{Practical settings:}\label{sec:prac_setup}\input{experiment_figure} We generate practical settings using a series of random steps based on the census example workloads provided in \cite{HDMM}.
We tested on the race workloads with domain size $n=64$. \begin{enumerate} \item We first fix the domain size $n$. We then pick the number of analysts $k$ uniformly at random from $[2, k_{\max}]$. Each analyst is given equal weight. \item Each analyst then picks a workload uniformly at random from the set of 8 workloads: 3 race workloads, Identity, Total, Prefix Sum, the H2 workload, and a custom workload. \item If they get the custom workload, we choose their matrix size by picking an integer uniformly at random from $[1, 2n]$. \item For each query in the matrix we choose a class of query uniformly sampled from the set including range queries (0-1 vector with contiguous entries), singleton queries, sum queries (random 0-1 vector) and random queries (random vector). The query is thus a random query within its class. \item The custom workload is thus a vertical stack of the queries. \item We repeat this procedure $t$ times to get $t$ randomly chosen sets of workloads. We call them $t$ instances. \end{enumerate} \noindent\textbf{Marginals:}\label{sec:Experiments_Marginal_setup} We also experiment on another common type of workload, marginals. For a dataset with $d$ attributes with domain size $n_i$ for the $i$th attribute, we can define an $m$-way marginal as follows. Let $S$ be a size-$m$ subset of $\{1,2,\ldots,d\}$; we can express the workload as the Kronecker product ${\bm{A}}_1 \otimes {\bm{A}}_2 \otimes \ldots \otimes {\bm{A}}_d$, where ${\bm{A}}_i = {\bm{I}}_{n_i}$ if $i \in S$ and ${\bm{A}}_i = {\bm{T}}_{n_i}$ otherwise. Here ${\bm{I}}_{n_i}$ is the identity workload matrix and ${\bm{T}}_{n_i}$ is the total workload matrix. Specifically, a 0-way marginal is the Total workload and a $d$-way marginal is the Identity workload. Also, since there are $\binom{d}{m}$ size-$m$ subsets of $\{1,2,\ldots,d\}$, there are $\binom{d}{m}$ different $m$-way marginals.
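As an illustration of the marginal construction above, the following sketch builds $m$-way marginal workloads as Kronecker products of identity and total factors. The helper \texttt{marginal\_workload} is our own name, used for illustration only.

```python
import numpy as np
from functools import reduce
from itertools import combinations

def marginal_workload(d, S, n):
    """m-way marginal for attribute subset S: Kronecker product of
    identity factors (attributes in S) and total (all-ones row) factors."""
    factors = [np.eye(n[i]) if i in S else np.ones((1, n[i])) for i in range(d)]
    return reduce(np.kron, factors)

d, n = 3, [2, 2, 2]
# All 1-way marginals: choose(3, 1) = 3 workloads, each of shape (2, 8).
one_way = [marginal_workload(d, S, n) for S in combinations(range(d), 1)]
print([W.shape for W in one_way])

# A 0-way marginal is the Total workload; a d-way marginal is Identity.
total = marginal_workload(d, (), n)
identity = marginal_workload(d, (0, 1, 2), n)
print(total.shape, np.array_equal(identity, np.eye(8)))
```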
For simplicity, in our experiments we use $d$ attributes, all with domain size 2. We repeat the process for generating analyst workloads from the practical settings; in this case each analyst chooses a workload uniformly at random from the set of $\binom{d}{m}$ $m$-way marginals. \noindent\textbf{Data-dependent Non-linear Queries: }\label{sec:experiments_nonlinear_setup}In previous experiments, all workloads are linear and the expected error can thus be calculated without data. Our mechanisms can also be used for non-linear queries. We experiment on some common non-linear queries including \emph{mean}, \emph{median}, and \emph{percentiles} based on a histogram. Error in this case is data-dependent and needs to be empirically calculated using real datasets. We use the Census Population Projections \cite{census_population}. The dataset is Population Changes by Race. We choose year 2020 and Projected Migration for Two or more races. The domain size of the data is $n=86$, representing ages from 0 to 85. As in the previous two experiments, we use the procedure from the practical settings to generate each analyst's workloads, except the set of workloads to select from only contains 4 queries: \emph{mean}, \emph{median}, \emph{25-percentile}, and \emph{75-percentile}. \emph{Mean} is reconstructed from the workload containing the Total query ${\bm{T}}_n$ and the weighted sum query, a vector representing the attribute values (0 to 85 in our case). \emph{Median} and \emph{percentiles} are reconstructed from the Prefix Sum workload ${\bm{P}}_n$. \noindent\textbf{Tolerance for Water-filling: } \label{Sec:Experiments_Tolerance_setup} To examine the effect of the tolerance in practice, we experimented on different values of the tolerance $\tau$ for the HDMM Water-Filling mechanism. \cref{fig:tol} shows the case when $\tau \in [0, 0.1]$.
We experimented with greater values of $\tau$; those values resulted in greater error and have been omitted from the figures. The workloads used are 1-way marginals as defined in \cref{sec:Experiments_Marginal_setup}. \subsection{Results} \noindent\textbf{Practical settings:} \label{sec:prac_results} \cref{fig:prac_total} gives an overall view of the efficiency of the different mechanisms. As expected, Utilitarian HDMM, a mechanism optimized for overall error, performs the best. Meanwhile Independent HDMM, a mechanism which does not utilize the group structure of the problem at all, performs the worst. We note that the Weighted Utilitarian Mechanism, in exchange for satisfying the Sharing Incentive, performs slightly worse than Utilitarian but better than the Waterfilling Mechanism, which satisfies all three desiderata. The Waterfilling Mechanism performs as well as the Identity Mechanism while still satisfying adaptivity. This shows, as stated in \cref{sec: Tradeoffs}, that while there is a small cost to satisfying the Sharing Incentive and Non-Interference, satisfying adaptivity comes at no accuracy cost. We present the results for $k_{\max}=20$ as a representative in \cref{fig:prac}. The figure is a box plot of $t=100$ instances generated randomly using the procedure in \cref{sec:prac_setup}. The green line represents the median and the green triangle represents the mean. The box represents the interquartile range. \cref{fig:prac_ratio} shows how the other mechanisms compare with Independent HDMM in terms of maximum ratio error. Utilitarian HDMM violates the Sharing Incentive in a small number of instances as there are some outliers with maximum ratio error larger than 1. Weighted Utilitarian and the Waterfilling Mechanism satisfy the Sharing Incentive. Although Identity also has some outliers larger than 1, since Independent HDMM is not the independent form of this mechanism, it does not violate the Sharing Incentive.
\cref{fig:prac_inter} gives an empirical indication on whether a mechanism satisfies Non-Interference. It can be seen that both Utilitarian and Weighted Utilitarian HDMM violate Non-Interference in some cases. Weighted Utilitarian has fewer instances which violate Non-Interference than Utilitarian. The Weighted Utilitarian mechanism also violates Non-Interference to a smaller extent than the Utilitarian Mechanism. The other three mechanisms do not violate Non-Interference, as we expect. \par \noindent\textbf{Marginal Workloads: } \label{sec:Experiments_Marginal_results} In \cref{fig:prac} we show the results for $1$-way marginals with $d=8$, $k_{\max}=20$, and $n =256$. This figure also contains 100 instances. In particular, there are $d$ 1-way marginals, each corresponding to an attribute. \cref{fig:marginal_total} shows that the Identity mechanism performs worse than the Waterfilling Mechanism and both Utilitarian mechanisms. The addition of the 1-way marginals drastically increases the error of Identity compared to that of the other mechanisms. This is an example where the Identity Mechanism performs poorly with regard to total error for a common type of workload. This is also observed for 1-way marginals with $d=6,7,9,10$. \cref{fig:marginal_ratio} and \cref{fig:marginal_inter} are qualitatively similar to those in the practical settings. The Waterfilling Mechanism continues to satisfy all the desiderata while maintaining lower error than the Independent and Identity Mechanisms. Both Utilitarian mechanisms achieve lower overall error but at the cost of violating Non-Interference.
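To make the empirical measures used in these comparisons concrete, the following sketch computes the matrix mechanism expected error of \cref{eqn:error} and an interference ratio for a toy two-analyst instance. The helper names are ours, and the joint strategy \texttt{A\_joint} is a hypothetical choice for illustration, not a strategy HDMM would actually select.

```python
import numpy as np

def mm_expected_error(W, A, eps):
    """Matrix mechanism expected error: (2 ||A||_1^2 / eps^2) ||W A^+||_F^2."""
    sens = np.abs(A).sum(axis=0).max()                    # L1 sensitivity of A
    rec = np.linalg.norm(W @ np.linalg.pinv(A), 'fro') ** 2
    return 2.0 * sens ** 2 / eps ** 2 * rec

def interference(err_with, err_without):
    """I_i(j): relative change in analyst j's error when analyst i joins."""
    return err_with / err_without

# Analyst j asks the Total query over a domain of size 4.
Wj = np.ones((1, 4))
# Hypothetical joint strategy when another analyst (asking Identity) joins.
A_joint = np.vstack([np.eye(4), np.ones((1, 4))])

err_all = mm_expected_error(Wj, A_joint, eps=1.0)   # both analysts, full budget
err_wo = mm_expected_error(Wj, Wj, eps=0.5)         # analyst j alone, half budget

# A ratio at most 1 means analyst j is not hurt by the other analyst joining.
print(err_all, err_wo, interference(err_all, err_wo))
```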
\noindent\textbf{Data-dependent Non-linear Queries: } \label{sec:experiments_nonlinear_results} \begin{figure*}[ht] \resizebox{0.9\textwidth}{!}{% \centering \begin{subfigure}[b]{0.308\linewidth} \includegraphics[width=1\textwidth]{fig/data_total_errors.pdf} \caption{Total Errors (log scale)} \label{fig:data_total} \end{subfigure} \begin{subfigure}[b]{0.254\linewidth} \includegraphics[width=1\textwidth]{fig/data_total_errors_2.pdf} \caption{Total Errors (zoomed in)} \label{fig:data_total_zoom} \end{subfigure} \begin{subfigure}[b]{0.217\linewidth} \includegraphics[width=1\textwidth]{fig/data_max_ratio_errors.pdf} \caption{Max Ratio Errors} \label{fig:data_ratio} \end{subfigure} \begin{subfigure}[b]{0.223\linewidth} \includegraphics[width=1\textwidth]{fig/data_inters.pdf} \caption{Empirical Interference} \label{fig:data_inter} \end{subfigure}% } \caption{Empirical measures for non-linear queries. Errors shown are empirical expected errors calculated using real data. Values of maximum ratio error and empirical interference above 1 signify a violation of the Sharing Incentive and Non-Interference respectively.} \label{fig:data} \end{figure*} \cref{fig:data_total} shows that the Independent Mechanism performs much worse than all other mechanisms in terms of total error. \cref{fig:data_total_zoom} is the zoomed in version of \cref{fig:data_total}, removing Independent. Since the answer of a non-linear query is reconstructed using the result of a different linear workload, Utilitarian is not guaranteed to have the lowest total error. We can see that Weighted Utilitarian outperforms Utilitarian here. The other two mechanisms have higher total errors, and the Waterfilling Mechanism has a better median total error than Identity. \cref{fig:data_ratio} and \cref{fig:data_inter} show the max ratio errors and empirical interference. Since the Independent and Identity mechanisms satisfy the Sharing Incentive and Non-Interference by definition, we omit them here.
We can see that all 3 other mechanisms satisfy the Sharing Incentive as they all have max ratio errors smaller than 1. Both Utilitarian mechanisms violate Non-Interference as shown in \cref{fig:data_inter}. The Waterfilling Mechanism satisfies Non-Interference. The outliers are due to numerical errors, since we are using empirical expected errors instead of analytical ones. These results show that our mechanisms also perform well for non-linear queries and have similar properties to the instances with linear queries. The results are qualitatively similar for $k_{\max}=10$. \noindent\textbf{Tolerance for Water-filling Mechanism: } \label{sec:Experiments_Tolerance_results} \cref{fig:tol_total} shows that the total error is large at both ends, $\tau =0.1$ and $\tau =0$. The total error is the smallest for $\tau=0.01$ and is also small for $\tau = 10^{-3}$ and $\tau=10^{-4}$. This shows that there is no simple relation between the value of the tolerance and the total error, and that we should not set $\tau=0$ exactly in practice. \cref{fig:tol_ratio} shows the violation of the Sharing Incentive when $\tau =0.1$ and $\tau=0.01$. From this result, we see that $\tau=0.01$ is too large and $\tau=10^{-3}$ (our default setting) is reasonable. We do not observe a violation of Non-Interference for any value of $\tau$. \begin{figure}[h] \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics[width=1\linewidth]{fig/tol_marginal_total_errors.pdf} \caption{Total Errors} \label{fig:tol_total} \end{subfigure} \begin{subfigure}[b]{0.44\linewidth} \centering \includegraphics[width=1\linewidth]{fig/tol_marginal_max_ratio_errors.pdf} \caption{Max Ratio Errors} \label{fig:tol_ratio} \end{subfigure} \caption{Total and Maximum Ratio Errors for 1-way marginals using HDMM Water-filling mechanism with different values of $\tau$ (x-axis).
Values of maximum ratio error above 1 signify a violation of Sharing Incentive.} \label{fig:tol} \end{figure} \subsection{Multi-analyst DP data release problem} We study the common real-world situation where multiple stakeholders or analysts are interested in a particular data release and the data curator must decide how the stakeholders should share the limited privacy budget. Consider the role of Facebook in its partnership with Social Science One \cite{FacebookSocialScienceOne}. Facebook wanted to aid research on the effect of social media on democracy and elections by sharing some social network data. In order to participate and receive the privacy protected data each analyst had to submit their specific tasks and queries ahead of time. With the given set of queries from each analyst and a fixed privacy budget, Facebook created a single data release to be used by all analysts. Using existing DP techniques, Facebook had three options: (a) split the privacy budget and answer each analyst's queries individually, (b) join all analysts' queries together and answer them all at once using a workload answering mechanism \cite{Xu2013,Matrix10,HDMM,AHP,Chen13:recursive,Ding2011,Hay2010,Li2014,Narayan,Hb,Acs2012,torkzadehmahani2020dpcgan}, or (c) generate a single set of synthetic data \cite{PrivBays} for all analysts to use.\par Option (a) is inefficient as the same query can be answered multiple times, each time using some of the privacy budget. Option (b) may be efficient with respect to overall error but does not differentiate between the queries of different analysts. Some analysts may receive drastically more error than others, perhaps much more than they would have under (a). Option (b) therefore lacks much in the way of guarantees to an individual analyst. 
Option (c) is agnostic to any analyst's particular queries and may incur inefficiencies due to its inability to adapt to the specific queries being asked.\par Though all of these techniques have their uses, they all have some undesirable properties in the multi-analyst setting. This is because almost all of the work in differential privacy up until now has focused (often implicitly) on the single analyst case. We are interested in designing effective shared systems for multi-analyst differentially private data release that simultaneously provide guarantees to individual analysts and ensure good overall performance. We call this the multi-analyst differentially private data release problem. \subsection{Contributions} Our work introduces the multi-analyst differentially private data release problem. In this context we ask: \textit{``How should one design a privacy mechanism when multiple analysts may be in competition over the limited privacy budget?''} Our main contributions in this work are as follows. \par \begin{itemize} \item We study (for the first time) differentially private query answering across multiple analysts. We consider a realistic setting where multiple analysts pose query workloads and the data owner makes a single private release to answer all analyst queries. \item We define three minimum desiderata that we will argue any differentially private mechanism should satisfy in a multi-agent setting -- the Sharing Incentive, Non-Interference, and Adaptivity. \item We show empirically that existing mechanisms for answering large sets of queries either violate at least one of the desiderata described or are inefficient. \item We introduce mechanisms which provably satisfy all of the desiderata while maintaining efficiency.
\end{itemize} \section{Introduction}\label{sec:intro} \input{introduction.tex} \section{Background}\label{sec:background} \input{background} \section{Problem Formulation} \label{sec:ProbDef} \subsection{Setting} \input{prob_def} \subsection{Desiderata} \label{sec:desiderata} \input{desiderata.tex} \subsection{Problem Statement} \input{problem-statement.tex} \section{Design Paradigms} \label{sec:Design} \input{design_paradigm.tex} \section{Adapting Existing Mechanisms} \label{sec:Algorithms} \input{algorithms.tex} \section{Experiments} \label{sec:Experiments} \input{experiments.tex} \section{Related Work} \input{relatedwork.tex} \section{Future Work} \input{futurework.tex} \section{Conclusion} \input{conclusions.tex} \label{sec:Conclusion} \begin{acks} \input{acknowledgement.tex} \end{acks} \clearpage \bibliographystyle{ACM-Reference-Format} \subsection{Pathological Settings} The next set of experiments uses pathological settings, where we purposely constructed sets of workloads to detect whether our mechanisms violate the Sharing Incentive and Non-Interference. We design this experiment with the intuition that if the uncommon workload only provides a small proportion of the privacy budget but requires a large change in the strategy, the analysts with the common workload will suffer. For example, strategies that work well for the total workload perform badly when used to answer the identity workload. Therefore, when $k-1$ analysts ask the total workload and the $k$th analyst asks the identity workload, the strategy chosen by the mechanism must change drastically to adjust for the $k$th analyst, reducing the overall utility of the $k-1$ analysts. In the experiment, every set of workloads consists of 1 uncommon workload and $k-1$ common workloads. We experimented on 3 types of workloads: Singleton, Identity, and Total. We fixed the domain size $n=16$ and tried different values of $k \in [2,10]$.
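The intuition above, that a strategy suited to one workload reconstructs another workload poorly, can be sketched with the Frobenius norm reconstruction term of \cref{eqn:error}. The helper \texttt{recon\_cost} is our own name, used for illustration only.

```python
import numpy as np

def recon_cost(W, A):
    """Frobenius norm reconstruction term ||W A^+||_F^2 of the error formula."""
    return np.linalg.norm(W @ np.linalg.pinv(A), 'fro') ** 2

n = 16
T = np.ones((1, n))   # Total workload
I = np.eye(n)         # Identity workload

# The Total query is cheap to reconstruct under its own strategy, but costs
# n times more when reconstructed from the Identity strategy.
print(recon_cost(T, T), recon_cost(T, I))
```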
We present the results for 1 identity workload with $k-1$ total workloads (identity-total) in \cref{fig:adv}, since it can best show the violation of the Sharing Incentive and Non-Interference. Since the Independent and Identity mechanisms both trivially satisfy these 2 properties, we only present results for Utilitarian HDMM, Weighted Utilitarian HDMM and the HDMM Waterfilling mechanism. \begin{figure}[th] \begin{subfigure}[b]{0.495\linewidth} \centering \includegraphics[width=1\linewidth]{fig/adv_max_ratio_errors_k2.pdf} \caption{Max Ratio Errors ($k=2$)} \label{fig:adv_ratio} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{fig/adv_inters_k10.pdf} \caption{Empirical Interference ($k=10$)} \label{fig:adv_inter} \end{subfigure} \caption{Maximum Ratio Errors and Empirical Interference incurred by mechanisms in pathological settings. Values above 1 signify a violation of the Sharing Incentive and Non-Interference respectively. $k$ is the number of analysts.} \label{fig:adv} \end{figure} \cref{fig:adv_ratio} shows that only Utilitarian violates the Sharing Incentive and that the violation is significant. \cref{fig:adv_inter} shows that both Utilitarian and Weighted Utilitarian violate Non-Interference and that the violation of Utilitarian is much more significant. The Waterfilling mechanism satisfies both, as expected. \subsection{Practical Settings} \label{sec:prac} \input{experiment_figure} We generate practical settings using a series of random steps based on the census example workloads provided in \cite{HDMM}. We tested on the race workloads with \new{domain} size $n=64$. \begin{enumerate} \item We first fix the domain size $n$. We then pick the number of analysts $k$ uniformly at random from $[2, k_{\max}]$. \new{Each analyst is given equal weight}.
\item Each analyst then picks a workload uniformly at random from a set of 8 workloads: 3 race workloads, Identity, Total, Prefix Sum, the H2 workload, and a custom workload.
\item If an analyst draws the custom workload, we choose its matrix size by picking an integer uniformly at random from $[1, 2n]$.
\item For each query in the matrix, we choose a query class uniformly at random from range queries (0-1 vectors with contiguous ones), singleton queries, sum queries (random 0-1 vectors), and random queries (random vectors). The query is then a random query within its class.
\item The custom workload is the vertical stack of these queries.
\item We repeat this procedure $t$ times to obtain $t$ randomly chosen sets of workloads, which we call $t$ instances.
\end{enumerate}
We present the results for $k_{\max}=20$ as representative in \cref{fig:prac}. The figure is a box plot of $t=100$ instances generated randomly by the above procedure. The green line is the median, the green triangle is the mean, and the box spans the 25th to the 75th percentile. \cref{fig:prac_total} gives an overall view of the efficiency of the different mechanisms. As expected, Utilitarian HDMM, a mechanism optimized for overall error, performs best, while Independent HDMM, a mechanism which does not exploit the group structure of the problem at all, performs worst. The other three mechanisms have similar errors and are not much worse than Utilitarian HDMM. \cref{fig:prac_ratio} shows how the other mechanisms compare with Independent HDMM in terms of the maximum ratio error among all analysts. Utilitarian HDMM violates the Sharing Incentive in a small number of instances, as there are outliers with maximum ratio error larger than 1. Weighted Utilitarian and the Waterfilling mechanism satisfy the Sharing Incentive.
Although Identity also has some outliers larger than 1, it does not violate the Sharing Incentive, since Independent HDMM is not the independent form of this mechanism. \cref{fig:prac_inter} gives an empirical indication of whether a mechanism satisfies Non-Interference in practical settings. Both Utilitarian and Weighted Utilitarian HDMM violate Non-Interference in some cases; Weighted Utilitarian has fewer violating instances than Utilitarian and violates Non-Interference to a smaller extent. The other three mechanisms do not violate Non-Interference, as expected. From these results, we see that our HDMM Waterfilling mechanism achieves our main goal: a mechanism which satisfies all three desiderata while maintaining high utility. It is also worth noting that Weighted Utilitarian HDMM satisfies the Sharing Incentive empirically and can provide higher utility with only small violations of Non-Interference.
\subsection{Experimental Setup}
We designed experiments to test both whether the proposed mechanisms satisfy the desiderata and how they perform in practice. In the following experiments we use HDMM \cite{HDMM} as the selection step, but any selection step can be used in practice. As HDMM is a variant of the matrix mechanism (MM) \cite{Matrix10}, it is data-independent for linear queries, and the expected error of the matrix mechanism can be analytically calculated for any given strategy matrix. In addition, we can consider the Identity mechanism a variant of the matrix mechanism with a fixed identity strategy matrix ${\bm{I}}$, $\MM({\bm{I}})$. We list some commonly used workloads and their corresponding workload matrices ${\bm{W}}$ in \cref{tab:common_workloads}.
\begin{table*}[ht]
\caption{Common workloads used for \new{domain} size $n$}
\begin{tabularx}{\textwidth}{|l|l|X|}
\hline
\textbf{Workload} & \textbf{Description} & \textbf{Matrix} \\ \hline
Identity & Histogram on all $n$ categories & $n\times n$ identity matrix ${\bm{I}}_n$ \\ \hline
Total & The sum of all $n$ categories & $1\times n$ all-ones row matrix ${\bm{T}}_n$ \\ \hline
Singleton & One single category & $1\times n$ row matrix ${\bm{S}}_{n,i}$ with 1 in the $i$th entry and 0 elsewhere \\ \hline
Prefix Sum & The prefix sum of all $n$ categories & $n\times n$ lower triangular matrix of ones ${\bm{P}}_n$ \\ \hline
H2 workload & Hierarchical matrix ${\mathcal{H}}^2$ & $(2n-1)\times n$ matrix: vertical stack of ${\bm{I}}_{n/t}\otimes {\bm{T}}_{t}$ for $t = 1,2,4, \ldots, n$ \\ \hline
\end{tabularx}%
\label{tab:common_workloads}
\end{table*}
In addition, the Waterfilling mechanism has a tolerance parameter $\tau$. We experimented with several values of $\tau$ (results shown in \cref{sec:Experiments_Tolerance}) and found that $\tau = 0.001$ achieves good overall accuracy, so we set it to $0.001$ in all our experiments. In the figures, each mechanism is abbreviated as follows: Ind (Independent HDMM), Iden (Identity mechanism), Util (Utilitarian HDMM), WUtil (Weighted Utilitarian HDMM), and Water (HDMM Waterfilling mechanism). For each experiment we run the optimization 10 times and pick the strategy with the minimum loss. Since HDMM is data-independent, the experiments do not require a particular dataset.
\subsection{Marginal Workloads} \label{sec:Experiments_Marginal}
We also experiment on another common type of workload, marginals. For a dataset with $d$ attributes, where the $i$th attribute has domain size $n_i$, we can define an $m$-way marginal as follows.
Let $S$ be a size-$m$ subset of $\{1,2,\ldots,d\}$. We can then express the workload as the Kronecker product ${\bm{A}}_1 \otimes {\bm{A}}_2 \otimes \ldots \otimes {\bm{A}}_d$, where ${\bm{A}}_i = {\bm{I}}_{n_i}$ if $i \in S$ and ${\bm{A}}_i = {\bm{T}}_{n_i}$ otherwise. Here ${\bm{I}}_{n_i}$ is the Identity workload matrix and ${\bm{T}}_{n_i}$ is the Total workload matrix. In particular, a 0-way marginal is the Total workload and a $d$-way marginal is the Identity workload. Also, since there are $\binom{d}{m}$ size-$m$ subsets of $\{1,2,\ldots,d\}$, there are $\binom{d}{m}$ different $m$-way marginals. For simplicity, in our experiments we use $d$ attributes, all with domain size 2. We repeat the process for generating analyst workloads from \cref{sec:prac}, except that in this case each analyst chooses a workload uniformly at random from the set of $\binom{d}{m}$ $m$-way marginals. In \cref{fig:prac} we show the results for $1$-way marginals with $d=8$, $k_{\max}=20$, and $n=256$. This figure also contains 100 instances. In particular, there are $d$ 1-way marginals, each corresponding to an attribute. The 1-way marginal for attribute $i$ consists of 2 queries: the total query with predicate attribute $i$ equal to 0, and the total query with predicate attribute $i$ equal to 1. \cref{fig:marginal_total} shows that the Identity mechanism performs worse than the Waterfilling mechanism and both Utilitarian mechanisms. The addition of the 1-way marginals drastically increases the error of Identity compared to that of the other mechanisms. This is an example where Identity performs poorly in total error for a common type of workload; we also observed this for 1-way marginals with $d=6,7,9,10$. \cref{fig:marginal_ratio} and \cref{fig:marginal_inter} are qualitatively similar to those in the practical settings.
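The Kronecker construction of the $m$-way marginals above can be sketched as follows (an illustrative NumPy sketch for our binary-attribute setting; the function name is ours):

```python
# Sketch: an m-way marginal over d binary attributes as the Kronecker
# product A_1 (x) ... (x) A_d with A_i = I_2 for i in S, else T_2.
import itertools
import numpy as np

def marginal(S, d, dom=2):
    I, T = np.eye(dom), np.ones((1, dom))
    W = np.ones((1, 1))
    for i in range(d):
        W = np.kron(W, I if i in S else T)  # build the product left to right
    return W

d, m = 8, 1
marginals = [marginal(set(S), d) for S in itertools.combinations(range(d), m)]
assert len(marginals) == 8                # binom(8, 1) one-way marginals
assert marginals[0].shape == (2, 256)     # 2 queries over domain size n = 256
```

The empty set gives the Total workload (one all-ones row) and $S=\{1,\ldots,d\}$ gives the Identity workload, matching the 0-way and $d$-way special cases above.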
The Waterfilling mechanism continues to satisfy all the desiderata while maintaining lower error than the Independent and Identity mechanisms. Both Utilitarian mechanisms achieve lower overall error, but at the cost of violating Non-Interference.
\subsection{Tolerance for the Waterfilling Mechanism} \label{sec:Experiments_Tolerance}
The Waterfilling mechanism provably satisfies the Sharing Incentive and Non-Interference when the tolerance parameter $\tau=0$. However, if we set $\tau$ to exactly 0 or to a very small value, very similar queries (which would ideally be the same query but differ due to numerical error in the optimization) will not be combined, which can lead to a large expected error. On the other hand, too large a $\tau$ may make the Waterfilling mechanism violate the Sharing Incentive or Non-Interference, because queries which are significantly different may be put into the same bucket. To examine the effect of the tolerance in practice, we experimented with different values of $\tau$ for the HDMM Waterfilling mechanism. \cref{fig:tol} shows the case $\tau \in [0, 0.1]$, which we consider a reasonable range for the tolerance. The workloads used are 1-way marginals as defined in \cref{sec:Experiments_Marginal}. \cref{fig:tol_total} shows that the total error is large at both ends, $\tau=0.1$ and $\tau=0$. The total error is smallest for $\tau=0.01$ and is also small for $\tau = 10^{-3}$ and $\tau=10^{-4}$. This shows that there is no simple relation between the tolerance and the total error, and that we should not set $\tau=0$ exactly in practice. \cref{fig:tol_ratio} shows violations of the Sharing Incentive at $\tau=0.1$ and $\tau=0.01$. From this result, we see that $\tau=0.01$ is too large, while $\tau=10^{-3}$ (our default setting) is reasonable. We do not observe a violation of Non-Interference for any of these tolerance values.
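To make the role of $\tau$ concrete, the following sketch shows one plausible equivalence test for bucketing; the deviation metric (normalized $L_2$ distance after scaling away sign and magnitude) is an illustrative stand-in, not necessarily the exact metric used in our implementation:

```python
# Sketch of a tau-equivalence test: two queries are equivalent if, after
# normalisation, they differ (up to sign) by at most tau in L2 distance.
import numpy as np

def equivalent(q1, q2, tau):
    u1 = q1 / np.linalg.norm(q1)
    u2 = q2 / np.linalg.norm(q2)
    # exact scalar multiples coincide (up to sign) after normalisation
    return min(np.linalg.norm(u1 - u2), np.linalg.norm(u1 + u2)) <= tau

q = np.array([1.0, 2.0, 0.0])
assert equivalent(q, 3 * q, tau=1e-9)   # scalar multiples merge
assert not equivalent(q, q + np.array([0.5, 0.0, 0.0]), tau=1e-3)
```

With a larger $\tau$ the second pair would also merge, which is exactly the failure mode that can break the Sharing Incentive at $\tau=0.1$ and $\tau=0.01$.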
\begin{figure}[ht]
\begin{subfigure}[b]{0.51\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/tol_marginal_total_errors.pdf}
\caption{Total Errors}
\label{fig:tol_total}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/tol_marginal_max_ratio_errors.pdf}
\caption{Max Ratio Errors}
\label{fig:tol_ratio}
\end{subfigure}
\caption{Total Errors and Maximum Ratio Errors for 1-way marginals using our HDMM Waterfilling mechanism with different values of the tolerance $\tau$ (x-axis). Values of maximum ratio error above 1 signify a violation of the Sharing Incentive.}
\label{fig:tol}
\end{figure}
\subsection{Data-dependent Non-linear Queries} \label{sec:experiments_nonlinear}
\begin{figure*}[ht]
\resizebox{1\textwidth}{!}{%
\centering
\begin{subfigure}[b]{0.308\linewidth}
\includegraphics[width=1\textwidth]{fig/data_total_errors.pdf}
\caption{Total Errors (log scale)}
\label{fig:data_total}
\end{subfigure}
\begin{subfigure}[b]{0.254\linewidth}
\includegraphics[width=1\textwidth]{fig/data_total_errors_2.pdf}
\caption{Total Errors (zoomed in)}
\label{fig:data_total_zoom}
\end{subfigure}
\begin{subfigure}[b]{0.217\linewidth}
\includegraphics[width=1\textwidth]{fig/data_max_ratio_errors.pdf}
\caption{Max Ratio Errors}
\label{fig:data_ratio}
\end{subfigure}
\begin{subfigure}[b]{0.223\linewidth}
\includegraphics[width=1\textwidth]{fig/data_inters.pdf}
\caption{Empirical Interference}
\label{fig:data_inter}
\end{subfigure}%
}
\caption{Total Errors, Maximum Ratio Errors, and Empirical Interference for non-linear queries. Errors shown are empirical expected errors calculated using real data. Values of maximum ratio error and empirical interference above 1 signify a violation of the Sharing Incentive and Non-Interference, respectively.}
\label{fig:data}
\end{figure*}
In the previous experiments, all workloads are linear, so the expected error can be calculated without data.
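As a concrete instance of this data-independence, the standard expected-error formula for a Laplace-based matrix mechanism with strategy ${\bm{A}}$ on workload ${\bm{W}}$ can be evaluated directly from the matrices (a sketch; our experiments use the corresponding HDMM routines):

```python
# Sketch: data-independent expected total squared error of the Laplace-based
# matrix mechanism MM(A) on workload W:
#   (2 / eps^2) * sens(A)^2 * ||W A^+||_F^2,
# where sens(A) is the maximum L1 column norm of the strategy A.
import numpy as np

def expected_error(W, A, eps):
    sens = np.abs(A).sum(axis=0).max()   # L1 sensitivity of strategy A
    pinv = np.linalg.pinv(A)             # reconstruction via pseudo-inverse
    return (2.0 / eps ** 2) * sens ** 2 * np.linalg.norm(W @ pinv, "fro") ** 2

n, eps = 16, 1.0
I = np.eye(n)
# Identity strategy answering the Identity workload: n queries, error 2n/eps^2
assert np.isclose(expected_error(I, I, eps), 2 * n / eps ** 2)
```

No dataset appears anywhere in the computation, which is why the linear-workload experiments above need no real data.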
Our mechanisms can also be used for non-linear queries. We now experiment with some non-linear queries, including the \emph{mean}, \emph{median}, and \emph{percentiles} computed from a histogram. The error in this case is data-dependent and must be calculated empirically using real datasets. We use the Census Population Projections \cite{census_population}, specifically the Population Changes by Race data, choosing year 2020 and Projected Migration for Two or More Races; this dataset shows the differences between mechanisms clearly. The data size is $n=86$, representing ages from 0 to 85. As in the previous two experiments, the procedure is as follows. We fix the maximum number of analysts $k_{\max}$ and pick the number of analysts $k$ uniformly at random from the integers in $[2,k_{\max}]$. Each analyst then picks a query uniformly at random from a set of 4 queries: the \emph{mean}, \emph{median}, \emph{25th percentile}, and \emph{75th percentile}. The \emph{mean} is reconstructed from the workload containing the Total query ${\bm{T}}_n$ and a weighted-sum query, a vector of the attribute values (0 to 85 in our case). The \emph{median} and \emph{percentiles} are reconstructed from the Prefix Sum workload ${\bm{P}}_n$. The expected error here is the mean squared error over 10000 samples drawn using the Laplace mechanism. In \cref{fig:data} we show the results of 100 instances (sets of queries) with $k_{\max}=20$. \cref{fig:data_total} shows that the Independent mechanism performs much worse than all other mechanisms in terms of total error. \cref{fig:data_total_zoom} is a zoomed-in version of \cref{fig:data_total} with Independent removed. Since the answer to a non-linear query is reconstructed from the result of a different, linear workload, Utilitarian is not guaranteed to have the lowest total error; indeed, Weighted Utilitarian outperforms Utilitarian here.
The other two mechanisms have higher total errors, and the Waterfilling mechanism has a lower median total error than Identity. \cref{fig:data_ratio} and \cref{fig:data_inter} show the max ratio errors and empirical interference. Since the Independent and Identity mechanisms satisfy the Sharing Incentive and Non-Interference by definition, we omit them here. All 3 other mechanisms satisfy the Sharing Incentive, as they all have max ratio errors smaller than 1. Both Utilitarian mechanisms violate Non-Interference, as shown in \cref{fig:data_inter}; the Waterfilling mechanism satisfies Non-Interference. The outliers are due to numerical error, since we are using empirical rather than analytical expected errors. These results show that our mechanisms also work well for non-linear queries, with properties similar to the linear-query instances. The results are qualitatively similar for $k_{\max}=10$.
\item \textbf{D1:} We are not aware of any variant of the matrix mechanism that can take prior knowledge into account in the optimization. As far as we are aware, this remains a challenging open problem outside the scope of this paper. This is an interesting approach; however, the simple algorithm as stated is under-defined. Using the reviewer's rubric we were able to construct two mechanisms: one which does not outperform the baseline, and another which fails to satisfy the desiderata. The algorithms are as follows.
\begin{algorithm}[h]
\SetAlgoLined
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
Order the agents from greatest to least $\epsilon_i$ \\
\For{each agent $i$}{
\eIf{$W_i = W_j$ for some $j < i$}{
Answer $W_i$ using the answers from $W_j$}
{Answer $W_i$ with the matrix mechanism using $\epsilon_i$}
}
\caption{}
\end{algorithm}
\begin{algorithm}[h]
\SetAlgoLined
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
Order the agents from greatest to least $\epsilon_i$ \\
\For{each agent $i$}{
\eIf{$W_i$ is inferable from $W_j$ for some $j < i$}{
Reconstruct the answers to $W_i$ using the answers from $W_j$}
{Answer $W_i$ with the matrix mechanism using $\epsilon_i$}
}
\caption{}
\end{algorithm}
Consider the scenario with two analysts, Alice and Bob. Alice and Bob ask the exact same workload, and each receives the same fraction $\frac{\epsilon}{2}$ of the total privacy budget. In this case Algorithm 1 would publish the answer to the workload using $\frac{\epsilon}{2}$ of the privacy budget. The independent mechanism, however, would answer Alice's and Bob's workloads separately, each using $\frac{\epsilon}{2}$ of the privacy budget, and publish both, resulting in two noisy samples from the same distribution which both Alice and Bob can use to infer a more accurate estimate of their workload. Likewise, consider the case where Alice asks the Identity workload and Bob asks the Total workload; again, each analyst receives half of the privacy budget. Algorithm 2 would first answer Alice's workload with $\frac{\epsilon}{2}$ of the privacy budget. It would then see that Bob's workload can be reconstructed from the $n$ query answers of Alice's workload and, instead of answering Bob's workload directly, use those measurements. In this case Bob receives an answer containing $n$ samples of noise instead of the single sample he would have received under the independent mechanism, therefore violating the Sharing Incentive.
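The variance argument in the first scenario can be checked with a short calculation: for a sensitivity-1 query, Laplace noise at budget $\epsilon'$ has variance $2/\epsilon'^2$, so averaging the two independent $\epsilon/2$ answers halves the variance that Algorithm 1 leaves the analysts with:

```python
# Sketch of the variance argument: Laplace noise at budget eps' on a
# sensitivity-1 query has variance 2/eps'^2. Averaging Alice's and Bob's
# two independent eps/2 answers beats the single answer of Algorithm 1.
eps = 1.0

var_single = 2.0 / (eps / 2) ** 2   # Algorithm 1: one eps/2 answer
var_averaged = var_single / 2       # Independent: mean of two iid answers

assert var_single == 8.0
assert var_averaged == 4.0 < var_single   # averaging strictly improves
```

The same arithmetic underlies the second scenario: reconstructing Bob's Total answer from Alice's $n$ noisy counts sums $n$ independent noise samples instead of one.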
\par We have included in \cref{sec:Future_work} a brief discussion of the importance of incorporating prior knowledge for future work. \new{One of the main sources of inefficiency in independent mechanisms is that if multiple analysts ask the same query, an independent mechanism must answer the query multiple times, spending privacy budget each time it does so. The Waterfilling mechanism circumvents this by selecting a set of strategy queries for each analyst and then grouping common queries together, pooling privacy budget to answer said queries.} The mechanism assigns each individual query to a bucket. Buckets represent groups of equivalent queries that can all be answered with a single representative query. In Algorithm~\ref{alg:waterfilling}, queries are equivalent if they are scalar multiples of one another, up to a deviation bounded by the parameter $\tau$. The mechanism generates a strategy for each analyst using the given selection step. These strategies are scaled by the weight of each analyst and split into their individual queries. For each of these queries the mechanism assigns a weight (usually each query is given an equal share of the analyst's share of the privacy budget) and looks for a bucket with an equivalent representative query, to which the query is added; the bucket's associated weight is then increased by the query's weight. If no bucket with an equivalent representative query exists, a new bucket is created with the query as its representative and the query's weight as its associated weight. The mechanism then constructs a single joint set of strategy queries by taking the representative query from each bucket, weighted by the bucket's weight. The joint strategy queries are measured directly using the Laplace mechanism.
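The bucket-grouping step described above can be sketched as follows (our own simplified rendering, not the implementation used in the experiments; the equivalence test is an illustrative normalized-distance check):

```python
# Sketch of the bucket-grouping step: pool the weights of equivalent strategy
# queries so a query asked by several analysts is measured once with a
# combined budget instead of several times with split budgets.
import numpy as np

def group_into_buckets(weighted_queries, tau=1e-3):
    """weighted_queries: list of (query_vector, weight) pairs."""
    buckets = []  # each bucket is [representative_query, pooled_weight]
    for q, w in weighted_queries:
        u = q / np.linalg.norm(q)
        for bucket in buckets:
            r = bucket[0] / np.linalg.norm(bucket[0])
            if min(np.linalg.norm(u - r), np.linalg.norm(u + r)) <= tau:
                bucket[1] += w            # pool budget into this bucket
                break
        else:
            buckets.append([q, w])        # open a new bucket
    return buckets

total = np.ones(4)
queries = [(total, 0.5), (2 * total, 0.5), (np.eye(4)[0], 0.25)]
buckets = group_into_buckets(queries)
assert len(buckets) == 2                  # the two Total queries merged
assert buckets[0][1] == 1.0               # their budget was pooled
```

The representative of each bucket, weighted by the pooled budget, then forms the joint strategy that is measured once with the Laplace mechanism.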
\par \new{Since the selection step introduces some numerical instability into the algorithm, we add the parameter $\tau$, which determines how much two queries are allowed to deviate and still be assigned to the same bucket. This allows queries which are the same up to some numerical error to be assigned to the same bucket. $\tau$ lies in the range $[0,1]$, where higher values allow for more variability.} If $\tau = 0$, then queries must be exact scalar multiples of one another to be assigned to the same bucket.
\begin{figure*}[ht]
\resizebox{0.75\textwidth}{!}{%
\centering
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=1\textwidth]{fig/Independentanimation.pdf}
\caption{Independent}\label{fig:independent}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=1\textwidth]{fig/Workload_agnostic_animation.pdf}
\caption{Workload Agnostic}\label{fig:agnostic}
\end{subfigure}
}
\resizebox{0.75\textwidth}{!}{%
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=1\textwidth]{fig/Collect_first_animation.pdf}
\caption{Collect First}\label{fig:collect}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=1\textwidth]{fig/Select_first_mechanism.pdf}
\caption{Select First}\label{fig:select}
\end{subfigure}
}
\caption{\new{Design Paradigms} for Multi-Analyst DP Query Answering}
\label{fig:algos}
\end{figure*}
% Source: arXiv:2011.01192 (https://arxiv.org/abs/2011.01192), retrieved 2021-06-18.