| { |
| "paper_id": "D18-1002", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:52:39.464406Z" |
| }, |
| "title": "Adversarial Removal of Demographic Attributes from Text Data", |
| "authors": [ |
| { |
| "first": "Yanai", |
| "middle": [], |
| "last": "Elazar", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Bar-Ilan University", |
| "location": { |
| "country": "Israel" |
| } |
| }, |
| "email": "yanaiela@gmail.com" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Bar-Ilan University", |
| "location": { |
| "country": "Israel" |
| } |
| }, |
| "email": "yoav.goldberg@gmail.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation. We show that demographic information of authors is encoded in-and can be recovered from-the intermediate representations learned by text-based neural classifiers. The implication is that decisions of classifiers trained on textual data are not agnostic to-and likely condition on-demographic attributes. When attempting to remove such demographic information using adversarial training, we find that while the adversarial component achieves chance-level development-set accuracy during training, a post-hoc classifier, trained on the encoded sentences from the first part, still manages to reach substantially higher classification accuracies on the same data. This behavior is consistent across several tasks, demographic properties and datasets. We explore several techniques to improve the effectiveness of the adversarial component. Our main conclusion is a cautionary one: do not rely on the adversarial training to achieve invariant representation to sensitive features.", |
| "pdf_parse": { |
| "paper_id": "D18-1002", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation. We show that demographic information of authors is encoded in-and can be recovered from-the intermediate representations learned by text-based neural classifiers. The implication is that decisions of classifiers trained on textual data are not agnostic to-and likely condition on-demographic attributes. When attempting to remove such demographic information using adversarial training, we find that while the adversarial component achieves chance-level development-set accuracy during training, a post-hoc classifier, trained on the encoded sentences from the first part, still manages to reach substantially higher classification accuracies on the same data. This behavior is consistent across several tasks, demographic properties and datasets. We explore several techniques to improve the effectiveness of the adversarial component. Our main conclusion is a cautionary one: do not rely on the adversarial training to achieve invariant representation to sensitive features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Consider automated systems that are used for determining credit ratings, setting insurance policy rates, or helping in hiring decisions about individuals. We would like such decisions to not take into account factors such as the gender or the race of the individual, or any other factor which we deem to be irrelevant to the decision. We refer to such irrelevant factors as protected attributes. The naive solution of not including protected attributes in the features to a Machine Learning system is insufficient: other features may be highly correlated with-and thus predictive of-the protected attributes (Pedreshi et al., 2008) . For example, in Credit Score modeling, text might help in credit score decisions (Ghailan et al., 2016) . By using the raw text as is, a discrimination issue might arise, as textual information can be predictive of some demographic factors (Hovy et al., 2015 ) and author's attributes might correlate with target variables .", |
| "cite_spans": [ |
| { |
| "start": 608, |
| "end": 631, |
| "text": "(Pedreshi et al., 2008)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 715, |
| "end": 737, |
| "text": "(Ghailan et al., 2016)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 874, |
| "end": 892, |
| "text": "(Hovy et al., 2015", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we are interested in languagebased features. It is well established that textual information can be predictive of age, race, gender, and many other social factors of the author (Koppel et al., 2002; Burger et al., 2011; Nguyen et al., 2013; Weren et al., 2014; Verhoeven and Daelemans, 2014; Rangel et al., 2016; Blodgett et al., 2016) , or even the audience of the text (Voigt et al., 2018) . Thus, any system that incorporates raw text into its decision process is at risk of indirectly conditioning on such signals. Recent advances in representation learning suggest adversarial training as a mean to hide the protected attributes from the decision function (Section 2). We perform a series of experiments and show that: (1) Information about race, gender and age is indeed encoded into intermediate representations of neural networks, even when training for seemingly unrelated tasks and the training data is balanced in terms of the protected attributes (Section 4); (2) The adversarial training method is indeed effective for reducing the amount of protected encoded information... (3) ...but in some cases even though the adversarial component seems to be doing a perfect job, a fair amount of protected information still remains, and can be extracted from the encoded representations (Section 5.1).", |
| "cite_spans": [ |
| { |
| "start": 191, |
| "end": 212, |
| "text": "(Koppel et al., 2002;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 213, |
| "end": 233, |
| "text": "Burger et al., 2011;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 234, |
| "end": 254, |
| "text": "Nguyen et al., 2013;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 255, |
| "end": 274, |
| "text": "Weren et al., 2014;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 275, |
| "end": 305, |
| "text": "Verhoeven and Daelemans, 2014;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 306, |
| "end": 326, |
| "text": "Rangel et al., 2016;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 327, |
| "end": 349, |
| "text": "Blodgett et al., 2016)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 385, |
| "end": 405, |
| "text": "(Voigt et al., 2018)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This suggests that when working with text data it is very easy to condition on sensitive properties by mistake. Even when explicitly using the adversarial training method to remove such properties, one should not blindly trust the adversary, and be careful to ensure the protected attributes are in-deed fully removed. We explore means for improving the effectiveness of the adversarial training procedure (section 5.2). 1 However, while successful to some extent, none of the methods fully succeed in removing all demographic information. Our main message, then, remains cautionary: if the goal is to ensure fairness or invariant representation, do not trust adversarial removal of features from text inputs for achieving it.", |
| "cite_spans": [ |
| { |
| "start": 421, |
| "end": 422, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We follow a setup in which we have some labeled data D composed of documents x 1 , ..., x n and task labels y 1 , ..., y n . We wish to train a classifier f that accurately predicts the main task labels y i . Each data point x i is also associated with a protected attribute z i , and we want the decision y i = f (x i ) to be oblivious to z i . Following (Ganin and Lempitsky, 2015; Xie et al., 2017) , we structure f as an encoder h(x) that maps x into a representation vector h x , and a classifier c(h(x)) that is used for predicting y based on", |
| "cite_spans": [ |
| { |
| "start": 356, |
| "end": 383, |
| "text": "(Ganin and Lempitsky, 2015;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 384, |
| "end": 401, |
| "text": "Xie et al., 2017)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "h x . If h x i is not predictive of z i , then the main task prediction f (x i ) = c(h(x i )) does not depend on z i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We say that a protected attribute z has leaked if we can train a classifier c (h x i ) to predict z i with an accuracy beyond chance level, and that the protected attribute is guarded if we cannot train such a classifier. We say that a classifier f (x) = c(h(x)) is guarded if z is guarded, and that it is leaky with respect to z if z leaked.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Adversarial Training In order to make f oblivious to z, we follow the adversarial training setup (Goodfellow et al., 2014; Ganin and Lempitsky, 2015; Beutel et al., 2017; Xie et al., 2017) . During training, an adversarial classifier adv(h x ) is trained to predict z, while the encoder h is trained to make adv fail. Concretely, the training procedure tries to jointly optimize both quantities:", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 122, |
| "text": "(Goodfellow et al., 2014;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 123, |
| "end": 149, |
| "text": "Ganin and Lempitsky, 2015;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 150, |
| "end": 170, |
| "text": "Beutel et al., 2017;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 171, |
| "end": 188, |
| "text": "Xie et al., 2017)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "arg min adv L(adv(h(x i )), z i ) arg min h,c L(c(h(x i )), y i ) \u2212 L(adv(h(x i )), z i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where L(y , y) is the loss function (in our case, cross entropy). This objective results in creating the representation h x s.t. it's maximally informative for the main task, while at the same time minimally informative of the protected attribute. The optimization is performed in practice using the gradient-reversal layer (GRL) method (Ganin and Lempitsky, 2015) . The GRL is a layer g \u03bb that is inserted between the encoded vector h x and the adversarial classifier adv. During the forward pass the layer acts as the identity, while during backpropagation it scales the gradients passed through it by \u2212\u03bb, causing the encoder to receive the opposite gradients from the adversary. The metaparameter \u03bb controls the intensity of the reversal layer. This results in the objective:", |
| "cite_spans": [ |
| { |
| "start": 337, |
| "end": 364, |
| "text": "(Ganin and Lempitsky, 2015)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "arg min h,c,adv L(c(h(x i )), y i )+L(adv(g \u03bb (h(x i ))), z i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Attacker Network To test the effectiveness of the adversarial training, we use an attacker network att(h x ). After the classifier c(h(x)) is fully trained, we use the encoder to obtain representations h, and train the attacker network to predict z based on h, without access to the encoder or to the original inputs x that resulted in h. If, after training, the attacker can predict z on unseen examples with an accuracy of beyond chance level, then the attribute z leaked to the representation, and the classifier is not guarded. Network Architecture In our setup, an example x i is a sequence of tokens w 1 , ..., w m i and the encoder is a one layer LSTM network that reads in the associated embedding vectors and returns the final state: h = LST M (w 1:m ). The classifier c and the adversarial adv are both multi-layer perceptrons with one hidden layer, sharing the same hidden layer size and activation function (tanh). 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To perform our experiments, we need a reasonably large dataset in which the data-points x contain textual information, and for which we have both main-task labels y and protected attribute labels z. While our motivating example used prediction tasks for credit rating, insurance rates or hiring decisions, to the best of our knowledge there are no publicly available datasets for these sensitive tasks that meet our criteria. We thus opted to use much less sensitive main-tasks, for which we can obtain the needed data. We focus on Twitter messages, and our protected attributes are binary-race (non-hispanic Whites vs. non-hispanic Blacks), binary-gender (Male vs. Female) 3 and binaryage (18-34 vs. 35+). As main tasks we chose binary emoji-based sentiment prediction and binary tweet-mention prediction. Both the sentiment and the mention prediction tasks are not inherently correlated with race, gender or age. Protected attributes leakage in these seemingly benign main-tasks is a strong indicator that such leakage is likely to occur also in more sensitive tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data, Tasks, and Protected Attributes", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Main Tasks: Sentiment and Mention-detection Both tasks can be derived automatically from twitter data. We construct a binary \"sentiment\" task by identifying a subset of emojis which are associated with positive and negative sentiment, 4 identifying tweets containing these emojis, assigning them with the corresponding sentiment and removing the emojis. Tweets containing emojis from both sentiment lists are discarded. The binary mention task is to determine if a tweet mentions another user, i.e, classifying conversational vs. nonconversational tweets. We derive this dataset by identifying tweets that include @mentions tokens, and removing all such tokens from the tweets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data, Tasks, and Protected Attributes", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Protected: Race The race annotation is based on the dialectal tweets (DIAL) corpus from (Blodgett et al., 2016), consisting of 59.2 million tweets by 2.8 million users. Each tweet is associated with predicted \"race\" information which was predicted using a technique that takes into account the geolocation of the author and the words in the tweet. We focus on the AAE (African-American English) and SAE (Standard American English) categories, which we use as proxies for non-Hispanic blacks and non-Hispanic whites.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data, Tasks, and Protected Attributes", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We chose only annotations with confidence (the probability of the authors' race) of above 80%. Due to its construction, the race annotations in this dataset are highly correlated with the language being used. As such, the data reflects an extreme case in which the underlying language is very predictive of the protected attribute.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data, Tasks, and Protected Attributes", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Protected: Age and Gender We use data from the PAN16 dataset (Rangel et al., 2016) , containing manually annotated Age and Gender information of 436 Twitter users, along with up to 1k tweets for each user. User annotation was performed by consulting the user's LinkedIn profile. Gender was determined by considering the user's name and photograph, discarding unclear cases. Age range was determined by birth-date which was published on the user's profile, or by mapping their degree starting date.", |
| "cite_spans": [ |
| { |
| "start": 61, |
| "end": 82, |
| "text": "(Rangel et al., 2016)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data, Tasks, and Protected Attributes", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Data-splits From the DIAL corpus we extracted 166K and 10K tweets for training and development purpose respectively (after cleaning and extracting relevant tweets), whereas for the PAN16 dataset we collected 160K tweets for training and 10K for development. The train/development split in both phases of the training (task-training and attacker-training) is the same. This is the worst possible scenario for the attacker, as it is training on the exact representations the adversary attempted to remove the protected attribute from. Each split is balanced with respect to both the main and the protected labels: a random prediction of each variable is likely to result in 50% accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data, Tasks, and Protected Attributes", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Metrics Throughout this paper, we measure leakage using accuracy. We say that the protected attribute has leaked if an attacker manages to predict the protected attribute with better than 50% accuracy, which is always the probability of that attribute (P (Z) = 0.5). In Appendix A we relate our metric to more standard fairness metrics, and prove that in our setup a guarded predictor guarantees demographic parity, equality of odds, and equality of opportunity. Note however that we also show empirically that such guarded predictors are very hard to attain in practice.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data, Tasks, and Protected Attributes", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In-dataset Accuracy Upper-bounds We begin by examining how well can we perform on each task (both main-tasks and protected attributes) when training the encoder and classifier directly on that task, without any adversarial component. This provides an upper bound on the protected attribute leakage for the main tasks results. The results in Table 1 indicate that the classifiers achieve reasonable accuracies for the main tasks. 5 For the protected attributes, race is highly predictable (83.9%) while age and gender can also be recovered at above 64% accuracy. Table 1 : Accuracies when training directly towards a single task.", |
| "cite_spans": [ |
| { |
| "start": 429, |
| "end": 430, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 341, |
| "end": 348, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 562, |
| "end": 569, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Baselines and Data Leakage", |
| "sec_num": "4" |
| }, |
| { |
| "text": "M+P+ M+P\u2212 M\u2212P\u2212 M\u2212P+ (a) balanced M+P+ M+P\u2212 M\u2212P\u2212 M\u2212P+ (b) unbalanced", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines and Data Leakage", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Leakage When training directly for the protected attributes, we can recover them with relatively high accuracies. But is information about them being encoded when we train on the main tasks? In this set of experiments, we encode the training and validation sets using the encoder trained on the main task, and train the attacker network to predict the protected attributes based on these vectors. This experiment suggests an upper bound on the amount of leakage of protected attributes when we do not actively attempt to prevent it. The Balanced section in Table 2 summarizes the validation-set accuracies. While the numbers are lower than when training directly (Table 1) , they are still high enough to extract meaningful and possibly highly sensitive information (e.g. DIAL Race direct prediction is 83.9% while DIAL Race leakage on the balanced Sentiment task is 64.5%).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 557, |
| "end": 564, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 663, |
| "end": 672, |
| "text": "(Table 1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Baselines and Data Leakage", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Leakage: Unbalanced Data The datasets we considered were perfectly balanced with respect to both main task and protected attribute labels (Figure 1a) . Such extreme case is not representative of real-world datasets, in which a dataset may be well balanced w.r.t. the main task labels but not the protected attribute. For example, when training a classifier to predict a fit for managerial position based on Curriculum Vitae (CV) of candidates, the CV dataset may be perfectly balanced according to the managerial / non-managerial variable, but, because of existing social biases, CVs of females might be under-represented in the managerial category and over-represented in the nonmanagerial one. In such a situation, the classifier may perpetuate the bias by learning to favor males over females for managerial positions. We simulate this more realistic scenario by constructing unbalanced datasets in which the main tasks (sentiment/mention) remain balanced but the protected class proportions within each main class are not, as demonstrated in Figure 1b . For example, in the sentiment/gender case, we set the positivesentiment class to contain 80% male and 20% female tweets, while the negative-sentiment class contains 20% male and 80% female tweets. We then follow the leakage experiment on the unbalanced datasets. The attacker is trained and tested on a balanced dataset. Otherwise, the attacker can perform quite well on the male/female task simply by learning to predict sentiment, which does not reflect leakage of gender data to the representation. When training the attacker on balanced data, its decisions cannot rely on the sentiment information encoded in the vectors, and must look for encoded information about the protected attributes. The results in Table 2 indicate that both task accuracy and attribute leakage are stronger in the unbalanced case.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 138, |
| "end": 149, |
| "text": "(Figure 1a)", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 1046, |
| "end": 1055, |
| "text": "Figure 1b", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 1769, |
| "end": 1776, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Baselines and Data Leakage", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Leakage: Real-world Example The above experiments used artificially constructed datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines and Data Leakage", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Here, we demonstrate leakage using a popular encoder trained for emotion detection: the Deep-Moji encoder (Felbo et al., 2017) trained to predict the most suitable emoji usage for a sentence (one of 64 in total), based on 1.2 billion tweets. The model is advertised as a good encoder for encoding sentences into a representation that is highly predictive of sentiment, mood, emotion and sarcasm. Does it also capture protected attributes?", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 126, |
| "text": "(Felbo et al., 2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines and Data Leakage", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We encode the sentences of the different protected attributes using the DeepMoji encoder and train three different attackers to predict race, gender and age. emoji usage is highly correlated with these properties.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines and Data Leakage", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Leakage of protected attributes information into the internal representation of the network when training on seemingly unrelated tasks is very common. We explore the means of mitigating such leakage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mitigating Data Leakage", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We repeat the experiments in Table 2 with an adversarial component (Ganin and Lempitsky, 2015) as described in Section 2, in order to actively remove the protected attribute information from the encoded representation during training. Note that the adversarial objective is in odds with the maintask one: by removing the protected attribute information from the encoder, we may also hurt its ability to encode information about the main task. Figure 2 shows the main task and adversary prediction accuracies on the development set as training progresses, for the Sentiment/Race pair. After an initial peak in task prediction accuracy, the adversary prediction drops and starts to fluctuate around chance level (50%), as desired, along with a drop in main task accuracy as well. The adversary's accuracy remain around chance level throughout the entire training process, suggesting that the adversarial training is indeed effective for removing the protected attribute information from the encoded representation. These trends are persistent for all main-task/protected-attribute pairs we tried.", |
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 94, |
| "text": "(Ganin and Lempitsky, 2015)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 29, |
| "end": 36, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 443, |
| "end": 451, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Adversarial Training", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "However, training the attacker network on the resulting encoder vectors reveals a different story. For example, when considering the encoder after 50 training epochs (adversary accuracy of 49.0%), the attacker reaches 56.0% accuracy: substantially higher than the adversarial's success rate, despite sharing the exact same architecture, and being trained and tested on the exact same dataset. Table 3 summarizes the attacker's recovery rate on the adversarialy-trained encoders for the different settings. In all cases, the adversarial's success rate is around 50%, while the attacker's rate is substantially higher. The attacker's rates are lower than in Table 1 , indicating the adversarial training is effective in removing some of the protected attribute. However, a substantial amount of information managed to leak past the adversary, despite its seemingly perfect performance. Table 3 : Performances on different datasets with an adversarial training. \u2206 is the difference between the attacker score and the corresponding adversary's accuracy.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 393, |
| "end": 400, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 656, |
| "end": 663, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 884, |
| "end": 891, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Adversarial Training", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Attacker's Accuracy on Unseen Data We verify that the attacker's recovery accuracy persists also on the more realistic scenario in which the attacker is applied to encoded sentences that did not participate in the adversarial training. We constructed an additional dataset of 166K completely unseen samples from the Sentiment/Race case. As expected, the attacker works even better in this case, reaching an accuracy of 59.7% Vs. 56.0% on the original development set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Training", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We explore means of strengthening the adversarial component, by tuning its capacity and its weight, as well as by using a novel adversarial-ensemble configuration.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Capacity We increase the capacity of the adversarial component by increasing its hidden dimension, while keeping the attacker's hidden dimension constant at 300 dimensions. We try hidden dimensions of size 500, 1000, 2000, 5000 and 8000.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Weight We experiment with different weighting of the adversarial component during training by tuning the \u03bb parameter, trying the values 0.5, 1.0 (default), 1.5, 2, 3, 5 (with values above 5 the main task training became extremely unstable, not raising above 50%).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Ensemble An alternative to using larger \u03bb values is to introduce several adversaries. The potential benefit of this approach is that rather than focusing harder on removing a single feature, here the different adversaries could each focus on a different aspect of the representation. This approach is potentially better suited to deal with language variability. Concretely, we suggest the following adaptation to the adversarial loss to incorporate k adversaries with different random initializations:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "L y (c(h(x)), y) + k j=1 L z (adv j (g \u03bb (h(x))), z)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Other Attempts We also experienced with several other techniques: reinitializing the adversarial weights every t epochs; training the adversary without propagating the error to the encoder components for t epochs and only then starting to propagate; using adversaries with more hidden layers; adding dropout on the encoded vectors and within the encoder. None of these yielded improvements over the above methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Results All methods are effective to some extent, Table 4 summarizes the results. Increasing the capacity of the adversarial network helped reduce the protected attribute's leakage, though different capacities work best on each setup. On the Sentiment/Race task, none of the higher dimensional adversaries worked better than the 300-dim one, on the PAN16 dataset it did. On PAN16/Gender the 8000-dim adversary performed best, and on PAN16/Age, the 500-dim one.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 50, |
| "end": 57, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Increasing the weight of the adversary through the \u03bb parameter also has a positive effect on the result (except on the Sentiment/Race pair). However, too large \u03bb values make training unstable and require many more epochs for the main task to stabilize at a satisfactory accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The adversarial ensemble method with 2 adversaries achieves 57.4% on Sentiment/Race, as opposed to 56.0% with a single one, and with 5 different adversaries we achieve 54.8%. On the PAN16 dataset, larger ensembles are more effective. However, a potential issue with the ensemble method is that larger ensembles reduce training stability, similar to the effect of increasing the \u03bb value. For example, with 5 adversaries, the main-task accuracy remained at random for 5 epochs and only began rising at the 6th epoch. With 10 adversaries, the main task could not be trained at all.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "To summarize, while all methods are effective to some extent, it appears that (a) no single method or parameter setting performs best across the different setups; and (b) no method succeeds in completely preventing the leakage of the protected attributes. Combining the different methods (ensembles of larger networks, larger networks with larger \u03bb, etc.) did not improve the results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Strengthening the Adversarial Component", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We repeated the same set of experiments on the unbalanced Sentiment/Race corpus (Table 5). In this setup the results are broadly similar: increasing the adversarial capacity or \u03bb is ineffective, and even increases the attacker's recovery rate. An ensemble of 5 adversaries does, however, manage to reduce the leakage, though the result is still far from satisfactory.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 80, |
| "end": 89, |
| "text": "(Table 5)", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Unbalanced Data Results", |
| "sec_num": null |
| }, |
| { |
| "text": "The gap between the adversary's dev-set accuracy and the after-the-fact attacker accuracy on the same data is surprising. To better understand the phenomenon, we perform further analysis on the Sentiment/Race pair with the default single adversary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Embedding Vs. RNN Recall that the attacker network tries to extract as much information as possible from the encoder's output. The encoder consists of two components: (1) an embedding matrix and (2) an RNN. The leakage can therefore stem from either component (or from their combination).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We conduct the following experiment to determine which part contributes more to the leakage: we create new encoders by composing two existing ones, an encoder with high leakage (Leaky, the baseline encoder) and an encoder with low leakage (Guarded, the 5-Ensemble adversary encoder). We fuse the two encoders by combining the embedding matrix of the Leaky encoder with the RNN module of the Guarded encoder, and vice versa. This yields two new encoders: one with a \"leaky\" embedding matrix and a \"guarded\" RNN module (Leaky-EMB), and one with a \"guarded\" embedding matrix and a \"leaky\" RNN module (Leaky-RNN). We compare Leaky-EMB and Leaky-RNN to gauge which module contributes more to the data leakage, training attacker networks on each encoder's outputs to predict the protected attributes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "6" |
| }, |
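The fusion experiment can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: an encoder is reduced to an (embedding matrix, recurrent module) pair, with the recurrent module stood in by a single linear map over the mean embedding.

```python
import numpy as np

# Hypothetical sketch of the encoder-fusion experiment (not the paper's code).
# An encoder = (embedding matrix E, recurrent module R); R is stood in here
# by a single linear map applied to the mean embedding of the sentence.

vocab, dim = 10, 4

def make_encoder(seed):
    r = np.random.default_rng(seed)
    return {"E": r.normal(size=(vocab, dim)), "R": r.normal(size=(dim, dim))}

def encode(enc, token_ids):
    return enc["E"][token_ids].mean(axis=0) @ enc["R"]

leaky, guarded = make_encoder(0), make_encoder(1)

# Fused encoders: Leaky-EMB = leaky embeddings + guarded recurrent module;
# Leaky-RNN is the reverse combination.
leaky_emb = {"E": leaky["E"], "R": guarded["R"]}
leaky_rnn = {"E": guarded["E"], "R": leaky["R"]}

sent = np.array([1, 3, 7])
vecs = {name: encode(e, sent)
        for name, e in [("Leaky", leaky), ("Guarded", guarded),
                        ("Leaky-EMB", leaky_emb), ("Leaky-RNN", leaky_rnn)]}
# An attacker network would now be trained on each encoder's outputs to
# predict the protected attribute.
```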
| { |
| "text": "Embedding: Leaky, Guarded. RNN Leaky: 64.5, 67.8. RNN Guarded: 59.3, 54.8. Table 6 : Accuracies of the protected attribute with different encoders. Table 6 summarizes the results, implying that the leakage is caused mainly by the RNN, and less by the embedding matrix. 6", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 75, |
| "end": 82, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 148, |
| "end": 155, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Embedding", |
| "sec_num": null |
| }, |
| { |
| "text": "We are interested in tweets whose protected attribute (race) is correctly predicted by the adversary. However, at accuracy rates below 60%, many of the correct predictions could be attributed to chance. To identify the relevant examples, we repeated the Sentiment/Race default adversary experiment 10 times with different random seeds. We then trained 10 attacker networks, and used each of them to label all examples in the development set. We then looked for tweets which are consistently and correctly classified by at least 9 attackers (776 correct and 946 consistent examples in total). Table 7 shows some of these cases. Many of them include tokens (Naw, Bestfrand, tan) and syntactic structures (Going over Bae house) which are indeed predictive, though not the most salient features.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 592, |
| "end": 599, |
| "text": "Table 7", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Consistent Leakage: Examples Inspection", |
| "sec_num": null |
| }, |
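The consistency filter described above can be sketched with made-up predictions (not the paper's code): given each attacker's labels for the dev set, keep the examples that at least 9 of the 10 attackers classify correctly.

```python
import numpy as np

# Toy sketch of the consistency filter (made-up labels, not the paper's data).
# preds[i, j] = attacker i's predicted protected label for dev example j.
rng = np.random.default_rng(0)
n_attackers, n_examples = 10, 6
gold = rng.integers(0, 2, size=n_examples)            # gold protected labels
preds = rng.integers(0, 2, size=(n_attackers, n_examples))

correct_counts = (preds == gold).sum(axis=0)          # correct votes per example
consistent = np.where(correct_counts >= 9)[0]         # >= 9 of 10 attackers right
```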
| { |
| "text": "Leakage via Embeddings Even though we found the RNN to be far more responsible for the leakage than the embeddings, the embeddings still contribute to it and are easier to inspect. Therefore, we turn to inspect the encoders' embeddings. We hypothesize that a possible reason for the adversarial network's inability to completely remove the protected race information is word frequency: rare words, which might be strongly identified with one group, did not receive enough updates during training and therefore remained predictive of one of the groups. To quantify this, we compared two vocabularies: words appearing in tweets that were consistently and correctly classified (9 or 10 out of 10 times) by the different attackers, and words appearing in tweets that were classified at chance level (about 50%) across the attackers. If our hypothesis is correct, we expect words from the second group to be more frequent than words from the first group. We discard words appearing in both groups, and associate each word with its training-set frequency. A one-tailed Mann-Whitney U test (Mann and Whitney, 1947) showed the effect is highly significant, with p < e^\u221212.", |
| "cite_spans": [ |
| { |
| "start": 1077, |
| "end": 1101, |
| "text": "(Mann and Whitney, 1947)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Consistent Leakage: Examples Inspection", |
| "sec_num": null |
| }, |
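The frequency comparison reduces to a one-tailed Mann-Whitney U test over two lists of training-set word frequencies. Below is a minimal hand-rolled sketch with made-up counts (the real vocabularies come from the paper's data); `scipy.stats.mannwhitneyu` with `alternative="less"` would give the corresponding p-value.

```python
# Minimal hand-rolled Mann-Whitney U statistic on made-up word frequencies
# (the real vocabularies and counts come from the paper's training data).
# U counts, over all cross-group pairs, how often a consistent-tweet word is
# rarer than a chance-tweet word; ties contribute 0.5.

consistent_freqs = [1, 2, 2, 3, 5]   # words from consistently classified tweets
chance_freqs = [4, 6, 7, 7, 9, 12]   # words from chance-level tweets

u = sum((a < b) + 0.5 * (a == b)
        for a in consistent_freqs
        for b in chance_freqs)
# A value of u near len(consistent_freqs) * len(chance_freqs) supports the
# hypothesis that consistent-tweet words are rarer.
```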
| { |
| "text": "Data Overfitting? Standard ML setups often suffer from overfitting on the training data, especially when using neural networks, which tend to memorize the data they encounter. In the adversarial setup, such overfitting could result in the encoder-adversary pair working together to perfectly clean the attributes from the training data without generalization, which could explain the attacker's success. Is this what happened? We test this hypothesis by running the same attacker-network experiments solely on the training data: we train the attackers on 90% of the training data and use the remaining 10% as a held-out set. If overfitting had occurred, the attacker accuracy would likely be around 50%. Alas, this is not the case. Table 8 summarizes the training accuracies of the attacker networks. The Mention/Race task achieves the highest score, 64.3%, whereas the Mention/Gender task achieves the lowest, 58.1%. Although the accuracies are much higher when training directly to predict these attributes without the adversarial setup, a substantial amount of signal is still left, even in the training data.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 782, |
| "end": 789, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Consistent Leakage: Examples Inspection", |
| "sec_num": null |
| }, |
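The overfitting check is a plain split of the training encodings; a sketch under assumed shapes (hypothetical names, not the paper's code):

```python
import numpy as np

# Sketch of the overfitting check (hypothetical, not the paper's code):
# the attacker is trained on 90% of the *training-set* encodings and
# evaluated on the remaining 10% held-out portion.
rng = np.random.default_rng(0)
n = 100                                  # number of training examples
idx = rng.permutation(n)
train_idx, held_idx = idx[:90], idx[90:]
# attacker.fit(enc[train_idx], z[train_idx])    (hypothetical attacker API)
# attacker.score(enc[held_idx], z[held_idx])    # ~50% would indicate overfitting
```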
| { |
| "text": "The fact that intermediary vector representations that are trained for one task are predictive of another is not surprising: it is at the core of the success of NLP methods for deriving \"generic\" word and sentence representations (e.g. Word2vec (Mikolov et al., 2013) , Skipthought vectors (Kiros et al., 2015) , Contextualized Word Representations (Melamud et al., 2016; Peters et al., 2018) etc.). While usually considered a positive feature, it can often have undesired consequences one should be aware of and potentially control for. Several works document biases and stereotypes that are captured by unsupervised word embeddings (Bolukbasi et al., 2016; Caliskan et al., 2017) and ways of mitigating them (Bolukbasi et al., 2016; Zhang et al., 2018) . Bias and stereotyping were also documented on a common NLP dataset (Rudinger et al., 2017) . While these works are concerned with the learned representations encoding unwanted biases about the world, our concern is with capturing potentially sensitive demographic information about individual authors of the text.", |
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 267, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 290, |
| "end": 310, |
| "text": "(Kiros et al., 2015)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 349, |
| "end": 371, |
| "text": "(Melamud et al., 2016;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 372, |
| "end": 392, |
| "text": "Peters et al., 2018)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 634, |
| "end": 658, |
| "text": "(Bolukbasi et al., 2016;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 659, |
| "end": 681, |
| "text": "Caliskan et al., 2017)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 710, |
| "end": 734, |
| "text": "(Bolukbasi et al., 2016;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 735, |
| "end": 754, |
| "text": "Zhang et al., 2018)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 824, |
| "end": 847, |
| "text": "(Rudinger et al., 2017)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Removing sensitive attributes (demographic or otherwise) from intermediate representations in order to achieve fair classification has been explored by solving an optimization problem (Zemel et al., 2013) , as well as by employing adversarial training (Edwards and Storkey, 2015; Louizos et al., 2015; Xie et al., 2017; Zhang et al., 2018) , focusing on structured features. Adversarial training was also applied for Image anonymization AAE (\"non-hispanic blacks\") SAE (\"non-hispanic whites\") My Brew Eattin I want to be tan again Naw im cool Why is it so hot in the house ?! Tonoght was cool Been doing Spanish homework for 2 hours . My momma Bestfrand died I wish I was still in Spain Enoy yall day Ahhhhh so much homework . Going over Bae house TWITTER-ENTITY I miss you too ! She not texting or calling ? Ok I want to move to california Real relationships go thru real shit Lol , I don't even go here . About to spend my entire check IDGAF Ahhhhh so much homework . Getting ready for school I'm so tired . Table 8 : Attacker's performance on different datasets. Results are on a training set 10% heldout. \u2206 is the difference between the attacker score and the corresponding adversary's accuracy. (Edwards and Storkey, 2015; Feutry et al., 2018) . In contrast, we consider features that are based on short user-authored text. Several works apply adversarial training to textual data, in order to learn encoders that are invariant to some properties of the text (Chen et al., 2016; Conneau et al., 2017; Zhang et al., 2017; Xie et al., 2017) . As their main motivation is to remove information about domain or language in order to improve transfer learning, domain adaptation, or end task accuracy, they were less concerned with the ability to recover information from the resulting representation, and did not evaluate it directly as we do here.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 204, |
| "text": "(Zemel et al., 2013)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 252, |
| "end": 279, |
| "text": "(Edwards and Storkey, 2015;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 280, |
| "end": 301, |
| "text": "Louizos et al., 2015;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 302, |
| "end": 319, |
| "text": "Xie et al., 2017;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 320, |
| "end": 339, |
| "text": "Zhang et al., 2018)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 1200, |
| "end": 1227, |
| "text": "(Edwards and Storkey, 2015;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1228, |
| "end": 1248, |
| "text": "Feutry et al., 2018)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1464, |
| "end": 1483, |
| "text": "(Chen et al., 2016;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1484, |
| "end": 1505, |
| "text": "Conneau et al., 2017;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1506, |
| "end": 1525, |
| "text": "Zhang et al., 2017;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 1526, |
| "end": 1543, |
| "text": "Xie et al., 2017)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1010, |
| "end": 1017, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Recent work on creating private representations in the text domain (Li et al., 2018) shares our motivation of removing unintended demographic attributes from the learned representation using adversarial training. However, they report only the discrimination accuracies of the adversarial component, and do not train another classifier to verify that the representations are indeed clear of the protected attribute. As our work shows, trusting the adversary is insufficient, and external verification is crucial.", |
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 84, |
| "text": "(Li et al., 2018)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Finally, our work is motivated by the desire for fairness. We use a definition in which a fair classification is one that does not condition on a certain attribute (fairness by blindness), and evaluate the ability to achieve text-derived representations that are blind to a property we wish to protect. Many other definitions of fairness exist, including demographic parity, equality of odds and equality of opportunity (see e.g. discussion in (Hardt et al., 2016; Beutel et al., 2017) ). Under our setup, blindness guarantees these metrics (Appendix A).", |
| "cite_spans": [ |
| { |
| "start": 444, |
| "end": 464, |
| "text": "(Hardt et al., 2016;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 465, |
| "end": 485, |
| "text": "Beutel et al., 2017)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We show that demographic information leaks into intermediate representations of neural networks trained on text data. Systems that train on text data and do not want to condition on demographic information must take active steps against accidental conditioning. Our experiments suggest that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "(1) Adversarial training is effective for mitigating protected attribute leakage, but, when dealing with text data, may fail to remove it completely.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "(2) When using the adversarial training method, the adversary score during training cannot be trusted, and must be verified with an externally trained attacker, preferably on unseen data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "(3) Tuning the capacity and weight of the adversary, as well as using an ensemble of several adversaries, can improve the results. However, no single method is the most effective in all cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "The code and data-acquisition scripts are available at: https://github.com/yanaiela/demog-text-removal", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Further details regarding the architecture and training parameters can be found in the supplementary materials.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "While gender is a non-binary construct, many real-world decisions are unfortunately still influenced by hard binary gender categories. We thus consider binary gender to be a useful approximation in our context. 4 The complete list is available in Appendix C.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "While the sentiment score may seem low, we manually verified the erroneous predictions and found that many of them are indeed ambiguous with respect to sentiment, e.g. sentences like \"I can't take Amanda seriously\" and \"You make me so angry, yet you make me so happy.\", which were predicted negative and positive respectively, but whose gold labels were the opposite.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A discrepancy exists to some extent in the new encoders, as their parts originate from different models that were trained separately. To test whether the fusion is valid, we train a different classifier on top of the new encoders to predict the main task. The combination of the Leaky RNN with the Guarded embeddings results in 65.4% on the sentiment task, and the other combination results in 60.9%, as opposed to 67.5% and 63.8% for the Leaky and Guarded models, respectively. As the new models are on par with the original ones, we conclude that the new encoders are valid.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Moni Shahar, Felix Kreuk, Yova Kementchedjhieva and the BIU NLP lab for fruitful conversations and helpful comments. We also thank Su Lin Blodgett for her help in supplying the DIAL dataset and for clarifications. This work was supported in part by the Israel Science Foundation (grant number 1555/15) and the German Research Foundation via the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Data decisions and theoretical implications when adversarially learning fair representations", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Beutel", |
| "suffix": "" |
| }, |
| { |
| "first": "Jilin", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhe", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ed", |
| "middle": [ |
| "H" |
| ], |
| "last": "Chi", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.00075" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H Chi. 2017. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Demographic dialectal variation in social media: A case study of african-american english", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Su Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Lisa", |
| "middle": [], |
| "last": "Blodgett", |
| "suffix": "" |
| }, |
| { |
| "first": "Brendan O'", |
| "middle": [], |
| "last": "Green", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Connor", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1119--1130", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of african-american english. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 1119-1130, Austin, Texas. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", |
| "authors": [ |
| { |
| "first": "Tolga", |
| "middle": [], |
| "last": "Bolukbasi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "James", |
| "suffix": "" |
| }, |
| { |
| "first": "Venkatesh", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [ |
| "T" |
| ], |
| "last": "Saligrama", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kalai", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "4349--4357", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Ad- vances in Neural Information Processing Systems, pages 4349-4357.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Discriminating gender on twitter", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "John", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Henderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Guido", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zarrella", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the conference on empirical methods in natural language processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1301--1309", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John D Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on twitter. In Proceedings of the conference on empir- ical methods in natural language processing, pages 1301-1309. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Semantics derived automatically from language corpora contain human-like biases", |
| "authors": [ |
| { |
| "first": "Aylin", |
| "middle": [], |
| "last": "Caliskan", |
| "suffix": "" |
| }, |
| { |
| "first": "Joanna", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bryson", |
| "suffix": "" |
| }, |
| { |
| "first": "Arvind", |
| "middle": [], |
| "last": "Narayanan", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Science", |
| "volume": "356", |
| "issue": "6334", |
| "pages": "183--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Adversarial deep averaging networks for cross-lingual sentiment classification", |
| "authors": [ |
| { |
| "first": "Xilun", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Athiwaratkun", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| }, |
| { |
| "first": "Kilian", |
| "middle": [], |
| "last": "Weinberger", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1606.01614" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2016. Adversarial deep av- eraging networks for cross-lingual sentiment classi- fication. arXiv preprint arXiv:1606.01614.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Word translation without parallel data", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc'aurelio", |
| "middle": [], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "Ludovic", |
| "middle": [], |
| "last": "Denoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Herv\u00e9", |
| "middle": [], |
| "last": "J\u00e9gou", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1710.04087" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Censoring representations with an adversary", |
| "authors": [ |
| { |
| "first": "Harrison", |
| "middle": [], |
| "last": "Edwards", |
| "suffix": "" |
| }, |
| { |
| "first": "Amos", |
| "middle": [], |
| "last": "Storkey", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1511.05897" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Harrison Edwards and Amos Storkey. 2015. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm", |
| "authors": [ |
| { |
| "first": "Bjarke", |
| "middle": [], |
| "last": "Felbo", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Mislove", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| }, |
| { |
| "first": "Iyad", |
| "middle": [], |
| "last": "Rahwan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sune", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain represen- tations for detecting sentiment, emotion and sar- casm. In Conference on Empirical Methods in Nat- ural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Learning anonymized representations with adversarial neural networks", |
| "authors": [ |
| { |
| "first": "Cl\u00e9ment", |
| "middle": [], |
| "last": "Feutry", |
| "suffix": "" |
| }, |
| { |
| "first": "Pablo", |
| "middle": [], |
| "last": "Piantanida", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Duhamel", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1802.09386" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cl\u00e9ment Feutry, Pablo Piantanida, Yoshua Bengio, and Pierre Duhamel. 2018. Learning anonymized repre- sentations with adversarial neural networks. arXiv preprint arXiv:1802.09386.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Unsupervised domain adaptation by backpropagation", |
| "authors": [ |
| { |
| "first": "Yaroslav", |
| "middle": [], |
| "last": "Ganin", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Lempitsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1180--1189", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaroslav Ganin and Victor Lempitsky. 2015. Unsuper- vised domain adaptation by backpropagation. In In- ternational Conference on Machine Learning, pages 1180-1189.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Improving credit scorecard modeling through applying text analysis", |
| "authors": [ |
| { |
| "first": "Omar", |
| "middle": [], |
| "last": "Ghailan", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "O" |
| ], |
| "last": "Hoda", |
| "suffix": "" |
| }, |
| { |
| "first": "Osman", |
| "middle": [], |
| "last": "Mokhtar", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hegazy", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "institutions", |
| "volume": "7", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omar Ghailan, Hoda MO Mokhtar, and Osman Hegazy. 2016. Improving credit scorecard modeling through applying text analysis. institutions, 7(4).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Generative adversarial nets", |
| "authors": [ |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Goodfellow", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Pouget-Abadie", |
| "suffix": "" |
| }, |
| { |
| "first": "Mehdi", |
| "middle": [], |
| "last": "Mirza", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Warde-Farley", |
| "suffix": "" |
| }, |
| { |
| "first": "Sherjil", |
| "middle": [], |
| "last": "Ozair", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "2672--2680", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672-2680.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Equality of opportunity in supervised learning", |
| "authors": [ |
| { |
| "first": "Moritz", |
| "middle": [], |
| "last": "Hardt", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Price", |
| "suffix": "" |
| }, |
| { |
| "first": "Nati", |
| "middle": [], |
| "last": "Srebro", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3315--3323", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moritz Hardt, Eric Price, Nati Srebro, et al. 2016. Equality of opportunity in supervised learning. In Advances in neural information processing systems, pages 3315-3323.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "User review sites as a resource for large-scale sociolinguistic studies", |
| "authors": [ |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Johannsen", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 24th International Conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "452--461", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dirk Hovy, Anders Johannsen, and Anders S\u00f8gaard. 2015. User review sites as a resource for large-scale sociolinguistic studies. In Proceedings of the 24th International Conference on World Wide Web, pages 452-461. International World Wide Web Conferences Steering Committee.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Skip-thought vectors", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "Yukun", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [ |
| "R" |
| ], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zemel", |
| "suffix": "" |
| }, |
| { |
| "first": "Raquel", |
| "middle": [], |
| "last": "Urtasun", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Torralba", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3294--3302", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294-3302.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Automatically categorizing written texts by author gender", |
| "authors": [ |
| { |
| "first": "Moshe", |
| "middle": [], |
| "last": "Koppel", |
| "suffix": "" |
| }, |
| { |
| "first": "Shlomo", |
| "middle": [], |
| "last": "Argamon", |
| "suffix": "" |
| }, |
| { |
| "first": "Anat Rachel", |
| "middle": [], |
| "last": "Shimoni", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Literary and linguistic computing", |
| "volume": "17", |
| "issue": "", |
| "pages": "401--412", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically categorizing written texts by author gender. Literary and linguistic computing, 17(4):401-412.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Towards robust and privacy-preserving text representations", |
| "authors": [ |
| { |
| "first": "Yitong", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Cohn", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "25--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 25-30. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The variational fair autoencoder", |
| "authors": [ |
| { |
| "first": "Christos", |
| "middle": [], |
| "last": "Louizos", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Swersky", |
| "suffix": "" |
| }, |
| { |
| "first": "Yujia", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [], |
| "last": "Welling", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zemel", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1511.00830" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. 2015. The variational fair autoencoder. arXiv preprint arXiv:1511.00830.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "On a test of whether one of two random variables is stochastically larger than the other", |
| "authors": [ |
| { |
| "first": "Henry", |
| "middle": [ |
| "B" |
| ], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "Donald", |
| "middle": [ |
| "R" |
| ], |
| "last": "Whitney", |
| "suffix": "" |
| } |
| ], |
| "year": 1947, |
| "venue": "The annals of mathematical statistics", |
| "volume": "", |
| "issue": "", |
| "pages": "50--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Henry B Mann and Donald R Whitney. 1947. On a test of whether one of two random variables is stochastically larger than the other. The annals of mathematical statistics, pages 50-60.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "context2vec: Learning generic context embedding with bidirectional LSTM", |
| "authors": [ |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Melamud", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Goldberger", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", |
| "volume": "2", |
| "issue": "", |
| "pages": "51--61", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51-61.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "\"How old do you think I am?\" A study of language and age in Twitter", |
| "authors": [ |
| { |
| "first": "Dong", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Rilana", |
| "middle": [], |
| "last": "Gravel", |
| "suffix": "" |
| }, |
| { |
| "first": "Dolf", |
| "middle": [], |
| "last": "Trieschnigg", |
| "suffix": "" |
| }, |
| { |
| "first": "Theo", |
| "middle": [], |
| "last": "Meder", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ICWSM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. \"How old do you think i am?\" a study of language and age in twitter. In ICWSM.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Discrimination-aware data mining", |
| "authors": [ |
| { |
| "first": "Dino", |
| "middle": [], |
| "last": "Pedreshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Salvatore", |
| "middle": [], |
| "last": "Ruggieri", |
| "suffix": "" |
| }, |
| { |
| "first": "Franco", |
| "middle": [], |
| "last": "Turini", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining", |
| "volume": "", |
| "issue": "", |
| "pages": "560--568", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. 2008. Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 560-568. ACM.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Deep contextualized word representations", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [ |
| "E" |
| ], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1802.05365" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Overview of the 4th author profiling task at pan 2016: cross-genre evaluations", |
| "authors": [ |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Rangel", |
| "suffix": "" |
| }, |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Rosso", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Verhoeven", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Potthast", |
| "suffix": "" |
| }, |
| { |
| "first": "Benno", |
| "middle": [], |
| "last": "Stein", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Working Notes Papers of the CLEF 2016 Evaluation Labs. CEUR Workshop Proceedings/Balog, Krisztian", |
| "volume": "", |
| "issue": "", |
| "pages": "750--784", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, and Benno Stein. 2016. Overview of the 4th author profiling task at pan 2016: cross-genre evaluations. In Working Notes Papers of the CLEF 2016 Evaluation Labs. CEUR Workshop Proceedings/Balog, Krisztian [edit.]; et al., pages 750-784.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Social bias in elicited natural language inferences", |
| "authors": [ |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Chandler", |
| "middle": [], |
| "last": "May", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "74--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74-79.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "CLiPS stylometry investigation (CSI) corpus: A Dutch corpus for the detection of age, gender, personality, sentiment and deception in text", |
| "authors": [ |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Verhoeven", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "LREC 2014-NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION", |
| "volume": "", |
| "issue": "", |
| "pages": "3081--3085", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ben Verhoeven and Walter Daelemans. 2014. Clips stylometry investigation (csi) corpus: A dutch corpus for the detection of age, gender, personality, sentiment and deception in text. In LREC 2014-NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, pages 3081-3085.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Twisty: A multilingual twitter stylometry corpus for gender and personality profiling", |
| "authors": [ |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Verhoeven", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Plank", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ben Verhoeven, Walter Daelemans, and Barbara Plank. 2016. Twisty: A multilingual twitter stylometry corpus for gender and personality profiling. In LREC.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Rtgender: A corpus for studying differential responses to gender", |
| "authors": [ |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Voigt", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Jurgens", |
| "suffix": "" |
| }, |
| { |
| "first": "Vinodkumar", |
| "middle": [], |
| "last": "Prabhakaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Tsvetkov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rob Voigt, David Jurgens, Vinodkumar Prabhakaran, Dan Jurafsky, and Yulia Tsvetkov. 2018. Rtgender: A corpus for studying differential responses to gender. In LREC.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Examining multiple features for author profiling", |
| "authors": [ |
| { |
| "first": "Edson", |
| "middle": [ |
| "R", |
| "D" |
| ], |
| "last": "Weren", |
| "suffix": "" |
| }, |
| { |
| "first": "Anderson", |
| "middle": [ |
| "U" |
| ], |
| "last": "Kauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucas", |
| "middle": [], |
| "last": "Mizusaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Viviane", |
| "middle": [ |
| "P" |
| ], |
| "last": "Moreira", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "Palazzo", |
| "M" |
| ], |
| "last": "de Oliveira", |
| "suffix": "" |
| }, |
| { |
| "first": "Leandro", |
| "middle": [ |
| "K" |
| ], |
| "last": "Wives", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of information and data management", |
| "volume": "5", |
| "issue": "3", |
| "pages": "266", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edson RD Weren, Anderson U Kauer, Lucas Mizusaki, Viviane P Moreira, J Palazzo M de Oliveira, and Leandro K Wives. 2014. Examining multiple features for author profiling. Journal of information and data management, 5(3):266.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Controllable invariance through adversarial feature learning", |
| "authors": [ |
| { |
| "first": "Qizhe", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Zihang", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Yulun", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "585--596", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In Advances in Neural Information Processing Systems, pages 585-596.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Learning fair representations", |
| "authors": [ |
| { |
| "first": "Rich", |
| "middle": [], |
| "last": "Zemel", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Swersky", |
| "suffix": "" |
| }, |
| { |
| "first": "Toni", |
| "middle": [], |
| "last": "Pitassi", |
| "suffix": "" |
| }, |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Dwork", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "325--333", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In International Conference on Machine Learning, pages 325-333.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Mitigating unwanted biases with adversarial learning", |
| "authors": [ |
| { |
| "first": "Brian", |
| "middle": [ |
| "Hu" |
| ], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Blake", |
| "middle": [], |
| "last": "Lemoine", |
| "suffix": "" |
| }, |
| { |
| "first": "Margaret", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1801.07593" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. arXiv preprint arXiv:1801.07593.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Aspect-augmented adversarial networks for domain adaptation", |
| "authors": [ |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "5", |
| "issue": "1", |
| "pages": "515--528", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2017. Aspect-augmented adversarial networks for domain adaptation. Transactions of the Association of Computational Linguistics, 5(1):515-528.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints", |
| "authors": [ |
| { |
| "first": "Jieyu", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Tianlu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Yatskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Vicente", |
| "middle": [], |
| "last": "Ordonez", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2979--2989", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979-2989.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Balanced (a) vs. Unbalanced (b) dataset.Red(M+)/Blue(M-): Main Task. Light(P+)/Dark(P-): Protected attribute. Each class is globally balanced, but in (b) the proportion of the protected attribute within each main task split is unbalanced." |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Main task and Adversary accuracy curves for Sentiment/Race." |
| }, |
| "TABREF2": { |
| "text": "Protected attribute leakage: balanced & unbalanced data splits.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF5": { |
| "text": "Results of different adversarial configurations. Sentiment/Mention: main-task accuracy. Race/Gender/Age: the attacker's protected-attribute recovery rate, reported as its difference from the 50% chance rate (values below 50% are as informative as those above it). \u2206: the difference between the attacker's score and the corresponding adversary's accuracy. Bold numbers mark the best oblivious classifier within each configuration.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Method</td><td colspan=\"3\">Param Sentiment Race</td></tr><tr><td>No Adversary Baseline</td><td>-</td><td colspan=\"2\">79.5 23.5</td></tr><tr><td>Standard Adversary</td><td>1.0</td><td colspan=\"2\">76.8 10.6</td></tr><tr><td>Adv-Capacity</td><td>500</td><td colspan=\"2\">74.8 13.8</td></tr><tr><td/><td>1000</td><td colspan=\"2\">70.5 18.4</td></tr><tr><td/><td>2000</td><td colspan=\"2\">73.9 18.5</td></tr><tr><td/><td>5000</td><td colspan=\"2\">71.5 19.4</td></tr><tr><td/><td>8000</td><td colspan=\"2\">73.6 18.7</td></tr><tr><td>Lambda</td><td>0.5</td><td colspan=\"2\">75.0 15.5</td></tr><tr><td/><td>1.5</td><td colspan=\"2\">71.2 18.2</td></tr><tr><td/><td>2.0</td><td colspan=\"2\">73.0 12.1</td></tr><tr><td/><td>3.0</td><td colspan=\"2\">71.5 12.0</td></tr><tr><td/><td>5.0</td><td>50.0</td><td>-</td></tr><tr><td>Ensemble</td><td>2</td><td colspan=\"2\">70.6 20.8</td></tr><tr><td/><td>3</td><td colspan=\"2\">73.6 17.9</td></tr><tr><td/><td>5</td><td>71.5</td><td>8.6</td></tr></table>", |
| "html": null |
| }, |
| "TABREF6": { |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF7": { |
| "text": "Examples of correct dialect/race predictions that were made consistently by at least 9 different attacker classifiers.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Data</td><td>Task</td><td>Protected Attribute</td><td>\u2206</td></tr><tr><td>DIAL</td><td colspan=\"2\">Sentiment Race</td><td>12.2</td></tr><tr><td/><td>Mention</td><td>Race</td><td>14.3</td></tr><tr><td colspan=\"2\">PAN16 Mention</td><td>Gender</td><td>8.1</td></tr><tr><td/><td>Mention</td><td>Age</td><td>9.7</td></tr></table>", |
| "html": null |
| } |
| } |
| } |
| } |