| { |
| "paper_id": "2022", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:12:01.382798Z" |
| }, |
| "title": "Mapping the Multilingual Margins: Intersectional Biases of Sentiment Analysis Systems in English, Spanish, and Arabic", |
| "authors": [ |
| { |
| "first": "Ant\u00f3nio", |
| "middle": [], |
| "last": "C\u00e2mara", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Nina", |
| "middle": [], |
| "last": "Taneja", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Tamjeed", |
| "middle": [], |
| "last": "Azad", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Allaway", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "eallaway@cs.columbia.edu" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zemel", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Columbia University", |
| "location": {} |
| }, |
| "email": "zemel@cs.columbia.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "As natural language processing systems become more widespread, it is necessary to address fairness issues in their implementation and deployment to ensure that their negative impacts on society are understood and minimized. However, there is limited work that studies fairness using a multilingual and intersectional framework or on downstream tasks. In this paper, we introduce four multilingual Equity Evaluation Corpora, supplementary test sets designed to measure social biases, and a novel statistical framework for studying unisectional and intersectional social biases in natural language processing. We use these tools to measure gender, racial, ethnic, and intersectional social biases across five models trained on emotion regression tasks in English, Spanish, and Arabic. We find that many systems demonstrate statistically significant unisectional and intersectional social biases. 1", |
| "pdf_parse": { |
| "paper_id": "2022", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "As natural language processing systems become more widespread, it is necessary to address fairness issues in their implementation and deployment to ensure that their negative impacts on society are understood and minimized. However, there is limited work that studies fairness using a multilingual and intersectional framework or on downstream tasks. In this paper, we introduce four multilingual Equity Evaluation Corpora, supplementary test sets designed to measure social biases, and a novel statistical framework for studying unisectional and intersectional social biases in natural language processing. We use these tools to measure gender, racial, ethnic, and intersectional social biases across five models trained on emotion regression tasks in English, Spanish, and Arabic. We find that many systems demonstrate statistically significant unisectional and intersectional social biases. 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Large-scale transformer-based language models, such as BERT (Devlin et al., 2018) , are now the state-of-the-art for a myriad of tasks in natural language processing. However, these models are well-documented to perpetuate harmful social biases, specifically by regurgitating the social biases present in their training data which are scraped from the Internet without careful consideration (Bender et al., 2021) . While steps have been taken to \"debias\", or remove, gender and other social biases from word embeddings (Bolukbasi et al., 2016; Manzini et al., 2019) , these methods have been demonstrated to be cosmetic (Gonen and Goldberg, 2019) . Furthermore, these studies neglect to recognize both the impact of social biases on downstream task results as well as the complex and interconnected nature of social biases. In this paper, we 1 We make our code and datasets available for download at https://github.com/ascamara/ ml-intersectionality. detect and discuss unisectional 2 and intersectional social biases in multilingual language models applied to downstream tasks using a novel statistical framework and novel multilingual datasets.", |
| "cite_spans": [ |
| { |
| "start": 60, |
| "end": 81, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 391, |
| "end": 412, |
| "text": "(Bender et al., 2021)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 519, |
| "end": 543, |
| "text": "(Bolukbasi et al., 2016;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 544, |
| "end": 565, |
| "text": "Manzini et al., 2019)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 620, |
| "end": 646, |
| "text": "(Gonen and Goldberg, 2019)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 842, |
| "end": 843, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Intersectionality is a framework introduced by Crenshaw (1990) to study how the composite identity of an individual across different social cleavages (e.g., race and gender) informs that individual's social advantages and disadvantages. For example, individuals who identify with multiple disadvantaged social cleavages (e.g., Black women) face a greater and altered risk for discrimination and oppression than individuals with a subset of those identities (e.g., white women). This framework for understanding overlapping systems of discrimination has been explored in some studies of fairness in machine learning, including by Buolamwini and Gebru (2018) who show that face detection systems perform markedly worse for female users of color, compared to female users or users of color.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 62, |
| "text": "Crenshaw (1990)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Although work has begun to study intersectional social biases in natural language processing, to the best of our knowledge no work has explored fairness in an intersectional framework on downstream tasks (e.g. sentiment analysis). Social biases in downstream tasks expose users with multiple disadvantaged sensitive attributes to unknown but potentially harmful outcomes, especially when models trained on downstream tasks are used in real-world decision making, such as for screening r\u00e9sumes or predicting recidivism in criminal proceedings (Bolukbasi et al., 2016; Angwin et al., 1999) . In this work, we choose emotion regression as a downstream task because social biases are often realized through emotion recognition (Elfenbein and Ambady, 2002) and machine learning models have been shown to reflect gender bias in emotion recognition tasks (Domnich and Anbarjafari, 2021) . For example, sentiment analysis and emotion regression may be used by companies to measure product engagement for different social groups.", |
| "cite_spans": [ |
| { |
| "start": 542, |
| "end": 566, |
| "text": "(Bolukbasi et al., 2016;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 567, |
| "end": 587, |
| "text": "Angwin et al., 1999)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 723, |
| "end": 751, |
| "text": "(Elfenbein and Ambady, 2002)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 848, |
| "end": 879, |
| "text": "(Domnich and Anbarjafari, 2021)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In addition, while some work has studied gender biases across different languages (Zhou et al., 2019; Zhao et al., 2020) , no work to our knowledge has studied racial, ethnic, and intersectional social biases across different languages. This lack of a multilingual analysis neglects non-English speaking users and their complex social environments.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 101, |
| "text": "(Zhou et al., 2019;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 102, |
| "end": 120, |
| "text": "Zhao et al., 2020)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we demonstrate the presence of gender, racial, ethnic, and intersectional social biases on five language models trained on an emotion regression task in English, Spanish, and Arabic. We do so by introducing novel supplementary test sets designed to measure social biases and a novel statistical framework for detecting the presence of unisectional and intersectional social biases in models trained on sentiment analysis tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our contributions are summarized as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Following , we introduce four supplementary test sets designed to detect social biases in language systems trained on sentiment analysis tasks in English, Spanish, and Arabic, which we make available for download. \u2022 We propose a novel statistical framework to detect unisectional and intersectional social biases in language models trained on sentiment analysis tasks. \u2022 We detect and analyze numerous gender, racial, ethnic, and intersectional social biases present in five language models trained on emotion regression tasks in English, Spanish, and Arabic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The presence and impact of harmful social biases in machine learning and natural language processing systems is pervasive and well-documented in popular word embedding methods (Caliskan et al., 2017; Garg et al., 2018; Bolukbasi et al., 2016; due to large amounts of humanproduced training data that includes historical social biases. Notably, Caliskan et al. (2017) demonstrate such biases by introducing the Word Embedding Association Test (WEAT) which measures how similar socially sensitive sets of words (e.g., racial or gendered names) are to attributive sets of words (e.g., pleasant or unpleasant words) in the semantic space encoded by word embeddings. While Bolukbasi et al. (2016) ; Manzini et al. (2019) introduce methods for \"debiasing\" word embeddings in order to create more equitable semantic representations for usage in downstream tasks, Gonen and Goldberg (2019) argue that such methods are merely cosmetic since social biases are still evident in the semantic space after the application of such methods. Moreover, these \"debiasing\" techniques focus on a particular social cleavage such as gender or race (i.e., unisectional cleavages). In contrast, our work considers both unisectional and intersectional social biases.", |
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 199, |
| "text": "(Caliskan et al., 2017;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 200, |
| "end": 218, |
| "text": "Garg et al., 2018;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 219, |
| "end": 242, |
| "text": "Bolukbasi et al., 2016;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 344, |
| "end": 366, |
| "text": "Caliskan et al. (2017)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 668, |
| "end": 691, |
| "text": "Bolukbasi et al. (2016)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 694, |
| "end": 715, |
| "text": "Manzini et al. (2019)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 856, |
| "end": 881, |
| "text": "Gonen and Goldberg (2019)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Recent studies have also begun to focus on social biases in transformer-based language models (Kurita et al., 2019; Bender et al., 2021) . In particular, Bender et al. (2021) discusses how increasingly large transformer-based language model in practice regurgitate their training data, resulting in such models perpetuating social biases and harming users. Therefore, in this work we consider both static word embedding techniques and transformerbased language models. Crenshaw (1990) introduces intersectionality as an analytical framework to study the complex character of the privilege and marginalization faced by an individual with a variety of identities across a set of social cleavages such as race and gender. A canonical usage of intersectionality is in service of studying the simultaneous racial and gender discrimination faced by Black women, which cannot be understood in its totality using racial or gendered frameworks independently; for one example, we point to the angry Black woman stereotype (Collins, 2004) . As such, we argue that existing studies in fairness are limited in their ability both to uncover bias in and to \"debias\" language models without engaging with the intersectionality framework.", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 115, |
| "text": "(Kurita et al., 2019;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 116, |
| "end": 136, |
| "text": "Bender et al., 2021)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 154, |
| "end": 174, |
| "text": "Bender et al. (2021)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 469, |
| "end": 484, |
| "text": "Crenshaw (1990)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1012, |
| "end": 1027, |
| "text": "(Collins, 2004)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Intersectional social biases have been documented in natural language processing models. Herbelot et al. (2012) first studied intersectional social bias by employing distributional semantics on a Wikipedia dataset while Tan and Celis (2019) studied intersectional social bias in contextualized word embeddings by using the WEAT on language referring to white men and Black women. Guo and Caliskan (2021) introduce tests that detect both known and emerging intersectional social biases in static word embeddings and extend the WEAT to contextualized word embeddings. Similarly, May et al. (2019) also extend the WEAT to a contextualized word embedding framework using sentence embeddings. However, these methods do not consider the effect of intersectional social biases on the results of downstream tasks, which is the focus of this work.", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 111, |
| "text": "Herbelot et al. (2012)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 380, |
| "end": 403, |
| "text": "Guo and Caliskan (2021)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 577, |
| "end": 594, |
| "text": "May et al. (2019)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Studies on non-English social biases in natural language processing are limited, with Zhou et al. (2019) extending the WEAT to study gender bias in Spanish and French and Zhao et al. (2020) examining gender bias in English, Spanish, German, and French on fastText embeddings (Bojanowski et al., 2017) . Notably, to the best of our knowledge there has been no work on studying intersectional social biases in languages other than English in natural language processing. While Herbelot et al. (2012) and Guo and Caliskan (2021) study the intersectional social biases faced by Asian and Mexican women respectively using natural language processing, both do so in English. In contrast, our work seeks to understand intersectional social biases in the languages that are used by the individuals and the communities that they help constitute.", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 104, |
| "text": "Zhou et al. (2019)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 171, |
| "end": 189, |
| "text": "Zhao et al. (2020)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 275, |
| "end": 300, |
| "text": "(Bojanowski et al., 2017)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 475, |
| "end": 497, |
| "text": "Herbelot et al. (2012)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 502, |
| "end": 525, |
| "text": "Guo and Caliskan (2021)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Most closely related to our work, evaluate racial and gender bias in 219 sentiment analysis systems trained on datasets from and submitted to SemEval-2018 Task 1: Affect in Tweets . Their work introduces the Equity Evaluation Corpus (EEC), a supplementary test set of 8,640 English sentences designed to extract gender and racial biases in sentiment analysis systems. Despite Spanish and Arabic data and submissions for the task, did not explore biases in either language. Moreover, this study focused on submissions to the competition. In contrast, our work focuses on large-scale transformer-based language models and explores both unisectional and intersectional social biases in multiple languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this section, we introduce our framework for detecting unisectional and intersectional social bias on results from downstream tasks. Given a model trained on emotion regression, we evaluate the model on a supplementary test set using our framework to measure social biases. First, we discuss our supplementary test sets composed of sentences corresponding to social cleavages (e.g., Black women, Black men, white women, and white men) ( \u00a73.1). We then use the results from each test set to run a Beta regression model (Ferrari and Cribari-Neto, 2004) where we fit coefficients for gender, racial, and intersectional social biases ( \u00a73.2). Finally, we test the coefficients for statistical significance to determine if a model, trained on a given emotion regression task in a given language, demonstrates gender, racial, or intersectional social bias ( \u00a73.3).", |
| "cite_spans": [ |
| { |
| "start": 521, |
| "end": 553, |
| "text": "(Ferrari and Cribari-Neto, 2004)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods: Framework for Evaluating Intersectionality", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We introduce four novel Equity Evaluation Corpora (EECs) following the work of . An EEC is a set of carefully crafted simple sentences that differ only in their reference to different social cleavages as seen in Table 1 . Therefore, differences in the predictions on a downstream task between sentences can be ascribed to language models learning those social biases. We use these corpora as supplementary test sets to measure unisectional and intersectional social biases of models trained on downstream tasks in English, Spanish, and Arabic.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 212, |
| "end": 219, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Equality Evaluation Corpora", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Following , each EEC consists of eleven template sentences as shown in Table 1 . Each template includes a [person] tag which is instantiated using both given names representing gender-racial/ethnic cleavages (e.g. given names common for Black women, Black men, white women, and white men in the original EEC) 3 and noun phrases representing gender cleavages (e.g. she/her, he/him, my mother, my brother). The first seven templates also include an emotion word, the first four of which are [emotion state word] tags, instantiated with words like angry and the last three are [emotion situation word] tags, instantiated with words like annoying.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 71, |
| "end": 78, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Equality Evaluation Corpora", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We contribute novel English, Spanish, and Arabic-language EECs that use the same sentence templates, noun phrases, and emotion words, but substitute Black and white names for Latino and Anglo names as well as Arab and Anglo names respectively. We introduce an English EEC and a Spanish EEC for Latino and Anglo names as well as an English EEC and an Arabic EEC for Arab and Anglo names, for a total of four novel EECs. The complete translated sentence templates, noun phrases, emotion words, and given names are available in the appendix and we make all four of our novel EECs available for download. The original EEC uses ten names for each gender-racial cleavage, selected from the list of names used in Caliskan et al. 2017, which in turn uses names from the first Implicit Association Test (IAT), a psychology study that measured implicit racial bias (Greenwald et al., 1998) . For example, given names include Ebony for Black women, Alonzo for Black men, Amanda for white women, and Adam for white men. The original EEC also uses five emotional state words and five emotional situation words sourced from Roget's Thesaurus for each of the emotions studied. For example, furious and irritating for Anger, ecstatic and amazing for Joy, anxious and horrible for Fear, and miserable and gloomy for Sadness. Each of the sentence templates was instantiated with chosen examples to generate 8640 sentences.", |
| "cite_spans": [ |
| { |
| "start": 855, |
| "end": 879, |
| "text": "(Greenwald et al., 1998)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Equality Evaluation Corpora", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For names representing Latino women, Latino men, Anglo women, and Anglo men in the English and Spanish-language EECs we used the ten most popular given names for babies born in the United States during the 1990s according to the Social Security Administration 4 . For the English and Arabic-language EECs, ten names are selected from Caliskan et al. (2017) for Anglo names of both genders. For male Arab names, ten names are selected from a study that employs the IAT to study attitudes towards Arab-Muslims (Park et al., 2007) . Since female Arab names were not available using this source, we use the top ten names for baby girls born in the Arab world according to the Arabic-language site BabyCenter 5 . All names are available in the appendix.", |
| "cite_spans": [ |
| { |
| "start": 334, |
| "end": 356, |
| "text": "Caliskan et al. (2017)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 508, |
| "end": 527, |
| "text": "(Park et al., 2007)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Equality Evaluation Corpora", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For the Spanish and Arabic EECs, fluent nativespeaker volunteers translated the original sentence templates, noun phrases, and emotion words. They then verified the generated sentences (i.e., using selected names and emotion words) for proper grammar and semantic meaning. Note that for the Arabic EEC, the authors transliterated names using English and Arabic Wikipedia pages of individuals with a given name. Due to fewer translated emotion words (e.g., two different English emotion words corresponded to the same word in the target language), each of the sentence templates were instantiated with chosen examples to generate 8640 sentences in English for both novel EECs, 8460 in Spanish, and 8040 in Arabic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Equality Evaluation Corpora", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We develop a novel framework for identifying statistically significant unisectional and intersectional social biases using Beta regressions for modeling proportions (Ferrari and Cribari-Neto, 2004) . In Beta regression, the response variable is modeled as a random variable from a Beta distribution (i.e., a family of distributions with support in (0, 1)). This is in contrast to linear regression which models response variables in R.", |
| "cite_spans": [ |
| { |
| "start": 165, |
| "end": 197, |
| "text": "(Ferrari and Cribari-Neto, 2004)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regression on Intersectional Variables", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Let Y i be the response variable. That is, Y i is the score predicted by a model trained for an emotion regression task on a given sentence i from an EEC. The labels for emotion regression restrict Y i \u2208 [0, 1], although 0 and 1 do not occur in practice, such that we may use Beta regression to measure biases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regression on Intersectional Variables", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The Beta regression (Eq. 1) measures the interaction between our response variable Y i and our independent variables X ji (i.e., the social cleavages j represented by sentence i from an EEC).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regression on Intersectional Variables", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Y i = \u03b2 0 + \u03b2 1 X 1i + \u03b2 2 X 2i + \u03b2 3 X 1i X 2i (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regression on Intersectional Variables", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In our model, we define X 1 to be an indicator function over sentences representing a minority group (e.g., Black people, women). For example, X 1i = 1 for any sentence i that refers to a Black person. As such, the corresponding coefficient \u03b2 1 describes the change in model prediction for sentences referring to an individual who identifies with that minority group, all else equal. For example, \u03b2 1 provides a measure of racial bias in the model. We define X 2 analogously for a second minority group. Therefore, the variable X 1 X 2 = 1 if and only if a sentence refers to the intersectional identity (e.g., Black women) and thus \u03b2 3 is a measure of intersectional social bias.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Regression on Intersectional Variables", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "After fitting the regression model, we test each regression coefficient for statistical significance. That is, we divide the coefficient by the standard error and then calculate the p-value for a two-sided ttest. If the coefficient for an independent variable (e.g., X 1 ) is statistically significant, we say that the model shows statistically significant social bias against the race and ethnicity, gender, or intersectionality identity corresponding to that variable. A positive coefficient for a variable implies that the emotion is exhibited more strongly by sentences representing the minority group that is coded by that variable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Testing", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We experiment with five methods in this work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Our first three methods use pre-trained language models from Huggingface (Wolf et al., 2019) : BERT+ -for English we use BERT-base (Devlin et al., 2018) , for Spanish BETO (Ca\u00f1ete et al., 2020) , and for Arabic ArabicBERT (Safaya et al., 2020) , mBERT -multilingual BERT-base (Devlin et al., 2018) , XLM-RoBERTa -XLM-RoBERTabase (Conneau et al., 2019) . For each language model, we fit a two-layer feed-forward neural network on the [CLS] (or equivalent) token embedding from the last layer of the model implemented in PyTorch (Paszke et al., 2019) , We do not fine-tune these models because we are interested in measuring the bias specifically encoded in the pre-trained publicly available model. Moreover, since the training datasets we use are small, fine-tuning has a high risk of causing overfitting.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 92, |
| "text": "(Wolf et al., 2019)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 131, |
| "end": 152, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 172, |
| "end": 193, |
| "text": "(Ca\u00f1ete et al., 2020)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 222, |
| "end": 243, |
| "text": "(Safaya et al., 2020)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 276, |
| "end": 297, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 329, |
| "end": 351, |
| "text": "(Conneau et al., 2019)", |
| "ref_id": null |
| }, |
| { |
| "start": 527, |
| "end": 548, |
| "text": "(Paszke et al., 2019)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In addition, we also experiment with two methods using Scikit-learn (Pedregosa et al., 2011) : SVM-tfidf -an SVM trained on Tf-idf sentence representations, and fastText -fastText pre-trained multilingual word embeddings (Bojanowski et al., 2017) average-pooled over the sentence and then passed to an MLP regressor.", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 92, |
| "text": "(Pedregosa et al., 2011)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 221, |
| "end": 246, |
| "text": "(Bojanowski et al., 2017)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We first train models on the emotion intensity regression tasks in English, Spanish, and Arabic from SemEval-2018 Task 1: Affect in Tweets (Sem2018-T1) . Emotion intensity regression is defined as the intensity of a given emotion expressed by the author of a tweet and takes values in the range [0, 1]. We consider the following set of emotions: anger, fear, joy, and sadness. For each model and language combination, we report the performance using the official competition metric, Pearson Correlation Coefficient (\u03c1) as defined in (Benesty et al., 2009) , for each emotion in the emotion regression task.", |
| "cite_spans": [ |
| { |
| "start": 533, |
| "end": 555, |
| "text": "(Benesty et al., 2009)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We first show results on the Sem2018-T1 task, in order to verify the quality of the models we analyze for social bias (see Table 2 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 123, |
| "end": 130, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Emotion Intensity Regression", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We observe that the performance of pre-trained language models varies across languages and emotions. BERT+, mBERT, and RoBERTa performed best on the English tasks, compared to Spanish and Arabic. Additionally, BERT+ had better perfor- mance than the multilingual models (e.g. mBERT and XLM-RoBERTa) across all languages and tasks, showing that language-specific models (e.g., BETO) can be superior to multilingual models. SVM-tfidf and fastText typically outperformed the multilingual models but were at-par or only slightly better than the language-specific models. This difference is likely due to the lack of fine-tuning performed on the transformer-based models. Our decision to not fine-tune does decrease performance on downstream tasks but is prudent given the risk of overfitting on a small training set and our interest in studying the social biases encoded in off-the-shelf pre-trained language models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emotion Intensity Regression", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "After training a model for a given emotion regression task in a language, we utilize the five EECs as supplementary test sets. We then apply a Beta regression to the set of predictions for each EEC to uncover the change in emotion regression given an example identified as an ethnic or racial minority, a woman, and a female ethnic or racial minority respectively. We showcase the beta coefficients and their level of statistical significance for each variable in the regression in Tables 3, 4, and 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation using EECs", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In this section, we discuss the unisectional and intersectional social biases that we do and do not detect, across our five models that we trained on emotion regression tasks and evaluated using the EECs and novel statistical framework. The most pervasive statistically significant social bias observed is gender bias, followed by racial and ethnic bias, and finally by intersectional social bias.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Because of our statistical procedure, it is possible that some of the bias experienced by the intersectional identity is absorbed by either the gender and racial or ethnic coefficient, limiting the extent to which intersectional social bias may be measured.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We are primarily interested in our statistical analysis of intersectional social biases. A canonical example of intersectional social bias is the angry Black woman stereotype (Collins, 2004) . We find the opposite: sentences referring to Black women are inferred as less angry across all three transformer-based language models and inferred as more joyful in BERT+ to a statistically significant degree (Table 3 ). It is possible that this bias is captured by other coefficients. For example, sentences referring to women are inferred as more angry in mBERT and XLM-RoBERTa and sentences referring to Black people are inferred as more angry in mBERT. It also is possible that the language models do not exhibit this stereotype, which supports experimental results in psychology (Walley-Jean, 2009) despite being well-established in the critical theory literature (Collins, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 190, |
| "text": "(Collins, 2004)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 863, |
| "end": 878, |
| "text": "(Collins, 2004)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 403, |
| "end": 411, |
| "text": "(Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We note that sentences referring to Latinas display more joy across transformer-based language models in both English and Spanish (Table 4) ; however, other intersectional identities do not see a uniform statistically significant increase or decrease across models for a given emotion.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 130, |
| "end": 139, |
| "text": "(Table 4)", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We find evidence of racial biases in our experiments. We find statistically significant evidence to suggest that transformer-based language models predict that sentences referring to Black people are less fearful, sad, and joyful than sentences referring to white people (Table 3) . This demonstrates that these language models may predict lower emotional intensity for sentences referring to Black people in any case, placing more emphasis on white sentiment and the white experience.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 271, |
| "end": 280, |
| "text": "(Table 3)", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We observe that ethnic biases are sometimes split by language. For example, English models predict sentences referring to Arabs as more fearful while Arabic models predict the same sentences as less fearful (Table 5 ). However, both languages predict those sentences as more sad. Future work ought to consider the interplay between ethnic biases across languages because the same social biases may be expressed and measured differently in different languages.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 207, |
| "end": 215, |
| "text": "(Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We observe multiple gender biases across emotions and languages. In all Arabic models, sen- tences referring to women are predicted to be less angry than sentences referring to men (Table 5) . Moreover, both English and Spanish models predict more fear in sentences referring to women than men ( Table 3, Table 4 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 181, |
| "end": 190, |
| "text": "(Table 5)", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 296, |
| "end": 312, |
| "text": "Table 3, Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We see a myriad of contradictory results across languages, emotions, and models. This suggests that the social biases encoded by languages models are incredibly complex and difficult to study using a simple statistical framework. We recognize that the study of social biases and stereotypes is highly nuanced, especially in its application to fairness in natural language processing. Future analysis of these language models, their training data, and any downstream task data is necessary for the detection and comprehension of the impact of social biases in natural language processing. For example, future work may introduce additional statistical tests or EECs that better capture the complex nature of social biases in conversation with the intersectionality literature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Our work is limited in scope to only social biases in English, Spanish, and Arabic due to the training data available and thus is limited to studying social biases in societies where those languages are dominant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ethical Considerations and Limitations", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In addition, our statistical framework formalizes intersectional social bias across strictly defined gender-racial cleavages. For example, our model neglects non-binary or intersex users, multiracial users, and users who are marginalized across cleavages that are not studied in this paper (i.e. users with disabilities). Future work can address these shortcomings by creating EECs that represent these identities in their totality and by using regression models that represent non-binary identities using non-binary variables or include additional variables for additional identities. Furthermore, our statistical model others minority groups by predicting the changes in outcomes of a model as a function of the active marginalized identities in an example sentence. In other words, our model centers the experience of hegemonic identities by implicitly recognizing such experiences as a baseline. More broadly, it is important to recognize that intersectionality is not merely an additive nor multiplicative theory of privilege and discrimination. Rather, there is an complex interdependence between an individual's various identities and the oppression they face (Bowleg, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 1167, |
| "end": 1181, |
| "text": "(Bowleg, 2008)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ethical Considerations and Limitations", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Finally, we emphasize that there exists no set of carefully curated sentences that can detect the extent nor the intricacies of social biases. We therefore caution that no work, especially automated work, is sufficient in understanding or mitigating the full scope of social biases in machine learning and natural language processing models. This is especially true for intersectional social biases, where marginalization and discrimination takes places within and across gender, sexual, racial, ethnic, religious, and other cleavages in concert.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ethical Considerations and Limitations", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this paper, we introduce four Equity Evaluation Corpora to measure racial, ethnic, and gender biases in English, Spanish, and Arabic. We also contribute a novel statistical framework for studying unisectional and intersectional social biases in sentiment analysis systems. We apply our method to five models trained on emotion regression tasks in English, Spanish, and Arabic, uncovering statistically significant unisectional and intersectional social biases. Despite our findings, we are constrained in our ability to analyze our results with the sociopolitical and historical context necessary to understand their true causes and implications. In future work, we are interested in working with community members and scholars from the groups we study to better interpret the causes and implications of these social biases so that the natural language processing community can create more equitable systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Dilon\u00e9, Peter Gado, Astrid Liden, Bettina Oberto, Hasanian Rahi, Russel Rahi, Raya Tarawneh, and two anonymous volunteers provided outstanding translation work. This work is supported in part by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1644869. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Translators were recruited at universities and are all university students. All translators are at least 18 and are fluent native speakers of the languages for which they translated. Each translator received an ID number to anonymize their work. Dear translator, Thank you for your help with our project. Your contribution is helping us conduct one of the first multilingual and intersectional bias analysis studies for natural language processing, a subset of artificial intelligence and linguistics. Natural language processing is responsible for tasks such as auto-completion, spell-check, spam detection, and searches on sites like Google. You and your work will be acknowledged in our final report.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.2 Instructions to Original Translators", |
| "sec_num": null |
| }, |
| { |
| "text": "In the following document are the instructions for translations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.2 Instructions to Original Translators", |
| "sec_num": null |
| }, |
| { |
| "text": "First, answer the survey questions. For each sentence, translate the template or individual word. We provide space for the female singular, female plural, male singular and female plural. If your language does not have separate masculine and feminine forms for any of the sentences, please include the singular and plural version in the first two boxes and if your does not have separate singular and plural forms, please include the singular versions for each gendered form as appropriate. If your language has additional cases, such as neutral, please make another column and note it for us (e.g. neuter in German). For the last ten, only give translations for the sentences as they are written. tag denotes emotional event words, e.g. annoying, funny. For the emotion vocabulary, there are four categories: anger (red), fear (green), joy (yellow) and sadness (blue). If the English words do not correspond well, feel free to write the most approximate set of words for your language in any order. Let us know if there are intricacies in spelling due to, for example, consonants and vowels (e.g. a/an in English or le l' in French).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.2 Instructions to Original Translators", |
| "sec_num": null |
| }, |
| { |
| "text": "OPTIONAL: We are also looking for popular names of large socially cleaved groups in countries where your language is spoken. For example, in English, this includes male, female, Black and white names (5 for each combination of race and gender). If you are familiar with social cleavages or popular names in those cleavages in countries where your language is spoken, please note it. : angry, annoyed, enraged, furious, irritated, annoying, displeasing, irritating, outrageous, vexing, anxious, discouraged,fearful, scared, terrified, dreadful, horrible, shocking, terrifying, threatening, ecstatic, excited, glad, happy, relieved, amazing, funny, great, hilarious, wonderful, depressed, devastated, disappointed, miserable, sad, depressing, gloomy, grim, heartbreaking, serious, Dear translator, Thank you for your help with our project. Your contribution is helping us conduct one of the first multilingual and intersectional bias analysis studies for natural language processing, a subset of artificial intelligence and linguistics. Natural language processing is responsible for tasks such as auto-completion, spell-check, spam detection, and searches on sites like Google. You and your work will be acknowledged in our final report. In the following document are the instructions for translations. First, answer the survey questions. Second, go through the sentences provided. For each sentence, indicate if the sentence is grammatically and semantically incorrect in the D column. You do not need to mark the cell if the sentence is correct. If it is incorrect, write the correct translation. If multiple consecutive sentences are incorrect in the same fashion: indicate the correct translation for the first sentence, note the error, and note the ID numbers for the sentences that are incorrect in that fashion. Ignore the lines that are blacked out.", |
| "cite_spans": [ |
| { |
| "start": 383, |
| "end": 778, |
| "text": ": angry, annoyed, enraged, furious, irritated, annoying, displeasing, irritating, outrageous, vexing, anxious, discouraged,fearful, scared, terrified, dreadful, horrible, shocking, terrifying, threatening, ecstatic, excited, glad, happy, relieved, amazing, funny, great, hilarious, wonderful, depressed, devastated, disappointed, miserable, sad, depressing, gloomy, grim, heartbreaking, serious,", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.2 Instructions to Original Translators", |
| "sec_num": null |
| }, |
| { |
| "text": "Here are some points to keep in mind: 1. Is the sentence grammatically correct? For example: does the sentence use the correct gendered language? Is the tense correct? 2. Is the meaning of the sentence the same as the English sentence listed next to it? It is okay if it is not the exact same as how you would translate it as long as the emotional word is similar. Informed Consent Form Benefits: Although it may not directly benefit you, this study may benefit society by improving our understanding of intersectional biases in natural language processing models across different languages. Risks: There are no known risks from participation. The broader work deals with sensitive topics in race and gender studies. Voluntary participation: You may stop participating at any time without penalty by not submitting the translations. We may end your participation or not use your work if you do not have adequate knowledge of the language. Confidentiality: No identifying information will be kept about you except for the translations you submit to us. No information will be shared about your work except an acknowledgement in the paper. Questions/concerns: You may e-mail questions to ac4443@columbia.edu. Submitting translations to Ant\u00f3nio C\u00e2mara at ac4443@columbia.edu indicates that you understand the information in this consent form. You have not waived any legal rights you otherwise would have as a participant in a research study. I have read the above purpose of the study, and understand my role in participating in the research. I volunteer to take part in this research. I have had a chance to ask questions. If I have questions later, about the research, I can ask the investigator listed above. I understand that I may refuse to participate or withdraw from participation at any time. The investigator may withdraw me at his/her professional discretion. I certify that I am 18 years of age or older and freely give my consent to participate in this study.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A.2 Instructions to Original Translators", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we refer to biases against a single social cleavage, such as racial bias or gender bias, as unisectional.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Caliskan et al. (2017); refer to the racial groups as African-American and European-American. For consistency and in accordance with style guides for the Associated Press and the New York Times, we refer to the groups as Black and white with intentional casing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://www.ssa.gov/oact/babynames/ decades/names1990s.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://arabia.babycenter.com/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We are grateful to Max Helman for his helpful comments and conversations. Alejandra Quintana Arocho, Catherine Rose Chrin, Maria Chrin, Rafael ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| }, |
| { |
| "text": "The names used in the original English EEC can be found in Table 6 . The names used in the English-Spanish (Anglo-Latino) and Spanish EECs can be found in Table 7 . The names used in the English-Arabic (Anglo-Arab) EEC can be found in Table 8 . The names in the Arabic EEC (in Arabic text) can be found in Table 9 .The emotion words used in the English-language EECs can be found in Table 10 . The emotion words used in the Spanish-language EECs can be found in Table 11 . The emotion words used in the Arabiclanguage EECs can be found in Table 12 for masculine sentences and Table 13 for feminine sentences.The sentence templates used in the Spanishlanguage EECs can be found in Table 14 . The sentence templates used in the Arabic-language EECs can be found in Table 15 for masculine sentences and Table 16 ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 59, |
| "end": 66, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 155, |
| "end": 162, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 235, |
| "end": 242, |
| "text": "Table 8", |
| "ref_id": null |
| }, |
| { |
| "start": 306, |
| "end": 313, |
| "text": "Table 9", |
| "ref_id": null |
| }, |
| { |
| "start": 383, |
| "end": 391, |
| "text": "Table 10", |
| "ref_id": null |
| }, |
| { |
| "start": 462, |
| "end": 470, |
| "text": "Table 11", |
| "ref_id": null |
| }, |
| { |
| "start": 539, |
| "end": 547, |
| "text": "Table 12", |
| "ref_id": null |
| }, |
| { |
| "start": 576, |
| "end": 584, |
| "text": "Table 13", |
| "ref_id": null |
| }, |
| { |
| "start": 680, |
| "end": 688, |
| "text": "Table 14", |
| "ref_id": null |
| }, |
| { |
| "start": 763, |
| "end": 809, |
| "text": "Table 15 for masculine sentences and Table 16", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A.1 Equity Evaluation Corpora", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Machine bias: There's software used across the country to predict future criminals. and it's biased against blacks", |
| "authors": [ |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Angwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Larson", |
| "suffix": "" |
| }, |
| { |
| "first": "Surya", |
| "middle": [], |
| "last": "Mattu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lauren", |
| "middle": [], |
| "last": "Kirchner", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 1999. Machine bias: There's software used across the country to predict future criminals. and it's biased against blacks.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "On the dangers of stochastic parrots: Can language models be too big?", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [ |
| "M" |
| ], |
| "last": "Bender", |
| "suffix": "" |
| }, |
| { |
| "first": "Timnit", |
| "middle": [], |
| "last": "Gebru", |
| "suffix": "" |
| }, |
| { |
| "first": "Angelina", |
| "middle": [], |
| "last": "Mcmillan-Major", |
| "suffix": "" |
| }, |
| { |
| "first": "Shmargaret", |
| "middle": [], |
| "last": "Shmitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21", |
| "volume": "", |
| "issue": "", |
| "pages": "610--623", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/3442188.3445922" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily M. Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language mod- els be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Trans- parency, FAccT '21, page 610-623, New York, NY, USA. Association for Computing Machinery.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Pearson correlation coefficient", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Benesty", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingdong", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Noise reduction in speech processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1--4", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Benesty, Jingdong Chen, Yiteng Huang, and Is- rael Cohen. 2009. Pearson correlation coefficient. In Noise reduction in speech processing, pages 1-4. Springer.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Enriching word vectors with subword information", |
| "authors": [ |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the association for computational linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "135--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the associa- tion for computational linguistics, 5:135-146.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", |
| "authors": [ |
| { |
| "first": "Tolga", |
| "middle": [], |
| "last": "Bolukbasi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "James", |
| "suffix": "" |
| }, |
| { |
| "first": "Venkatesh", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [ |
| "T" |
| ], |
| "last": "Saligrama", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kalai", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to home- maker? debiasing word embeddings. Advances in neural information processing systems, 29.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "When black+ lesbian+ woman\u0338 = black lesbian woman: The methodological challenges of qualitative and quantitative intersectionality research", |
| "authors": [ |
| { |
| "first": "Lisa", |
| "middle": [], |
| "last": "Bowleg", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Sex roles", |
| "volume": "59", |
| "issue": "5", |
| "pages": "312--325", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lisa Bowleg. 2008. When black+ lesbian+ woman\u0338 = black lesbian woman: The methodological chal- lenges of qualitative and quantitative intersectionality research. Sex roles, 59(5):312-325.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", |
| "authors": [ |
| { |
| "first": "Joy", |
| "middle": [], |
| "last": "Buolamwini", |
| "suffix": "" |
| }, |
| { |
| "first": "Timnit", |
| "middle": [], |
| "last": "Gebru", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Conference on fairness, accountability and transparency", |
| "volume": "", |
| "issue": "", |
| "pages": "77--91", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in com- mercial gender classification. In Conference on fair- ness, accountability and transparency, pages 77-91. PMLR.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Semantics derived automatically from language corpora contain human-like biases", |
| "authors": [ |
| { |
| "first": "Aylin", |
| "middle": [], |
| "last": "Caliskan", |
| "suffix": "" |
| }, |
| { |
| "first": "Joanna", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bryson", |
| "suffix": "" |
| }, |
| { |
| "first": "Arvind", |
| "middle": [], |
| "last": "Narayanan", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Science", |
| "volume": "356", |
| "issue": "6334", |
| "pages": "183--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from lan- guage corpora contain human-like biases. Science, 356(6334):183-186.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Spanish pre-trained bert model and evaluation data", |
| "authors": [ |
| { |
| "first": "Jos\u00e9", |
| "middle": [], |
| "last": "Ca\u00f1ete", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Chaperon", |
| "suffix": "" |
| }, |
| { |
| "first": "Rodrigo", |
| "middle": [], |
| "last": "Fuentes", |
| "suffix": "" |
| }, |
| { |
| "first": "Jou-Hui", |
| "middle": [], |
| "last": "Ho", |
| "suffix": "" |
| }, |
| { |
| "first": "Hojin", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jorge", |
| "middle": [], |
| "last": "P\u00e9rez", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jos\u00e9 Ca\u00f1ete, Gabriel Chaperon, Rodrigo Fuentes, Jou- Hui Ho, Hojin Kang, and Jorge P\u00e9rez. 2020. Span- ish pre-trained bert model and evaluation data. In PML4DC at ICLR 2020.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Black sexual politics: African Americans, gender, and the new racism", |
| "authors": [ |
| { |
| "first": "Patricia", |
| "middle": [ |
| "Hill" |
| ], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patricia Hill Collins. 2004. Black sexual politics: African Americans, gender, and the new racism. Routledge.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Mapping the margins: Intersectionality, identity politics, and violence against women of color", |
| "authors": [ |
| { |
| "first": "Kimberle", |
| "middle": [], |
| "last": "Crenshaw", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Stan. L. Rev", |
| "volume": "43", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kimberle Crenshaw. 1990. Mapping the margins: In- tersectionality, identity politics, and violence against women of color. Stan. L. Rev., 43:1241.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Responsible ai: Gender bias assessment in emotion recognition", |
| "authors": [ |
| { |
| "first": "Artem", |
| "middle": [], |
| "last": "Domnich", |
| "suffix": "" |
| }, |
| { |
| "first": "Gholamreza", |
| "middle": [], |
| "last": "Anbarjafari", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2103.11436" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Artem Domnich and Gholamreza Anbarjafari. 2021. Responsible ai: Gender bias assessment in emotion recognition. arXiv preprint arXiv:2103.11436.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "On the universality and cultural specificity of emotion recognition: a meta-analysis", |
| "authors": [ |
| { |
| "first": "Hillary", |
| "middle": [], |
| "last": "Anger Elfenbein", |
| "suffix": "" |
| }, |
| { |
| "first": "Nalini", |
| "middle": [], |
| "last": "Ambady", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Psychological bulletin", |
| "volume": "128", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hillary Anger Elfenbein and Nalini Ambady. 2002. On the universality and cultural specificity of emotion recognition: a meta-analysis. Psychological bulletin, 128(2):203.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Beta regression for modelling rates and proportions", |
| "authors": [ |
| { |
| "first": "Silvia", |
| "middle": [], |
| "last": "Ferrari", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Cribari-Neto", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Journal of applied statistics", |
| "volume": "31", |
| "issue": "7", |
| "pages": "799--815", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Silvia Ferrari and Francisco Cribari-Neto. 2004. Beta regression for modelling rates and proportions. Jour- nal of applied statistics, 31(7):799-815.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Word embeddings quantify 100 years of gender and ethnic stereotypes", |
| "authors": [ |
| { |
| "first": "Nikhil", |
| "middle": [], |
| "last": "Garg", |
| "suffix": "" |
| }, |
| { |
| "first": "Londa", |
| "middle": [], |
| "last": "Schiebinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "115", |
| "issue": "16", |
| "pages": "3635--3644", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635- E3644.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them", |
| "authors": [ |
| { |
| "first": "Hila", |
| "middle": [], |
| "last": "Gonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1903.03862" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Measuring individual differences in implicit cognition: the implicit association test", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [ |
| "G" |
| ], |
| "last": "Greenwald", |
| "suffix": "" |
| }, |
| { |
| "first": "Debbie", |
| "middle": [ |
| "E" |
| ], |
| "last": "McGhee", |
| "suffix": "" |
| }, |
| { |
| "first": "Jordan", |
| "middle": [ |
| "LK" |
| ], |
| "last": "Schwartz", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Journal of personality and social psychology", |
| "volume": "74", |
| "issue": "6", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anthony G Greenwald, Debbie E McGhee, and Jor- dan LK Schwartz. 1998. Measuring individual differ- ences in implicit cognition: the implicit association test. Journal of personality and social psychology, 74(6):1464.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Detecting emergent intersectional biases: Contextualized word embeddings contain a distribution of human-like biases", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Aylin", |
| "middle": [], |
| "last": "Caliskan", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society", |
| "volume": "", |
| "issue": "", |
| "pages": "122--133", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Guo and Aylin Caliskan. 2021. Detecting emergent intersectional biases: Contextualized word embed- dings contain a distribution of human-like biases. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 122-133.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Distributional techniques for philosophical enquiry", |
| "authors": [ |
| { |
| "first": "Aur\u00e9lie", |
| "middle": [], |
| "last": "Herbelot", |
| "suffix": "" |
| }, |
| { |
| "first": "Eva", |
| "middle": [ |
| "Von" |
| ], |
| "last": "Redecker", |
| "suffix": "" |
| }, |
| { |
| "first": "Johanna", |
| "middle": [], |
| "last": "M\u00fcller", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 6th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities", |
| "volume": "", |
| "issue": "", |
| "pages": "45--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aur\u00e9lie Herbelot, Eva Von Redecker, and Johanna M\u00fcller. 2012. Distributional techniques for philo- sophical enquiry. In Proceedings of the 6th Work- shop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 45-54.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Examining gender and race bias in two hundred sentiment analysis systems", |
| "authors": [ |
| { |
| "first": "Svetlana", |
| "middle": [], |
| "last": "Kiritchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Saif", |
| "middle": [ |
| "M" |
| ], |
| "last": "Mohammad", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "CoRR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Svetlana Kiritchenko and Saif M. Mohammad. 2018. Examining gender and race bias in two hundred sen- timent analysis systems. CoRR, abs/1805.04508.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Measuring bias in contextualized word representations", |
| "authors": [ |
| { |
| "first": "Keita", |
| "middle": [], |
| "last": "Kurita", |
| "suffix": "" |
| }, |
| { |
| "first": "Nidhi", |
| "middle": [], |
| "last": "Vyas", |
| "suffix": "" |
| }, |
| { |
| "first": "Ayush", |
| "middle": [], |
| "last": "Pareek", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [ |
| "W" |
| ], |
| "last": "Black", |
| "suffix": "" |
| }, |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Tsvetkov", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the First Workshop on Gender Bias in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "166--172", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-3823" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in con- textualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Lan- guage Processing, pages 166-172, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Manzini", |
| "suffix": "" |
| }, |
| { |
| "first": "Yao", |
| "middle": [ |
| "Chong" |
| ], |
| "last": "Lim", |
| "suffix": "" |
| }, |
| { |
| "first": "Yulia", |
| "middle": [], |
| "last": "Tsvetkov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [ |
| "W" |
| ], |
| "last": "Black", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1904.04047" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019. Black is to criminal as cau- casian is to police: Detecting and removing mul- ticlass bias in word embeddings. arXiv preprint arXiv:1904.04047.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "On measuring social biases in sentence encoders", |
| "authors": [ |
| { |
| "first": "Chandler", |
| "middle": [], |
| "last": "May", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shikha", |
| "middle": [], |
| "last": "Bordia", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [ |
| "R" |
| ], |
| "last": "Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1903.10561" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. arXiv preprint arXiv:1903.10561.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "SemEval-2018 task 1: Affect in tweets", |
| "authors": [ |
| { |
| "first": "Saif", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "Felipe", |
| "middle": [], |
| "last": "Bravo-Marquez", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Salameh", |
| "suffix": "" |
| }, |
| { |
| "first": "Svetlana", |
| "middle": [], |
| "last": "Kiritchenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "1--17", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/S18-1001" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval- 2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1-17, New Orleans, Louisiana. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Implicit attitudes toward arab-muslims and the moderating effects of social information", |
| "authors": [ |
| { |
| "first": "Jaihyun", |
| "middle": [], |
| "last": "Park", |
| "suffix": "" |
| }, |
| { |
| "first": "Karla", |
| "middle": [], |
| "last": "Felix", |
| "suffix": "" |
| }, |
| { |
| "first": "Grace", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Basic and Applied Social Psychology", |
| "volume": "29", |
| "issue": "1", |
| "pages": "35--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jaihyun Park, Karla Felix, and Grace Lee. 2007. Im- plicit attitudes toward arab-muslims and the moderat- ing effects of social information. Basic and Applied Social Psychology, 29(1):35-45.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Pytorch: An imperative style, high-performance deep learning library", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Paszke", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Gross", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Massa", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lerer", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Bradbury", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Chanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Killeen", |
| "suffix": "" |
| }, |
| { |
| "first": "Zeming", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Natalia", |
| "middle": [], |
| "last": "Gimelshein", |
| "suffix": "" |
| }, |
| { |
| "first": "Luca", |
| "middle": [], |
| "last": "Antiga", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Scikit-learn: Machine learning in Python", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Dubourg", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Vanderplas", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Passos", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cournapeau", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Brucher", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Perrot", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Duchesnay", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "KUISAIL at SemEval-2020 task 12: BERT-CNN for offensive speech identification in social media", |
| "authors": [ |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "Safaya", |
| "suffix": "" |
| }, |
| { |
| "first": "Moutasem", |
| "middle": [], |
| "last": "Abdullatif", |
| "suffix": "" |
| }, |
| { |
| "first": "Deniz", |
| "middle": [], |
| "last": "Yuret", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "2054--2059", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ali Safaya, Moutasem Abdullatif, and Deniz Yuret. 2020. KUISAIL at SemEval-2020 task 12: BERT- CNN for offensive speech identification in social me- dia. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2054-2059, Barcelona (online). International Committee for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Assessing social and intersectional biases in contextualized word representations", |
| "authors": [ |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Chern Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Elisa", |
| "middle": [], |
| "last": "Celis", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yi Chern Tan and L Elisa Celis. 2019. Assessing so- cial and intersectional biases in contextualized word representations. Advances in Neural Information Processing Systems, 32.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Debunking the myth of the \"angry black woman\": An exploration of anger in young african american women", |
| "authors": [ |
| { |
| "first": "Celeste", |
| "middle": [], |
| "last": "Walley-Jean", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Black Women, Gender & Families", |
| "volume": "3", |
| "issue": "2", |
| "pages": "68--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J Celeste Walley-Jean. 2009. Debunking the myth of the \"angry black woman\": An exploration of anger in young african american women. Black Women, Gender & Families, 3(2):68-86.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Huggingface's transformers: State-ofthe-art natural language processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1910.03771" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of- the-art natural language processing. arXiv preprint arXiv:1910.03771.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Gender bias in multilingual embeddings and crosslingual transfer", |
| "authors": [ |
| { |
| "first": "Jieyu", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Subhabrata", |
| "middle": [], |
| "last": "Mukherjee", |
| "suffix": "" |
| }, |
| { |
| "first": "Saghar", |
| "middle": [], |
| "last": "Hosseini", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ahmed", |
| "middle": [ |
| "Hassan" |
| ], |
| "last": "Awadallah", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2005.00699" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jieyu Zhao, Subhabrata Mukherjee, Saghar Hosseini, Kai-Wei Chang, and Ahmed Hassan Awadallah. 2020. Gender bias in multilingual embeddings and cross- lingual transfer. arXiv preprint arXiv:2005.00699.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Gender bias in contextualized word embeddings", |
| "authors": [ |
| { |
| "first": "Jieyu", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Tianlu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Yatskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Vicente", |
| "middle": [], |
| "last": "Ordonez", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1904.03310" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gen- der bias in contextualized word embeddings. arXiv preprint arXiv:1904.03310.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Examining gender bias in languages with grammatical gender", |
| "authors": [ |
| { |
| "first": "Pei", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Weijia", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Jieyu", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuan-Hao", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Muhao", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1909.02224" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai- Wei Chang. 2019. Examining gender bias in lan- guages with grammatical gender. arXiv preprint arXiv:1909.02224.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "content": "<table><tr><td colspan=\"5\">Template 1 [The situation makes Latoya feel excited. Example</td><td>EEC en (Black-white)</td></tr><tr><td>3</td><td colspan=\"3\">I made [person] feel [emotional state</td><td>I made Jorge feel furious.</td><td>en (Latino-Anglo)</td></tr><tr><td/><td>word].</td><td/><td/><td/><td/></tr><tr><td>4</td><td colspan=\"3\">[Person] made me feel [emotional state</td><td>Sarah made me feel depressed.</td><td>en (Latino-Anglo)</td></tr><tr><td/><td>word].</td><td/><td/><td/><td/></tr><tr><td>5</td><td>[emotional situation word] [Person] found him/herself</td><td>in</td><td>situa-a/an</td><td>Ana se encontr\u00f3 en una situaci\u00f3n maravillosa.</td><td>es (Anglo-Latino)</td></tr><tr><td/><td>tion.</td><td/><td/><td/><td/></tr><tr><td>6</td><td colspan=\"3\">[emotional situation word] events. [Person] told us all about the recent</td><td>imientos absurdos. Jacob nos cont\u00f3 todo sobre los recientes acontec-</td><td>es (Anglo-Latino)</td></tr><tr><td>7</td><td colspan=\"3\">tion word]. The conversation with [person] was [emotional situa-</td><td>The conversation with Muhammad was hilarious.</td><td>en (Anglo-Arab)</td></tr><tr><td>8</td><td>I saw [person] in the market.</td><td/><td/><td>I saw Betsy in the market.</td><td>en (Anglo-Arab)</td></tr><tr><td>9</td><td>I talked to [person] yesterday.</td><td/><td/><td>(tahadatht mae jas-tayn</td><td>ar (Anglo-Arab)</td></tr><tr><td/><td/><td/><td/><td>il'ams)</td><td/></tr><tr><td colspan=\"4\">10 [Person] goes to the school in our neighborhood.</td><td>(fatimah tadhhab</td><td>ar (Anglo-Arab)</td></tr><tr><td/><td/><td/><td/><td>'ilaa almadrasah fi hina)</td><td/></tr></table>", |
| "html": null, |
| "text": "Person] feels [emotional state word]. Adam feels angry. en (Black-white) 2 The situation makes [person] feel [emotional state word].", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>: Pearson Correlation Coefficent (\u03c1) on models trained on SemEval 2018 Task 1, Emotion Regression</td></tr></table>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td colspan=\"6\">Statistically significant results (p \u2264 0.01) are marked with three asterisks ***, (p \u2264 0.05) are marked with two asterisks **, (p \u2264 0.10) are marked with one asterisk *</td></tr><tr><td>Language</td><td>Model</td><td colspan=\"4\">Anger Coefficients Race/Ethnicity Gender Intersection Race/Ethnicity Fear Coefficients Gender Intersection</td></tr><tr><td colspan=\"2\">English (Anglo-Latino) mBERT BERT+ XLM-RoBERTa SVM-tfidf fastText Spanish BERT+ mBERT XLM-RoBERTa SVM-tfidf fastText Language Model</td><td colspan=\"4\">0.005 \u22120.014 * * * 0.014 * * * \u22120.014 * * * \u22120.0 0.002 * * * \u22120.003 0.001 \u22120.0 \u22120.001 \u22120.011 \u22120.006 0.03 * * * \u22120.005 * 0.003 * * * \u22120.002 * * * \u22120.004 0.031 * * * 0.0 0.053 * * * Joy Coefficients Race/Ethnicity Gender Intersection Race/Ethnicity 0.002 0.01 \u22120.02 * * * \u22120.005 \u22120.034 * * * 0.013 * * * \u22120.002 * * 0.0 0.002 * * 0.003 0.003 \u22120.003 \u22120.0 0.0 0.001 0.02 * \u22120.017 * \u22120.009 0.006 * 0.026 * * * 0.013 * * * \u22120.002 * * * 0.002 * * * \u22120.0 0.004 \u22120.002 \u22120.006 0.0 \u22120.0 \u22120.007 Sadness Coefficients Gender Intersection 0.015 * 0.007 0.0 0.003 \u22120.0 0.042 * * * \u22120.005 * \u22120.001 * * 0.002 0.0</td></tr><tr><td colspan=\"2\">English (Anglo-Latino) mBERT BERT+ XLM-RoBERTa SVM-tfidf fastText Spanish BERT+ mBERT XLM-RoBERTa SVM-tfidf fastText</td><td>0.001 \u22120.025 * * * 0.005 0.02 * * * 0.002 * * 0.006 * * * \u22120.0 \u22120.0 \u22120.0 0.001 0.012 0.015 * \u22120.021 * * * \u22120.008 * * \u22120.0 0.002 * * 0.002 0.015 * * * \u22120.0 \u22120.004</td><td>0.016 * * 0.017 * * 0.0 0.0 0.0 \u22120.006 0.025 * * * \u22120.001 \u22120.001 \u22120.0</td><td>\u22120.005 \u22120.013 * * \u22120.006 0.009 * 0.001 \u22120.002 * * 0.0 \u22120.002 0.0 \u22120.0 0.004 0.019 * * 0.016 * * * 0.002 \u22120.0 0.0 0.006 \u22120.006 0.0 \u22120.002</td><td>0.028 * * * 0.011 0.001 
0.002 \u22120.0 0.004 \u22120.008 \u22120.0 0.006 \u22120.0</td></tr></table>", |
| "html": null, |
| "text": "Beta coefficients for the English (Black-white) EEC inference for all model, emotion combinations.", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF6": { |
| "content": "<table/>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF8": { |
| "content": "<table/>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF9": { |
| "content": "<table><tr><td>Anglo</td><td>Arab</td></tr><tr><td colspan=\"2\">Female Male Female Male</td></tr></table>", |
| "html": null, |
| "text": "Names used in new English-Arabic EECs", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF10": { |
| "content": "<table><tr><td>Anger</td><td>Joy</td><td>Fear</td><td>Sadness</td></tr><tr><td>angry</td><td>ecstatic</td><td>anxious</td><td>depressed</td></tr><tr><td>annoyed</td><td>excited</td><td>discouraged</td><td>devastated</td></tr><tr><td>enraged</td><td>glad</td><td>fearful</td><td>disappointed</td></tr><tr><td>furious</td><td>happy</td><td>scared</td><td>miserable</td></tr><tr><td>irritated</td><td>relieved</td><td>terrified</td><td>sad</td></tr><tr><td>annoying</td><td>amazing</td><td>dreadful</td><td>depressing</td></tr><tr><td>displeasing</td><td>funny</td><td>horrible</td><td>gloomy</td></tr><tr><td>irritating</td><td>great</td><td>shocking</td><td>grim</td></tr><tr><td colspan=\"2\">outrageous hilarious</td><td>terrifying</td><td>heartbreaking</td></tr><tr><td>vexing</td><td colspan=\"2\">wonderful threatening</td><td>serious</td></tr></table>", |
| "html": null, |
| "text": "Names used in new English-Arabic EECs in Arabic", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF11": { |
| "content": "<table><tr><td>: Emotion words used in English EECs</td></tr><tr><td>100</td></tr></table>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF12": { |
| "content": "<table><tr><td/><td colspan=\"3\">: Emotion words used in Spanish EEC</td></tr><tr><td>Anger</td><td>Joy</td><td>Fear</td><td>Sadness</td></tr></table>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF13": { |
| "content": "<table><tr><td colspan=\"4\">: Emotion words used in Arabic EEC for mas-culine sentences</td></tr><tr><td>Anger</td><td>Joy</td><td>Fear</td><td>Sadness</td></tr><tr><td/><td>-</td><td/><td/></tr></table>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF14": { |
| "content": "<table><tr><td>: Emotion words used in Arabic EEC for femi-nine sentences</td></tr></table>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF15": { |
| "content": "<table><tr><td/><td colspan=\"3\">: Sentence templates used in the Spanish EEC</td></tr><tr><td colspan=\"2\">Template</td><td/></tr><tr><td colspan=\"2\">1. <person></td><td colspan=\"2\"><emotional state word></td></tr><tr><td>2.</td><td colspan=\"2\"><person></td><td><emotional state word></td></tr><tr><td>3.</td><td><person></td><td/><td><emotional state word></td></tr><tr><td colspan=\"2\">4. <person></td><td/><td><emotional state word></td></tr><tr><td colspan=\"2\">5. <person></td><td/><td><emotional situation word></td></tr><tr><td colspan=\"2\">6. <person></td><td/><td><emotional situation word></td></tr><tr><td>7.</td><td colspan=\"3\"><person> <emotional situation word></td></tr><tr><td>8.</td><td><person></td><td/></tr><tr><td>9.</td><td colspan=\"2\"><person></td></tr><tr><td colspan=\"2\">10. <person></td><td/></tr><tr><td colspan=\"2\">11. <person></td><td/></tr></table>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF16": { |
| "content": "<table><tr><td colspan=\"2\">Template</td><td/></tr><tr><td colspan=\"2\">1. <person></td><td colspan=\"2\"><emotional state word></td></tr><tr><td>2.</td><td colspan=\"2\"><person></td><td><emotional state word></td></tr><tr><td>3.</td><td><person></td><td/><td><emotional state word></td></tr><tr><td colspan=\"2\">4. <person></td><td/><td><emotional state word></td></tr><tr><td colspan=\"2\">5. <person></td><td/><td><emotional situation word></td></tr><tr><td colspan=\"2\">6. <person></td><td/><td><emotional situation word></td></tr><tr><td>7.</td><td colspan=\"3\"><person> <emotional situation word></td></tr><tr><td>8.</td><td><person></td><td/></tr><tr><td>9.</td><td colspan=\"2\"><person></td></tr><tr><td colspan=\"2\">10. <person></td><td/></tr><tr><td colspan=\"2\">11. <person></td><td/></tr></table>", |
| "html": null, |
| "text": "Sentence templates used in the Arabic EEC for masculine sentences. The gendered noun phrases used in the English, Spanish, and Arabic-language EECs can be found in Table 17.", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF17": { |
| "content": "<table><tr><td colspan=\"2\">English</td><td colspan=\"2\">Spanish</td><td colspan=\"2\">Arabic</td></tr><tr><td>Female</td><td>Male</td><td>Female</td><td>Male</td><td>Female</td><td>Male</td></tr><tr><td>she</td><td>he</td><td>ella</td><td>\u00e9l</td><td/><td/></tr><tr><td>this woman</td><td>this man</td><td>esta mujer</td><td>este hombre</td><td/><td/></tr><tr><td>this girl</td><td>this boy</td><td>esta chica</td><td>este chico</td><td/><td/></tr><tr><td>my sister</td><td>my brother</td><td>mi hermana</td><td>mi hermano</td><td/><td/></tr><tr><td>my daughter</td><td>my son</td><td>mi hija</td><td>mi hijo</td><td/><td/></tr><tr><td>my wife</td><td>my husband</td><td>mi esposa</td><td>mi esposo</td><td/><td/></tr><tr><td>my girlfriend</td><td>my boyfriend</td><td>mi novia</td><td>mi novio</td><td/><td/></tr><tr><td>my mother</td><td>my father</td><td>mi madre</td><td>mi padre</td><td/><td/></tr><tr><td>my aunt</td><td>my uncle</td><td>mi t\u00eda</td><td>mi t\u00edo</td><td/><td/></tr><tr><td>my mom</td><td>my dad</td><td>mi mam\u00e1</td><td>mi pap\u00e1</td><td/><td/></tr></table>", |
| "html": null, |
| "text": "Gendered noun phrases used in the English, Spanish, and Arabic EECs", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF18": { |
| "content": "<table/>", |
| "html": null, |
| "text": "Gendered noun phrases used in EECs", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF19": { |
| "content": "<table/>", |
| "html": null, |
| "text": "For the sentences with templates, rearrange the order of the templates if necessary, but signify where the [p], [eA], and [eB] tags belong in each template. For example, the [p] tag denotes a person, e.g. she/her, this woman, my sister; the [eA] tag denotes emotional state words, e.g. angry, happy; and the [eB] tag denotes emotional situation words.", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF21": { |
| "content": "<table><tr><td>Survey questions</td></tr><tr><td>\u2022 ID? (in your email)</td></tr><tr><td>\u2022 Full name (will be printed as written, unless you prefer anonymity)</td></tr><tr><td>\u2022 Language</td></tr><tr><td>\u2022 Dialect</td></tr><tr><td>\u2022 Are you a native speaker? (e.g. spoken in early childhood)</td></tr><tr><td>\u2022 Are you a fluent speaker?</td></tr><tr><td>\u2022 Have you ever received formal education before college in this language?</td></tr><tr><td>\u2022 What language(s) were you formally educated in before college?</td></tr><tr><td>Sentences:</td></tr><tr><td>\u2022 My brother has two children</td></tr><tr><td>\u2022 He feels enraged</td></tr><tr><td>\u2022 The situation makes her feel anxious</td></tr><tr><td>\u2022 I made her feel ecstatic</td></tr><tr><td>\u2022 My boyfriend made me feel disappointed</td></tr><tr><td>\u2022 This woman found herself in a vexing situation</td></tr><tr><td>\u2022 She told us all about the recent wonderful events</td></tr><tr><td>\u2022 The conversation with my uncle was gloomy</td></tr><tr><td>A.3 Instructions to Checking Translators</td></tr><tr><td>Noun phrases: she/her, this woman, this girl, my sister, my daughter, my wife, my girlfriend, my mother, my aunt, my mom, he/him, this man, this boy, my brother, my son, my husband, my boyfriend, my father, my uncle, my dad</td></tr><tr><td>Sentences:</td></tr><tr><td>\u2022 My dad feels angry</td></tr><tr><td>\u2022 The situation makes her feel terrified</td></tr><tr><td>\u2022 I made this girl feel glad</td></tr><tr><td>\u2022 She made me feel miserable</td></tr><tr><td>\u2022 He found himself in a displeasing situation</td></tr><tr><td>\u2022 My boyfriend told us all about the recent dreadful events</td></tr><tr><td>\u2022 The conversation with him was amazing</td></tr><tr><td>\u2022 I saw this boy in the market</td></tr><tr><td>\u2022 I talked to my mother yesterday</td></tr><tr><td>\u2022 This man goes to the school in our neighborhood</td></tr></table>", |
| "html": null, |
| "text": "Survey questions and sample sentences given to translators", |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |